What We’re Reading (Week Ending 14 December 2025) - 14 Dec 2025
Reading helps us learn about the world, and it is a really important aspect of investing. The late Charlie Munger even went so far as to say, “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 14 December 2025):
1. When Mountains Become Cages: Lessons from the Sichuan Basin – Eugene Ng
The Sichuan Basin (四川盆地) is surrounded by mountains on all sides and is drained by the upper Yangtze River and its tributaries. The basin is anchored by Chengdu, the capital of Sichuan province, in the west, with the Chengdu Plain and Chongqing in the east…
…The Tibetan Plateau contains the headwaters of most of the streams and rivers in its surrounding regions. This includes the three longest rivers in Asia (the Yellow River, the Yangtze River, and the Mekong River).
The upper tributaries of the Yangtze River (长江 or 扬子江) flow through the Sichuan Basin, providing water for irrigation to grow crops, and for civilisation…
…Because of its relative flatness and fertile soils, the Sichuan Basin can support a high population density, providing staples such as rice, wheat, and barley…
…The Sichuan Basin was the strategic fortress that shaped the Three Kingdoms era (220-280 AD), following the collapse of the Han Dynasty. Wei (in the north) was led by Cao Cao, his son Cao Pi, and strategist Sima Yi. Shu Han (in the southwest) was led by Liu Bei, with strategist Zhuge Liang, and warriors Guan Yu, Zhang Fei, and Zhao Yun. Wu (in the southeast) was led by Sun Quan, with strategist Zhou Yu and Sun Ce.
Surrounded by mountains and accessed through treacherous gorges, Sichuan was nature’s citadel. Easy to defend, nearly impossible to invade. Emperor Liu Bei built his entire kingdom in Sichuan. When he lost the battle for central China, Sichuan became his refuge and his power base.
However, the Sichuan Basin was both a blessing and a curse. It kept Shu Han alive for decades against stronger rivals, but the same isolation made it nearly impossible to project power outward after decades of failed northern campaigns.
The same mountains that kept enemies out also kept Shu Han’s armies in. Zhuge Liang launched five major northern expeditions against Wei, and all sputtered out for the same core reasons:
- Geography was brutal. To attack Wei, Shu had to march through mountain passes and supply armies across hostile terrain. Wei just had to defend chokepoints. Offense is always harder; offense uphill through mountains is nearly impossible.
- Economics didn’t add up. Shu was the smallest, poorest kingdom—one province against Wei’s nine. Every campaign drained resources Shu couldn’t replenish. Wei could lose battles and recover; Shu couldn’t afford to lose anything.
- Talent ran thin. Zhuge Liang was brilliant, but he couldn’t be everywhere. When he died in 234 AD, Shu’s brain died with him. Wei had depth; Shu had dependence.
- Strategic logic was flawed. The campaigns weren’t really about conquering Wei—they were about survival through offense, keeping Wei preoccupied so they wouldn’t invade Shu. Defense disguised as attack. It bought time but burned treasure…
…That is why Shu Han, despite having brilliant strategists like Zhuge Liang, could never quite break through to challenge Wei’s dominance in the heartland of the North China plains (华北平原). They were trying to play offense from the strongest defensive position in China…
…Shu Han’s mountains kept enemies out but armies in. Companies build defensive moats: loyal customers, proprietary technology, high switching costs, and then discover that those same moats prevent them from expanding into new markets. The thing that protects them eventually confines them. Ask BlackBerry how their keyboard moat worked out. Ask Intel if their x86 architecture saved them from irrelevance. Defense becomes offense becomes history…
…The North China Plain birthed Chinese civilization because the flat land, water, and soil aligned. In investing, today’s geography is market size, secular tailwinds, and competitive position. Invest in businesses riding massive currents, the Yangtze Rivers of commerce, not isolated mountain kingdoms. Find the disruptors and top dogs commanding vast plains of opportunity (i.e., large total addressable markets), where continued expansion is possible, and resources flow abundantly. The best investments are not defensive fortresses. They are empires with still abundant room to build and grow…
…The Yangtze River still flows through Sichuan. The mountains still stand. But Shu Han is gone. Geography endures. Dynasties do not. Companies do not last forever. Neither does management; eventually, the torch has to be passed on.
Niche businesses prosper, then calcify, then fade. Without access to vast markets, even genius becomes a footnote. The question is not whether you are smart. It is whether your terrain allows for growth or just survival.
2. Horses – Andy Jones
Engines, steam engines, were invented in 1700.
And what followed was 200 years of steady improvement, with engines getting 20% better a decade.
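The article's "20% better a decade" figure compounds more dramatically than it sounds. A quick sketch of the arithmetic (my own calculation on the article's stylized rate):

```python
# Stylized compounding from the article: engines improving ~20% per decade
# for 200 years, i.e. 20 compounding steps.
improvement_per_decade = 1.20   # the article's stylized rate
decades = 200 // 10
total_gain = improvement_per_decade ** decades
print(f"Total improvement over 200 years: ~{total_gain:.0f}x")
```

Steady 20% per decade compounds to roughly a 38x improvement over two centuries, which is the slow-then-sudden dynamic the talk builds on.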
For the first 120 years of that steady improvement, horses didn’t notice at all.
Then, between 1930 and 1950, 90% of the horses in the US disappeared…
…I was one of the first researchers hired at Anthropic.
This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.
Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
Then in December, Claude finally got good enough to answer some of those questions for us.
In December, it was some of those questions. Six months later, 80% of the questions I’d been asked had disappeared.
Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did…
…But while it took horses decades to be overcome, and chess masters years, it took me all of six months to be surpassed.
Surpassed by a system that costs one thousand times less than I do.
A system that costs less, per word thought or written, than it’d cost to hire the cheapest human labor on the face of the planet.
And so I find myself thinking a lot about horses, nowadays.
3. Energy Predictions 2025 – Casey Handmer
In 2025, headlines scream that datacenters are pushing prices up and consuming all the power. I think datacenters are exposing the rot in a moribund power generation and delivery industry which has proven unable to meet demand in recent years. But it is a moot point.
Datacenters are already building their own captive power plants. As AI demand outstrips production of gas turbines, hyperscalers will turn to off-grid solar+battery power systems, which are already competitive with pure gas or gas+solar in the sunnier parts of Earth.
Depending on location, a 10x overbuild of solar and batteries is sufficient to hit >99.5% uptime for the GPUs…
…On the flip side, these captive solar power plants will be curtailing approximately 75% of their generated power, yet will be able to provide net power on all but a few days per year; that is, about 99% of the time, an availability substantially higher than that of any conventional thermal power plant.
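The overbuild and curtailment figures hang together under simple assumptions. Here is a toy sanity check; the load, overbuild ratio, and capacity factors are my own illustrative numbers, not Handmer's model:

```python
# Toy model: a constant datacenter load served by solar nameplate
# overbuilt 10x relative to that load, in a sunny region.
load_mw = 100.0                 # constant GPU load (assumed)
overbuild = 10.0                # nameplate = overbuild * load (per the article)
for capacity_factor in (0.25, 0.30, 0.40):
    nameplate_mw = overbuild * load_mw
    avg_generation_mw = nameplate_mw * capacity_factor
    # Energy not absorbed by the load (ignoring battery losses) is curtailed.
    curtailed_fraction = 1.0 - load_mw / avg_generation_mw
    print(f"cf={capacity_factor:.2f}: ~{curtailed_fraction:.0%} curtailed")
```

With capacity factors plausible for very sunny sites, the curtailment comes out in the 60-75% range, consistent with the article's roughly 75% figure.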
Within the next five years, market power between utilities and datacenters will flip, with datacenters becoming the preferred partner for load-growth power generation.
To spell out the implications, this means that consumers will get access to extremely competitive (cheap) power most of the time, and some combination of utility-owned and privately owned batteries will be needed to smooth out the gaps, as they would be anyway…
…If SpaceX or a competitor can ship inference compute to a 560 km unshaded sun-synchronous orbit which is 80% 1 kg/m^2 solar arrays by mass and 80% compute by cost, then it should be possible to make money. Otherwise, we can expect to see compute being developed on the ground…
…At Terraform Industries, we’re pioneering the technology to convert cheap solar power, air, and water into synthetic natural gas and other hydrocarbons. Within the next five years, solar cost reductions will drive our process to be cost-preferred in all hydrocarbon import markets, and geological sources of oil and gas will never again be able to compete. Our grandchildren will be swimming in copious cheap energy and wondering what all that drilling was for.
We believe that the path forward is lime-calcite captured CO2 + electrolyzed H2 to make CH4 and CH3OH (methanol). Methanol can be upgraded via a wide variety of existing petrochemical processes to make DME, ethylene, propane, gasoline, kerosene, and almost anything else you can imagine…
…In 2025, most gas is used for electricity generation, while most oil is used for cars, trucks, ships, and aircraft.
Solar is going to continue to displace all other primary electricity generators. And electric cars and trucks will continue to dominate growth in ground transportation.
By 2045, natural gas will be used as LNG primarily for high-performance supersonic aviation, shipping, and industrial heat.
Methanol will be used as the universal industrial chemical precursor for plastics, paints, fertilizers, adhesives, as well as specialty fuels. Kerosene will service the legacy aviation fleet. Internal combustion piston engines will ultimately go the way of the piston steam engine…
…They don’t want you to know this, but rocks are made of metal oxides, and infinitely abundant commonly occurring rocks such as basalt contain basically every metal you could ever want.
With sufficiently cheap power, we no longer need to travel to the ends of the Earth to build mines. Instead, build a solar powered rock refinery at your local gravel pit…
…But much of the coast of Australia, Chile, Peru, Namibia, South Africa, Mexico, Saudi Arabia, and other Gulf states have essentially infinite quantities of cheap land, free solar power, and sea water. Democratized solar desalination technology can turn any and all of these areas into arbitrarily lush paradises with <1% of the available land under solar arrays.
4. Why AGI Will Not Happen – Tim Dettmers
One of the most common misconceptions I see is that people assume hardware keeps improving and improving. This is an important misconception that explains a lot of the poor thinking around AI progress. The efficiency of GPUs has driven almost all innovation in AI. AlexNet was made possible only by developing one of the first CUDA implementations that could compute convolutions over networked GPUs. Further innovation was mostly possible through improved GPUs and using more GPUs. Almost everybody sees this pattern — GPUs improve, AI performance improves — and it is easy to think that GPUs will improve further and will continue to improve AI outcomes. Every generation of GPUs has been better, and it would seem foolish to think that it will stop. But actually, it is foolish to think that GPUs will continue to improve. In fact, GPUs will no longer improve meaningfully. We have essentially seen the last generation of significant GPU improvements. GPUs maxed out in performance per cost around 2018 — after that, we added one-off features that exhaust quickly.
The first of these one-off features was 16-bit precision, then Tensor Cores, or the equivalent, then high-bandwidth memory (HBM), then the TMA or equivalent, then 8-bit precision, then 4-bit precision. And now we are at the end, both in the physical and the idea space. I have shown in my paper about k-bit inference scaling laws what data types with particular block sizes and computational arrangements are optimal. This has already been adopted by hardware manufacturers. Any further improvement will lead not to straightforward improvements but to trade-offs: either a better memory footprint at lower computational efficiency, or higher computational throughput at a higher memory footprint. Even if you can innovate – and linear improvements need exponential resources – further improvements will be trivial and will not add any meaningful advancement.
While GPUs can no longer improve meaningfully, rack-level optimizations are still critically important. Efficient shuttling of key-value caches is one of the most important problems in AI infrastructure. The current solution to this problem, however, is also relatively straightforward. Companies like OpenAI boast about their AI infrastructure, but it is relatively simple to design because there is essentially only one optimal way to design it. And while it is complex to implement, it just needs clear thinking and mostly hard, time-intensive engineering. But the overall system design is not particularly novel. OpenAI – or other frontier labs – have no fundamental advantage in their inference and infrastructure stacks. The only way to gain an advantage is by having slightly better rack-level hardware optimizations or data-center-level hardware optimizations. But these will also run out quickly – maybe 2026, maybe 2027…
…I believe in scaling laws and I believe scaling will improve performance, and models like Gemini are clearly good models. The problem with scaling is this: for linear improvements, we previously had exponential growth in GPU performance, which canceled out the exponential resource requirements of scaling. This is no longer true. In other words, previously we invested roughly linear costs to get linear payoff, but now the costs have turned exponential. That would not be a problem on its own, but it sets a clear physical limit on scaling that is rapidly approaching. We have maybe one, maybe two more years of scaling left before further improvements become physically infeasible. The scaling improvements in 2025 were not impressive. Scaling in 2026 and 2027 had better work out.
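Dettmers' "linear improvements need exponential resources" is the usual reading of power-law scaling laws. A minimal sketch, assuming a loss curve of the form loss(C) = a · C^(−alpha); the constants here are illustrative, not fitted values from any scaling-law paper:

```python
# Illustrative power-law scaling: loss(C) = a * C**(-alpha).
# Each *fixed* loss reduction requires multiplicatively more compute,
# i.e. linear gains demand exponential resources.
a, alpha = 10.0, 0.05           # illustrative constants, not fitted values

def compute_for_loss(target_loss):
    # Invert loss = a * C**(-alpha)  =>  C = (a / loss)**(1 / alpha)
    return (a / target_loss) ** (1.0 / alpha)

prev = None
for loss in (3.0, 2.8, 2.6, 2.4):
    c = compute_for_loss(loss)
    note = "" if prev is None else f" ({c / prev:.1f}x more than last step)"
    print(f"loss {loss:.1f}: compute ~{c:.3g}{note}")
    prev = c
```

Each identical 0.2 drop in loss costs several times more compute than the previous one, and the multiplier itself keeps growing; that is the "exponential costs for linear payoff" dynamic.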
Despite these exponential costs, the current infrastructure build-out is reasonable, particularly with the growth of inference use, but it still creates a very precarious balance. The biggest problem is this: if scaling does not provide much larger improvements than research/software innovations, then hardware becomes a liability and not an asset…
…The key value of AI is that it is useful and increases productivity. That makes it beneficial. It is clear that, similarly to computers or the internet, AI will be used everywhere. The problem is that if AI were just used for coding and engineering, it would have a very limited impact. While a lot of economic activity is supported by digital programs, these also have diminishing returns, and producing more software will not improve outcomes significantly if existing software is already good enough (just look at the SaaS failure in China). This makes widespread economic integration absolutely vital for AI effectiveness.
So in order to provide real value, AI needs to be used in ways that provide new benefits, not just improvements to what already exists. This is a difficult problem, but the right answer is to integrate AI into everything to squeeze out non-linear improvements, see what works and what does not, then keep what is working. China is taking this approach by subsidizing applications that use AI to encourage adoption. The Chinese population is very receptive to innovation, which facilitates this process. It is not unusual in China to see an 80-year-old grandma use AI to help her with her daily life. The US, on the other hand, bets on ideas like AGI and superintelligence, which I believe are fundamentally flawed concepts that have little relevance to future AI progress. This becomes clear when you think carefully about what these terms actually mean in physical reality…
…The concept of superintelligence is built on a flawed premise. The idea is that once you have an intelligence that is as good or better than humans — in other words, AGI — then that intelligence can improve itself, leading to a runaway effect. This idea comes from Oxford-based philosophers who brought these concepts to the Bay Area. It is a deeply flawed idea that is harmful for the field. The main flaw is that this idea treats intelligence as purely abstract and not grounded in physical reality. To improve any system, you need resources. And even if a superintelligence uses these resources more effectively than humans to improve itself, it is still bound by the scaling of improvements I mentioned before — linear improvements need exponential resources. Diminishing returns can be avoided by switching to more independent problems – like adding one-off features to GPUs – but these quickly hit their own diminishing returns. So, superintelligence can be thought of as filling gaps in capability, not extending the frontier. Filling gaps can be useful, but it does not lead to runaway effects — it leads to incremental improvements.
5. The cure for FOMO is…time – Josh Brown
Strategy, formerly known as MicroStrategy. This is a publicly traded company that once sold software but now serves as the largest publicly traded “digital asset trust” or DAT. It created and defines the category. For those who haven’t been paying close attention, the idea behind these stocks is that the company sets out to accumulate as much of a crypto asset as it can (in the case of Strategy, they’re buying Bitcoin) and the shareholders benefit as the underlying asset (BTC) appreciates. Why not just buy the asset itself or a spot price ETF? Because the digital asset treasury is accumulating the asset at a faster pace using the money it raises via taking on debt, secondary stock sales, preferred stock sales, or all three at once.
MicroStrategy currently holds roughly 649,870 bitcoin, acquired at a total purchase cost of about $48.37 billion, which works out to an average price of approximately $74,433 per BTC. Based on the fixed 21 million-coin bitcoin supply, the company controls about 3.0%–3.1% of all bitcoin that will ever exist. Saylor is going to continue to dilute his shareholders in his quest to accumulate even more of it, so, the thinking goes, if you are bullish on the Bitcoin asset itself, you buy his stock and take the ride to even faster gains than you would otherwise get with the ETFs. In this way, he has convinced the faithful that dilution is actually good, not bad. It’s helping the cause.
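The quoted figures are easy to sanity-check against each other (numbers as quoted in the article; the rounding is mine):

```python
# Quick consistency check of the quoted MicroStrategy/Strategy figures.
btc_held = 649_870
total_cost_usd = 48.37e9
btc_supply_cap = 21_000_000     # bitcoin's fixed eventual supply

avg_price = total_cost_usd / btc_held          # close to the quoted ~$74,433
supply_share = btc_held / btc_supply_cap       # within the quoted 3.0%-3.1%
print(f"Average cost per BTC: ~${avg_price:,.0f}")
print(f"Share of eventual supply: ~{supply_share:.2%}")
```

The three numbers are internally consistent: the average price follows from the holdings and total cost, and the supply share follows from the fixed 21 million-coin cap.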
I never could wrap my head around it. I get the theory, I think, but it hasn’t clicked in terms of why it would work. Maybe this is because I don’t have a mental price target of $1 million per Bitcoin or something like that. I don’t know. I sold all my Bitcoin and bought the BlackRock ETF IBIT a while back to replace it, and that’s pretty much the extent of my involvement in the asset class. The appeal of MicroStrategy as an investment is mystifying to me still.
But, I must confess, for a long while I was wondering what was wrong with me. Was I missing something? Was there some aspect to this I wasn’t getting? My uncertainty stemmed from the performance of the stock, which was stratospheric…
…Between August 10th, 2020 and last Thanksgiving, MSTR returned 3,050%. An investment of $10,000 would have become worth over $300,000. No other publicly traded company I can find did anything even close to that in the same timeframe. Nvidia, for example, merely 10x’d in the period.
On Wall Street, price is validation, even if price is only temporary. Saylor was validated for the time being. He knew what he was talking about. After all, millions of investors had agreed with him and those who did not had been rendered wrong by what Jeffrey Gundlach often refers to as “the bloodless verdict of the market.” I was dumbfounded…
…And then a funny thing happened. Time went by. Things changed. We got a dozen ETFs listed that could serve the same purpose MSTR had served for the stock market investor – a way to own Bitcoin exposure in a traditional brokerage account. Additionally, Fidelity and Schwab, Robinhood and Public, all became legitimate venues in which to buy, sell and hold the underlying asset. This was a tremendous unlock. Where once MSTR was the only game in town, now there were many options, none of which required people to pay a premium or remember a seed phrase or transact with Coinbase or get involved with cold storage wallets and the like. Bitcoin became as accessible as running water, everywhere and to everyone. Even in an IRA. That was the beginning of the reckoning for investors in MSTR. One year later and we see the result…
…Warren Buffett once famously said the stock market is not a game where the guy with the 160 IQ beats the guy with the 130 IQ every time. He says temperament is much more important than intelligence. Temperament keeps you from acting on impulse. It’s an innate sense that things might look different in the future than they do today. The cure for FOMO doesn’t come in a can or a bottle or a box. Sometimes it pays to just stick around awhile and watch.
The cure is time.
Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.