What We’re Reading (Week Ending 04 January 2026) - 04 Jan 2026
Reading helps us learn about the world and it is a really important aspect of investing. The late Charlie Munger even went so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 04 January 2026):
1. Authenticity after abundance – Adam Mosseri
Everything that made creators matter—the ability to be real, to connect, to have a voice that couldn’t be faked—is now suddenly accessible to anyone with the right tools. Deepfakes are getting better and better. AI is generating photographs and videos indistinguishable from captured media. The feeds are starting to fill up with synthetic everything…
…We are now seeing an abundance of AI generated content, and there will be much more content created by AI than captured by traditional means in a few years’ time. We like to talk about “AI slop,” but there is a lot of amazing AI content that thankfully lacks the disturbing properties of twisted limbs and absent physics. Even the quality AI content has a look though: it tends to feel fabricated somehow. The imagery today is too slick, people’s skin is too smooth. That will change; we are going to start to see more and more realistic AI content.
Authenticity is fast becoming a scarce resource, which will in turn drive more demand for creator content, not less. The creators who succeed will be those who figure out how to maintain their authenticity whether or not they adopt new technologies. That’s harder now—not easier—because everyone can simulate authenticity. The bar is going to shift from “can you create?” to “can you make something that only you could create?” That’s the new gate…
…But flattering imagery is cheap to produce and boring to consume. People want content that feels real. We are going to see a significant acceleration of a more raw aesthetic over the next few years. Savvy creators are going to lean into explicitly unproduced and unflattering images of themselves…
…Social media platforms are going to come under increasing pressure to identify and label AI-generated content as such. All the major platforms will do good work identifying AI content, but they will get worse at it over time as AI gets better at imitating reality. There is already a growing number of people who believe, as I do, that it will be more practical to fingerprint real media than fake media. Camera manufacturers could cryptographically sign images at capture, creating a chain of custody…
…In a world of infinite abundance and infinite doubt, the creators who can maintain trust and signal authenticity—by being real, transparent, and consistent—will stand out.
As for Instagram, we’re going to have to evolve in a number of ways, and fast. We need to build the best creative tools, AI-driven and traditional, for creators so that they can compete with content fully created by AI. We need to label AI-generated content clearly, and work with manufacturers to verify authenticity at capture—fingerprinting real media, not just chasing fake. We need to surface credibility signals about who’s posting so people can decide who to trust. And we’re going to need to continue to improve ranking for originality.
2. 2025’s biggest investing lesson – slow down – Chin Hui Leong
Here is the uncomfortable truth about 2025: The year’s biggest wealth destroyer was not tariffs, AI disruption, or interest rate uncertainty.
It was speed…
…At the start of 2025, traders reacted swiftly to every hint about interest rate movements.
A strong jobs report? Sell immediately – fewer rate cuts ahead.
A weak inflation print? Buy before everyone else does.
This behaviour assumed that being first to interpret the data would translate into superior returns.
Let us test that theory with 2024’s track record. Goldman Sachs predicted five rate cuts. We got three.
Traders priced in a 73 per cent chance of a March 2024 cut. The first cut came in September, six months later. The market expected 1.5 percentage points of cuts. We got one.
In other words, the number of cuts was wrong, the timing was off, and the size of the cuts was smaller than expected.
Yet despite these spectacular misses, the S&P 500 rose more than 23 per cent in 2024.
The lesson: You can be completely wrong about interest rates and still do well in the market – if you stay invested.
The investors who traded every data point, trying to front-run the US Federal Reserve, generated fees and anxiety.
The investors who ignored the noise generated returns…
… In his book Your Money and Your Brain, author Jason Zweig explains that our minds recognise patterns even when none exist.
But here is the kicker: We cannot switch this mechanism off at will…
…Consider this: Every major decline in 2025 was accompanied by an avalanche of negative headlines – detailed articles on what went wrong, podcasts dissecting the damage, and social media hot takes piling on.
Amid that onslaught, any good news was buried.
The investors who reacted to the noise sold at lows. The investors who waited for the noise to clear bought those shares from them.
Speed did not protect portfolios. Patience did.
3. Running Out of Runway – Poe Zhao
Last week’s dual IPO filings from Zhipu AI and MiniMax reveal a paradox at the heart of China’s AI model market. Both companies have proven they can build competitive technology. Both have validated their business models at the unit economics level. Both are running out of time…
…Zhipu grew from ¥57 million in 2022 to ¥312 million in 2024, a 130% compound annual growth rate. MiniMax achieved even more dramatic expansion, with revenue surging 782% to $30.5 million in 2024. In the first nine months of 2025, MiniMax generated $53.4 million, already exceeding its full-year 2024 results.
But losses grew faster. Zhipu’s adjusted net loss exploded from ¥97 million in 2022 to ¥2.47 billion in 2024. That’s roughly 25x growth. MiniMax went from $7.37 million in losses in 2022 to $465 million in 2024.
The cash burn is brutal. Zhipu: ¥300 million monthly. MiniMax: ¥2 billion monthly. Zhipu’s mid-2025 reserves stood at ¥2.55 billion. Do the math. Six months later, both companies rushed to file IPOs. The December timing was necessity, not choice…
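The “do the math” step can be made explicit. A minimal sketch in Python, using only the reserve and burn figures quoted above:

```python
# All figures in ¥ millions, as quoted in the excerpt.
zhipu_reserves_mid_2025 = 2_550   # Zhipu's mid-2025 cash reserves
zhipu_monthly_burn = 300          # Zhipu's reported monthly cash burn

# Months of runway remaining at that burn rate.
runway_months = zhipu_reserves_mid_2025 / zhipu_monthly_burn
print(f"Zhipu runway: {runway_months:.1f} months")  # prints "Zhipu runway: 8.5 months"
```

Roughly eight and a half months of cash as of mid-2025, which is consistent with the December IPO filings being “necessity, not choice.”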
…Research and development consumed ¥2.2 billion of Zhipu’s budget in 2024. That’s a 26x increase from the ¥84 million spent in 2022. Within that R&D figure, ¥1.55 billion went directly to compute services. Computing infrastructure alone ate 70% of the entire R&D budget.
MiniMax shows better cost discipline but faces the same fundamental pressure. Training-related cloud computing costs reached $142 million in the first nine months of 2025. The company has managed to improve efficiency. The ratio of training costs to revenue dropped from 1,365% in 2023 to 266% in the first three quarters of 2025. But even at 266%, you’re spending nearly $3 on training for every $1 of revenue.
This creates the first paradox. At the transaction level, these businesses are profitable. Sell an API call or a subscription, you make money. Scale that up, you should make more money. But scaling requires maintaining competitive model quality. Competitive model quality requires constant compute investment. The compute investment grows faster than revenue. The more you sell, the more you lose…
…China’s entire large language model market totaled ¥5.3 billion in 2024, according to Zhipu’s prospectus. Enterprise customers contributed ¥4.7 billion of that. Individual consumers accounted for just ¥600 million.
Do the math. Zhipu burns ¥300 million monthly. MiniMax burns ¥2 billion monthly. Combined, that’s ¥2.3 billion per month. Annualize it and you get ¥27.6 billion. The two companies alone are burning through more than five times the entire current market size annually. And they’re not alone. Multiple other companies compete in the same space…
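Again, the arithmetic can be spelled out. A quick check of the burn-versus-market comparison, using the figures from the excerpt (¥ millions throughout):

```python
# Monthly cash burn, ¥ millions, as quoted above.
zhipu_burn = 300
minimax_burn = 2_000
# China's total LLM market in 2024, ¥ millions (per Zhipu's prospectus).
market_size_2024 = 5_300

combined_annual_burn = (zhipu_burn + minimax_burn) * 12
print(combined_annual_burn)                     # 27600, i.e. ¥27.6 billion
print(combined_annual_burn / market_size_2024)  # ~5.2x the entire 2024 market
```

Two companies alone burning more than five times the current annual market, before counting any of their competitors.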
…Zhipu bet on scale. The company invested heavily in frontier model development. R&D spending jumped from ¥529 million in 2023 to ¥2.2 billion in 2024. Compute infrastructure dominated that budget. The strategy assumes that leading-edge capabilities justify the burn rate. Stay at the frontier, win the highest-value customers, eventually reach economies of scale.
MiniMax took the efficiency route. The company’s prospectus explicitly positions itself as capital-efficient. Cumulative spending from founding through September 2025 totaled approximately $500 million. The prospectus contrasts this with OpenAI’s estimated $40–55 billion in cumulative investment. That’s a 100x cost difference for comparable multimodal capabilities…
…This reveals what makes the situation structural rather than cyclical. Your strategy becomes irrelevant when competitive dynamics dictate behavior. Zhipu chose scale. MiniMax chose efficiency. DeepSeek’s emergence forced both to spend more regardless of their chosen path. In a true market, companies can differentiate on cost, quality, or features. In this market, everyone must match the pace of iteration or become obsolete. The pace keeps accelerating. The costs keep compounding.
4. Why We Worry – Part I – Fawkes Capital
This year alone, Google will spend roughly $60 billion more in annualised capex than it did before ChatGPT launched. Since late 2022, the company has deployed an additional $85 billion in cumulative capex on AI-related development. With similar spending levels expected next year, Google’s capex now exceeds its net profit – a sharp departure from pre-AI years, when capex represented only about 25% of profit…
…What is Google receiving in return for this extraordinary level of investment? At present, Google processes roughly 1.4 quadrillion AI tokens per month. If we make a simplifying assumption and apply Google’s API input pricing across all of those tokens, the result is an additional $21 billion of annualised revenue.
For context, this is not an especially compelling trade-off: $85 billion of incremental capex for $21 billion of low-margin revenue. In effect, Google is deploying vast sums of capital for a modest 5% uplift in annual revenue, at materially lower returns than its core search and advertising franchise generates. That does not sound like a great use of capital to us.
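As a sanity check on those figures, the implied blended price per million tokens (a number we derive from the excerpt’s token count and revenue estimate, not one it quotes) and the capex-to-revenue ratio work out as follows:

```python
tokens_per_month = 1.4e15   # ~1.4 quadrillion AI tokens processed monthly
annual_revenue = 21e9       # $21B of annualised revenue, per the excerpt
cumulative_capex = 85e9     # $85B of incremental AI capex since late 2022

# Implied blended price per million tokens needed to yield $21B/year.
price_per_million_tokens = annual_revenue / (tokens_per_month * 12) * 1e6
print(round(price_per_million_tokens, 2))  # 1.25 (dollars per million tokens)

# Dollars of incremental capex behind each dollar of annualised revenue.
print(round(cumulative_capex / annual_revenue, 1))  # 4.0
```

Roughly $4 of cumulative capex per dollar of annualised, low-margin revenue, before any operating costs.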
And if this is the underlying economic reality for the industry leader, it is difficult to see how outcomes will be more favourable for its competitors. Over time, we doubt that the return on capital employed (ROCE) from datacentres will meaningfully improve from today’s levels…
…If Big Tech and data centre operators collectively spend around $400 billion on AI infrastructure in 2026, then, by our estimate, at least $80 billion in annual net income would need to be generated to justify that investment. The hurdle is high because processors, which make up the bulk of capex, have a useful life of only about five years. Back-solving this requirement implies that something like 333 million paying users of ChatGPT – roughly the entire US population – would be needed to support such economics.
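The back-solve rests on two implicit assumptions, which we read off the excerpt’s figures: the $80 billion hurdle is straight-line depreciation of $400 billion over the stated five-year chip life, and each paying user contributes a ChatGPT Plus-style $20 a month (our assumed price) straight to the bottom line:

```python
capex_2026 = 400e9      # assumed 2026 AI infrastructure spend, per the excerpt
chip_life_years = 5     # useful life of processors, per the excerpt
monthly_price = 20.0    # assumed ChatGPT Plus-style subscription price

# Annual net income needed just to cover straight-line depreciation.
required_income = capex_2026 / chip_life_years
print(required_income / 1e9)  # 80.0 ($ billions)

# Paying users needed if every subscription dollar were pure profit
# (a generous assumption that understates the true requirement).
users_needed = required_income / (monthly_price * 12)
print(f"{users_needed / 1e6:.0f} million users")  # prints "333 million users"
```

Since subscription revenue is not pure profit, the real number of paying users required would be higher still.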
Today, the numbers fall drastically short. Only around 5% of users (about 20 million people) pay for ChatGPT, and both paid and non-paid user growth has begun to stall in recent months. OpenAI’s attempt to introduce advertising as a revenue stream has met fierce consumer resistance. And unlike Google, directing users to websites does not generate economic value for OpenAI. This raises the critical question: how will OpenAI, or any non-advertising-based AI provider, monetise its service at the scale needed?…
…A similar pattern emerged during the late-1990s dot-com bubble. Telecom operators, despite enormous capital outlays, found their services rapidly commoditised. Usage growth slowed, pricing power collapsed, and the industry could not extract the household revenues required to justify the capex binge. High returns initially attracted more competition, which eventually eroded margins for even the leading “pick-and-shovel” equipment suppliers. Investor belief in unassailable competitive advantages proved illusory. Once reality set in, the bubble burst and triggered a shallow recession…
…SemiAnalysis notes that Google’s TPU infrastructure now rivals NVIDIA’s latest commercially available GPUs – at significantly lower cost. Sensing the threat, NVIDIA has resorted to taking equity stakes in companies that depend on its support (OpenAI among them), effectively subsidising its customer base to stave off competition and preserve margins. This is not a sustainable strategy. Amazon’s upcoming Trainium 3 chip has also narrowed the performance gap and is likely to be cost-competitive upon release.
With credible alternatives emerging, NVIDIA’s 75% gross margins – the foundation of its current valuation – will not hold indefinitely. When investors fully appreciate this, and when that realisation intersects with the economic unsustainability of OpenAI’s model, the conditions for a sharp correction may be in place.
5. AI will kill all the lawyers – Sean Thomas
‘Last week we did an experiment, a kind of simulation. We took a real, recent and important case – a complex civil court appeal which I wrote, and it took me a day and a half. We redacted all identifying details, for anonymity and confidentiality, and we fed the same case to Grok Heavy AI. And then we asked it to do what I did. After some prompting, the end result was…’ He shakes his head. ‘Spectacular. Actually staggering. It did it in 30 seconds, and it was much better than mine. And remember, I am very good at this.’
He sits back, wry yet resigned. ‘It was at the level of a truly great KC. The best possible legal document. And all done in seconds for pennies. How can any of us compete? We can’t.’…
…James believes AI will work its way up the legal hierarchy. First the gruntwork, then the drafting, the citation, the argumentation. Eventually the majority of legal jobs will be replaced. ‘Process lawyers are obviously doomed. AI will handle the most complex probate and conveyancing cases in seconds. The most complicated human skill will be,’ he chuckles, sadly, ‘to scan and digitise paper documents. Barristers will make arguments in courtrooms that are drafted by AI, and then people will wonder why they are paying human barristers £200,000, and they too will disappear.’…
…I ask what he thinks this will do to his colleagues – psychologically, economically, emotionally. ‘At first, they will fight, like radicals. A losing battle. There will be attempts to outlaw the use of AI in various legal areas. But it won’t work, the economics will see to that. So lots of people who make a lot of money will, suddenly, not make that money. God knows what that might do to property prices, to politics, to all of us. Because it won’t just be the law.’
Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet (parent of Google), Mastercard, Meta Platforms (parent of Instagram), and Visa. Holdings are subject to change at any time.