What We’re Reading (Week Ending 01 March 2026) - 01 Mar 2026
Reading helps us learn about the world and it is a really important aspect of investing. The late Charlie Munger even went so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 01 March 2026):
1. OpenAI Boosts Revenue Forecasts, Predicts $111 Billion More Cash Burn Through 2030 – Sri Muppidi and Stephanie Palazzolo
As revenues climb, rising computing costs will weigh on OpenAI’s bottom line. Last year, the company burned $8 billion in cash, about $500 million less than it forecast in the summer. However, the company expects to burn $25 billion this year and $57 billion next year, about $30 billion more in total than previously predicted.
The company still expects to turn cash flow positive in 2030, when it expects to generate nearly $40 billion in cash…
…OpenAI has told investors the costs of running its AI models, a process known as inference, quadrupled in 2025. As a result, the company’s adjusted gross margin—defined as revenue minus the costs of inference—fell to 33% from 40% the year prior. That’s lower than the gross margin expectations of 46% it had set for itself for 2025. It’s also below half the 70%-plus gross margins of best-in-class software companies.
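The margin definition above can be sketched as a one-line calculation. The inference figure below is an assumption, chosen only to be consistent with the article's "more than $8 billion" and the stated 33%:

```python
def adjusted_gross_margin(revenue, inference_cost):
    """Adjusted gross margin as the article defines it:
    (revenue - inference costs) / revenue."""
    return (revenue - inference_cost) / revenue

revenue = 13.1    # 2025 revenue in $ billions, per the article
inference = 8.8   # assumed inference cost, consistent with "more than $8 billion"

margin = adjusted_gross_margin(revenue, inference)
print(f"{margin:.0%}")  # 33%
```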
OpenAI has lowered its gross margin forecasts for the next five years, as its inference costs increase. In that period, the measure will range between 52% and 67%, according to the forecasts; previously the company had expected margins to hit 70% by 2029…
…OpenAI’s revenue more than tripled last year to $13.1 billion, $100 million more than its prior projection.
The new forecasts show OpenAI now expects revenue to rise to $30 billion this year and about $62 billion next year, slightly higher than prior forecasts, with its ChatGPT consumer business the largest driver…
…Last year, OpenAI spent more than $8 billion on the costs of running its AI models for its users, with roughly $4.5 billion on inference for paying users. Its inference costs are expected to rise to roughly $14 billion this year and $26 billion next year, or about $8 billion more in total than was earlier predicted.
The company expects to spend even more on computing costs to train its models. Last year, OpenAI spent $8.3 billion, about a billion less than it expected from its summer forecast. It plans to increase its training costs to $32 billion this year and $65 billion next year, or about $44 billion more than previously expected. These training costs add up, totaling nearly $440 billion through 2030.
2. Software bear case push back (and the real risk that I see) – Drew Cohen
There is a lot of talk of the competitive pressures on SaaS companies, but what about the AI model businesses?
I think the key thing to remember is that the AI models have their own competition and they are all fighting for market share right now.
Partnering with existing incumbents is an easy way for them to win distribution…
…Users of these SaaS companies are already becoming a source of revenue for the AI companies. This greatly reduces the benefit of creating a product AND business support to specifically go after each vertical…
…I think in some specific cases there is a risk of internal IT departments creating their own software, but I don’t think that will be standard practice. We are already seeing that the AI companies themselves all use a variety of software vendors…
…The transition over the past decade has been for companies to outsource server maintenance to the cloud because they can’t run it as efficiently or introduce new features as quickly. It doesn’t make sense for them to run this internally, just as they often outsource facility maintenance. Unless a business has a benefit from maintaining its own software (which I can’t see), it will want to outsource this…
…I think what AI really does is allow software companies to enter new verticals adjacent to theirs, which increases competition—I don’t think the competition is going to come directly from the AI companies though.
This is similar to the newspaper industry 20 years ago. The increase in competition didn’t come from “the internet”, but rather what the internet enabled, which was many new ways to get news.
The other risk is pricing pressure and the seat model collapsing. I think as long as the value these companies give their customers is as good as, or better than, before, they will be able to manage this transition.
3. A Level Headed Look at State of Software – DB
Business software as an industry is small in China and India because labor is a direct competitor to packaged software. Historically, in these lower-cost labor markets with exceptional technical talent, DIY has been the go-to solution. Most western company leaders would be shocked to find that technically savvy Asian tech companies are able to build in-house not only their own business applications, but even databases, BI, and infrastructure technology.
Is this the direction the world is headed? When token costs decrease 100x, it’s tempting to think that the math becomes:

Cost of tokens to generate code + 2 SWEs < annual cost of a CRM license
But in reality, the decision is a trade-off of management bandwidth. If a vendor CRM breaks, a customer can expect an SLA for it to be brought back up. If there is a security vulnerability, that’s the vendor’s responsibility. In fact, the extreme examples of DIY are only found in the most sophisticated technology companies in Asia. I fully expect AI Labs to experiment with DIY everything, but with IT at ~5% of US GDP, I would consider this an edge case…
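As a rough sketch of the inequality above, with entirely hypothetical numbers (none of these figures are from the article):

```python
# Hypothetical back-of-envelope: does DIY-with-AI beat a vendor CRM license?

def diy_annual_cost(token_cost, swe_count, swe_salary):
    """Annual cost of building and maintaining software in-house:
    token spend plus the fully loaded cost of the engineers."""
    return token_cost + swe_count * swe_salary

def should_diy(token_cost, swe_count, swe_salary, crm_license):
    """The naive inequality: tokens + engineers < license fee."""
    return diy_annual_cost(token_cost, swe_count, swe_salary) < crm_license

# Even with token costs down 100x, the two engineers dominate the equation.
tokens = 50_000        # assumed annual spend on code-generation tokens
engineers = 2
salary = 200_000       # assumed fully loaded cost per SWE
license_fee = 300_000  # assumed annual CRM license for a mid-size firm

print(diy_annual_cost(tokens, engineers, salary))           # 450000
print(should_diy(tokens, engineers, salary, license_fee))   # False
```

Under these assumed numbers the labor term swamps the token term, which is the author's point: the inequality only flips in markets where engineering labor is very cheap.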
…I think the barrier to agentic success today is primarily that companies simply don’t know how to implement the tools available. This is an area where AI Labs will find collaboration with traditional software businesses to be in their best interest.
This is a long way of saying AI Labs will be selective about which first-party applications they themselves will go build. But they have a distinct advantage in that they know the billions of questions being prompted each day. Personal health, personal finance, coding, improving writing skills/education, etc. are at the top of that list. And if I were to bet, the focus of first-party apps will be in these areas…
…Yes, agents will be transformational, but I’d bet a good portion of the agents will come from the boring old companies you already know today.
Oh wait, there’s more to a business process than code
The reality of a regulated industry is that the value proposition is the sheer volume of dirty work that needs to be done in the background to present a customer with something simple. While it may be true that a payment portal can be generated in hours vs. months now, the moat of a payment company is obtaining bank licenses and putting in place an AML/KYC program with adequate controls for SARs and fraud detection (just ask CZ at Binance). The same can be said of healthcare, telcos, and a variety of industries. Not only is there no value in DIY, the risk of doing so far outweighs the reward…
…Several things can be true at once:
- Software companies need to be able to adapt, and some will do it exceptionally well while others won’t
- New companies will be created
- Pricing may be compressed
- Most software companies are too bloated
At the end of the day, what the capital market is doing is applying a higher discount rate to the next 10 years of previously forecast cash flows and to the terminal value after those 10 years.
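That repricing can be sketched with a toy discounted cash flow model (all numbers below are hypothetical, not from the article):

```python
def present_value(cash_flows, terminal_value, rate):
    """Discount a series of annual cash flows plus a terminal value
    received at the end of the series, at the given discount rate."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    pv += terminal_value / (1 + rate) ** len(cash_flows)
    return pv

# Hypothetical: $100/year for 10 years, then a $1,000 terminal value.
flows = [100] * 10
terminal = 1000

low_rate = present_value(flows, terminal, 0.08)
high_rate = present_value(flows, terminal, 0.12)
print(round(low_rate))   # 1134
print(round(high_rate))  # 887
```

The same forecast cash flows are worth roughly a fifth less when the market demands four more points of return, which is the mechanism behind the multiple compression the author describes.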
4. Blue Owl Fouls the Nest for AI Financing – Ken Brown
Private market lender Blue Owl is living through the downturn part. The struggles of the firm, which has been a big funder of the AI build-out, could affect the flow of capital into data center developers and cloud providers that need to raise cash…
…Last year, it made at least $5.6 billion of equity investments into data centers and raised $64 billion in debt for those projects, according to internal figures…
…The firm’s effort to manage rising redemptions in one of its smaller funds backfired and appears to have tainted the whole firm. Private lenders live and die on their access to capital and deal flow, both of which are at risk of drying up for Blue Owl.
The firm’s troubles are significant because it sits at the nexus of two important funding sources for the AI build-out—private capital and individual investors. If worries about Blue Owl spread, some projects will be funded at a higher cost—or might not get funded at all…
…Last year, a $1.6 billion private fund that it runs for small investors was facing redemption requests. The firm decided to address the issue by merging the fund with a $16.5 billion publicly traded fund it also runs.
The problem was, the bigger fund was trading at a 20% discount to the value Blue Owl was placing on its assets. The smaller fund, because it wasn’t publicly traded, was priced at the value of its assets. That meant investors in the smaller fund would see the value of their investments fall by 20% when the deal got done. That didn’t make them happy…
…Blue Owl called off the merger, but the damage was done. The deal drew attention to the perennial problem of valuing private assets…
…Fast-forward to last week, when Blue Owl came up with another flawed solution to its problems. It would sell $1.4 billion of assets to three big institutional investors and to an insurance company that it has a deep financial relationship with. That money would fund investor redemptions.
One problem is that when a fund with illiquid holdings sells assets, investors assume it is selling the highest-quality and most liquid ones, meaning what’s left will be harder to sell. That makes further redemptions tougher and gives investors a signal to get out…
…Another issue: Blue Owl selling assets into the insurer, Kuvare Holdings, could indicate that there were no other buyers and that it stuck Kuvare with bad assets…
…That became clear on Friday, when Business Insider reported that Blue Owl had trouble raising funds for a $4 billion data center in Pennsylvania.
The project is relatively speculative as these things go, so there could be other reasons why Blue Owl couldn’t raise the cash. The firm said it has considered outside funding and ultimately didn’t need it.
5. History Rhymes: Large Language Models Off to a Bad Start? – Michael Burry
While mining old newspapers on a quiet Saturday – a hobby of mine – I came upon a story from June 19, 1880, that I found relevant to our modern anxieties about AI.
It is the story of Melville Ballard, who, as a child without language, spied with his eyes a tree stump and asked himself if the first man rose out of it.
This 144-year-old case study – presented at the Smithsonian Institution no less – provides a potentially devastating critique of today’s Large Language Models and the spending behind them. With a simple human story, it boldly announced that complex thought exists in the silence before words…
…There are actually two stories of interest in that old newspaper. Let’s start with the one in the middle. This is Page 3 of this edition of the New York Times, and I see a story called Thought without Language…
…The story concerns one Professor Samuel Porter, of the National Deaf-Mute College at Kendall Green, who presented a paper at the Smithsonian Institution. The paper was titled “Is There Thought Without Language? Case of a Deaf Mute.”
At the first mention of deaf-mutes and children having no form of mental action that distinguishes them from brutes, well, understanding has changed a lot since then, and I was ready to dismiss the piece.
The case study is of a teacher at the Columbia Institute for the Instruction of the Deaf and Dumb. This particular teacher, Melville Ballard, is also a deaf mute and a graduate of the National Deaf Mute College.
Mr. Ballard says that in his infancy he communicated with his parents and brothers by natural signs or pantomime. His father, believing that observation would help to develop his faculties, frequently took him riding.
He continues that it was during a ride two or three years before he was initiated into the rudiments of written language that he began to ask himself the question, “How came the world into being?” and his curiosity was awakened as to what was the origin of human life, its first appearance, the cause of the existence of earth, sun, moon, and stars. At one time, seeing a large stump, he asked himself the question, “Is it possible that the first man that ever came into the world rose out of that stump? But that stump is only a remnant of a once magnificent tree; and how came that tree? Why, it came only by beginning to grow out of the ground, just like these little trees now coming up;” and he dismissed from his mind as absurd the connection between the origin of man and a decaying old stump…

…One of the presentation’s attendees notes, significantly, how Ballard’s eyes conveyed meaning perfectly, without misunderstanding, above all else.
One of the most interesting features of this meeting was Mr. Ballard, by signs, explaining how his mother informed him that he was going a long way to school, where he would read from a book, write and fold a letter, and send it to her, &c., and also, by pantomime, reciting how a hunter, after killing a squirrel, accidentally shot and killed himself. Mr. Ballard’s signs and gestures, with the expression of the eyes and face, conveyed his meaning perfectly to the audience, and, in the words of a member, the expression of the eye was language which could not be misunderstood.
Let us consider these two statements:
- “That by which we understand all things must be essentially superior to anything else that is understood by it.”
- “…in the words of a member, the expression of the eye was language which could not be misunderstood.”
In sum,
- Language without the Capacity for Reason fails at Understanding
- Only with Capacity for Reason does Language unlock Understanding.
- Understanding, fully realized, transcends Language.
By putting language first, LLMs build a primitive form of reason purely through logical inference, but this form of reason has been shown to be flawed and prone to hallucination due to limitations at the many ragged edges of knowledge.
The capacity for reason never existed. Therefore, language cannot scale through reason to understanding.
The professor suggests, in his work with deaf and mute people, he has discovered that a capacity for true reason must exist first, before language, so language can unlock understanding — the product of that capacity for true reason and language.
“The expression of the eye is the language which cannot be misunderstood.”
To wit, expression of the eye is what flawless understanding looks like, without the need for language.
Large Language Models, by putting language first, before the capacity for true reason, can never attain understanding…
…The original approach to AI was to generate a true capacity for reason first, but it was never realized, and the field pivoted to language first because it was easier.
This ‘bad start’ has led to a “parameter trap,” where brute-force language processing powered by zillions of power-hungry chips has become an incredibly ironic bottleneck.
As my conversation with Klarna’s Sebastian Siemiatkowski highlighted, the future lies in compression—leveraging ‘System 2’ reasoning-first to work off the redundancy of information and the relatively finite query sets produced by humans to drastically reduce compute needs.
This new line rejects singularity through language models talking to each other in an infinite mirror as a directionless waste of resources made impossible by lack of a basis in economic realities.
While frontiers like Google’s AlphaGeometry and Meta’s Coconut are finally moving toward this ‘reason-first’ architecture, they are essentially rediscovering what was presented at the Smithsonian 144 years ago: that language is the output of understanding, not the engine of reason…
…I mentioned there was another story of interest, and it is on the same page. It is more relevant to the first story than anyone in the 1880s could have guessed it would be in 2026.
This article is “San Francisco’s Wealth, A Population of Bonanza Speculators.”
This story was written June 1 in San Francisco, and only published in the New York Times on June 19th…
…California was pre-eminently the paradise of the man of small capital. To satisfy the craving for speculation, the peculiar open-board system was adopted, whereby the man who had $50 to invest, by purchasing a share therein, could acquire a small interest in a mine at a dollar a share, or two shares at 50 cents, or any number at varying prices.
A “boom” existed here in certain stocks, and it seemed to excite the same gambling fever in San Francisco; but the “boom” having been accompanied by speculative losses on the part of the people, the “boom” disappeared and stocks fell to their normal condition.
The story closing hits hard for reality today.
The people of San Francisco seem to have become educated to the idea that they must leap into fortune at once, and their big bonanza at Virginia City having failed, they do not appear to be willing to exert themselves to hunt for wealth in other directions, such as the development of manufacturing, trade, and agricultural interests. Almost the entire population is imbued with the passion for speculation, and if a new bonanza as big as the one in Nevada were to be discovered either there or near here, stocks would mount again to absurd figures, and San Francisco would again pass through the period of flush times to again suffer as she has during the past two years.
Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet (parent of Google) and Meta Platforms (parent of Facebook). Holdings are subject to change at any time.