What We’re Reading (Week Ending 22 February 2026) - 22 Feb 2026
Reading helps us learn about the world and it is a really important aspect of investing. The late Charlie Munger even went so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 22 February 2026):
1. Google Is Exploring Ways to Use Its Financial Might to Take On Nvidia – Raffaele Huang, Kate Clark, and Berber Jin
The company’s chips are gaining wider adoption for AI workloads, including with startups such as Anthropic, but Google is dealing with myriad challenges as it seeks to grow. The issues include bottlenecks at manufacturing partners and limited interest from cloud-computing rivals that are among the largest buyers of Nvidia processors, according to people familiar with the matter.
To expand its potential market, Google is increasing its financial support to a network of data-center partners that can provide computing power to a broader swath of customers, people familiar with its plans said.
The company is in talks to invest around $100 million in cloud-computing startup Fluidstack, part of a deal that values it at around $7.5 billion, people familiar with the discussions said. Fluidstack is one of a growing number of so-called “neocloud” companies that offer computing services to AI companies and others…
…Google has also held discussions about expanding its financial commitments to other data-center partners that could lead to additional TPU demand, people familiar with the talks said. Google has backstopped financing for projects involving Hut 8, Cipher Mining and TeraWulf, which are former crypto-mining companies that are now developing data centers. Cipher Mining declined to comment. Hut 8 and TeraWulf didn’t respond to requests for comment.
Some managers at Google’s cloud-computing division recently refreshed a longstanding internal debate about restructuring the TPU team into a stand-alone unit, people familiar with those discussions said. Such a plan could potentially allow Google to expand its opportunities to invest, including with outside capital.
One challenge for any potential stand-alone unit is that Google’s cloud business relies heavily on Nvidia chips, some of the people said…
…In 2018, Google started selling access to TPUs through its cloud services. The company has traditionally signed up TPU users through its cloud-computing unit, but it is also selling the TPU chips directly to external customers, according to industry research group SemiAnalysis…
…However, interest from major cloud-service providers appears to be tepid, partly because they consider Google a competitor, according to industry participants. Amazon Web Services, Amazon.com’s cloud unit, has also developed its own chips for AI.
2. 10 Years Building Vertical Software: My Perspective on the Selloff – Nicolas Bustamante
Vertical software is software built for a specific industry. Bloomberg for finance. LexisNexis for legal. Epic for healthcare. Procore for construction. Veeva for life sciences, etc.
These companies share a defining characteristic: they charge a lot and customers rarely leave. FactSet charges $15,000+ per user per year. Bloomberg Terminal costs $25,000 per seat. LexisNexis charges law firms thousands per month. And retention rates hover around 95%.
I would say that there are ten distinct moats. LLMs are attacking some of them while leaving others intact…
…Knowledge workers pay to not relearn a workflow they’ve spent a decade mastering. The interface IS a big part of the value prop…
…LLMs collapse all proprietary interfaces into one Chat…
…Vertical software encodes how an industry actually works. A legal research platform doesn’t just store case law. It encodes citational networks, Shepardize signals, headnote taxonomies, and the specific way a litigation associate builds a brief.
This business logic took years to build. It reflects thousands of conversations with domain experts. When I built Doctrine, the hardest part wasn’t the technology. It was understanding how lawyers actually work: how they research case law, how they draft documents, how they build a litigation strategy from intake to trial. Encoding that understanding into working software was a huge part of what made vertical software valuable—and defensible.
LLMs turn all of this into a markdown file…
…A massive portion of vertical software’s value proposition was making hard-to-access data easy to query. FactSet makes SEC filings searchable. LexisNexis makes case law searchable. These are genuine services. SEC filings are technically public, but try reading a 200-page 10-K in raw HTML. The structure is inconsistent across companies. The accounting terminology is dense. Extracting the actual numbers you need requires parsing nested tables, following footnote references, reconciling restated figures.
Before LLMs, accessing this public data required specialized software and significant engineering scaffolding. Companies like FactSet built thousands of parsers, one for each filing type, each company’s idiosyncratic formatting. Armies of engineers maintained these parsers as formats changed. The code to turn a raw SEC filing into queryable data was a genuine competitive advantage…
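The kind of bespoke extraction work the passage describes can be sketched in a few lines. This is a toy illustration, not FactSet’s actual code: the filing snippet, labels, and table layout are invented, and the point is that the extractor is hard-coded to one idiosyncratic format, which is why companies needed one parser per filing type.

```python
from html.parser import HTMLParser

# Hypothetical fragment of a filing table; every filer formats these
# differently, so each extractor ends up wedded to one layout.
FILING_SNIPPET = """
<table>
  <tr><td>Total revenues</td><td>$</td><td>1,234.5</td></tr>
  <tr><td>Net income</td><td>$</td><td>210.7</td></tr>
</table>
"""

class RowParser(HTMLParser):
    """Collect <tr> rows as lists of cell strings."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], [], None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._cell = ""

    def handle_data(self, data):
        if self._cell is not None:
            self._cell += data

    def handle_endtag(self, tag):
        if tag == "td":
            self._row.append(self._cell.strip())
            self._cell = None
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

def extract_line_items(html: str) -> dict:
    """Map line-item labels to floats -- valid only for this one layout."""
    p = RowParser()
    p.feed(html)
    # Assumes every row is exactly (label, "$", value); a different filer's
    # table breaks this immediately, hence "thousands of parsers".
    return {label: float(value.replace(",", ""))
            for label, _dollar, value in p.rows}

items = extract_line_items(FILING_SNIPPET)
print(items["Total revenues"])  # 1234.5
```

The fragility is the argument: the unpacking on the last line silently encodes one company’s formatting, and keeping an army of these extractors current was the moat.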
…LLMs make this trivial. Frontier models already know how to parse SEC filings from their training data. They understand the structure of a 10-K, where to find revenue recognition policies, how to reconcile GAAP and non-GAAP figures. You don’t need to build a parser. The model IS the parser. Feed it a 10-K and it can answer any question about it. Feed it the entire corpus of federal case law and it can find relevant precedent…
…At Doctrine, hiring was brutal. We didn’t just need good engineers. We needed engineers who could understand legal reasoning: how precedent works, how jurisdictions interact, what grounds for appeal to the supreme court look like. These people barely existed. So we built our own. Every week, we held internal lectures where lawyers taught engineers how the legal system actually worked. It took months before a new engineer was productive. The talent scarcity was a genuine barrier, not just for us, but for anyone trying to compete with us.
At Fintool, we don’t do any of that. Our domain experts (portfolio managers, analysts) write their methodology directly into markdown skill files. They don’t need to learn Python. They don’t need to understand APIs. They write in plain English what a good DCF analysis looks like, and the LLM executes it. The engineering is handled by the model. The domain expertise, which was always the abundant resource, can now become software directly without the engineering bottleneck.
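To make the “methodology as markdown” point concrete, a hypothetical sketch of what such a skill file might look like. The file name, steps, and default numbers are invented for illustration, not taken from Fintool:

```markdown
<!-- dcf_analysis.md -- hypothetical skill file, illustrating the pattern -->
# Skill: DCF analysis

When asked to value a company:

1. Pull the last five years of revenue and free cash flow from the filings.
2. Project free cash flow forward ten years, tapering growth toward the
   analyst's stated terminal rate (default 2.5% if none is given).
3. Discount at WACC; if WACC is not supplied, estimate it and flag the
   assumption in the output.
4. Always include a sensitivity table (growth ±1%, discount rate ±1%).
```

The domain expert writes the steps; the model supplies the execution that previously required engineers.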
LLMs make the engineering trivially accessible, which means the scarce resource (domain expertise) is suddenly abundant in its ability to become software. This is why the barrier to entry collapses so dramatically…
…Vertical software companies expand by bundling adjacent capabilities. Bloomberg started with market data, then added messaging, news, analytics, trading, and compliance. Each new module increases switching costs because customers now depend on the entire ecosystem, not just one product. S&P Global’s acquisition of IHS Markit for $44B was exactly this strategy. The bundle becomes the moat…
…LLM agents break the bundling moat because the agent IS the bundle…
…Some vertical software companies own or license data that doesn’t exist anywhere else. Bloomberg collects real-time pricing data from trading desks worldwide. S&P Global owns credit ratings and proprietary analytics. Dun & Bradstreet maintains business credit files on 500M+ entities. This data was collected over decades, often through exclusive relationships. You can’t just scrape it. You can’t recreate it.
If your data genuinely cannot be replicated, LLMs make it MORE valuable, not less…
…The test is simple: Can this data be obtained, licensed, or synthesized by someone else? If no, the moat holds. If yes, you’re in trouble…
…The irony is that LLMs accelerate the bifurcation. Companies with proprietary data win bigger. Companies without it lose everything…
…HIPAA doesn’t care about LLMs. FDA certification doesn’t get easier because GPT-5 exists. SOX compliance requirements don’t change because Anthropic released a new plugin…
…In fact, regulatory requirements may slow LLM adoption in exactly the verticals where compliance lock-in is strongest. A hospital can’t replace Epic with an LLM agent because the LLM agent isn’t HIPAA certified, doesn’t have the required audit trails, and hasn’t been validated by the FDA for clinical decision support…
…Some vertical software becomes more valuable as more industry participants use it. Bloomberg’s messaging function (IB chat) is the de facto communication layer for Wall Street. If every counterparty uses Bloomberg, you have to use Bloomberg. Not because of the data. Because of the network.
LLMs don’t break network effects. If anything, they might make communication networks more valuable. The information flowing through these networks becomes training data, context, signal…
…Some vertical software sits directly in the money flow. Payment processing for restaurants. Loan origination for banks. Claims processing for insurance companies. When you’re embedded in the transaction, switching means interrupting revenue. Nobody does that voluntarily.
If your software processes payments, originates loans, or settles trades, an LLM doesn’t disintermediate you. It might sit on top of you as a better interface, but the rails themselves remain essential…
…LLMs don’t directly threaten system of record status today. But agents are quietly building their own.
Here’s what’s happening: AI agents don’t just query existing systems. They read your SharePoint, your Outlook, your Slack. They collect data on the user. They write detailed memory files that persist across sessions. And when they perform key actions, they store that context. Over time, the agent accumulates a richer, more complete picture of a user’s work than any single system of record.
The agent’s memory becomes the new source of truth. Not because anyone planned it, but because the agent is the one layer that sees everything. Salesforce sees your CRM data. Outlook sees your emails. SharePoint sees your documents. The agent sees all three, and remembers…
…The real threat isn’t the LLM itself. It’s a pincer movement that vertical software incumbents didn’t see coming.
From below, hundreds of AI-native startups are entering every vertical. When building a credible financial data product required 200 engineers and $50M in data licensing, markets naturally consolidated to 3-4 players. When it requires 10 engineers and frontier model APIs, the market fragments violently. Competition goes from 3 to 300…
…From above, horizontal platforms are going deep into vertical territory for the first time. Microsoft Copilot inside Excel now does AI-powered DCF modeling and financial statement parsing. Copilot inside Word does contract review and case law research. The horizontal tool becomes vertical through AI, not through engineering…
…For any vertical software company, ask three questions:
1. Is the data proprietary? If yes, the moat holds. If no, the accessibility layer is collapsing.
2. Is there regulatory lock-in? If yes, LLMs don’t change the switching cost equation. If no, switching costs are primarily interface-driven and dissolving.
3. Is the software embedded in the transaction? If yes, LLMs sit on top of you, not instead of you. If no, you’re replaceable.
Zero “yes” answers: high risk. One: medium risk. Two or three: you’re probably fine.
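The author’s three-question scorecard reduces to a simple function. A minimal sketch of that rubric (the function and parameter names are our own):

```python
def moat_risk(proprietary_data: bool, regulatory_lockin: bool,
              embedded_in_transaction: bool) -> str:
    """Score a vertical software company on the article's three questions:
    count the yeses; 0 = high risk, 1 = medium risk, 2-3 = probably fine."""
    yeses = sum([proprietary_data, regulatory_lockin, embedded_in_transaction])
    if yeses == 0:
        return "high risk"
    if yeses == 1:
        return "medium risk"
    return "probably fine"

# e.g. a vendor with unique data but no regulatory or transaction moat:
print(moat_risk(True, False, False))  # medium risk
```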
3. Rebuttal to Nicolas – Unemployed Capital Allocator
I used to work for a relatively large long-only shop.
We switched from FactSet to Bloomberg + CapIQ.
We spent approximately 0 seconds discussing the UI change…
…Where does learned UI really matter? Tools with tons of degrees of freedom, where actions per minute actually matter. Professional workflow tools. Modelling software. Video editing software. Ones where knowing the shortcut is a decent part of the job.
A text box isn’t replacing this.
The idea is quite alluring – to those that don’t know the UI. Look! You can just tell it to do something and … it does it!
Until you need to do it multiple times. Then you start to go – man, I wish there was a quick way for me to send this prompt, to do this exact thing I want it to do. Oh and remember all the info I’m supposed to provide so that I get back exactly what I want. Maybe I can map it to a button and a keyboard shortcut…
Oh wait – that’s UI.
Text is amazing because it’s universal. Text is also absolutely horrible because it has infinite degrees of freedom, and introduces another level of abstraction. This is not what you want when you need to do a lot of specific things, quickly.
Oh and btw – these ‘legacy providers’ with pesky, hard to learn UI and custom codes? They can very easily tack on a text box to help new users – or power users that are doing a new workflow. While providing the flexibility of getting shit done when you need to…
…There’s zero chance that a complex web of markdown files is going to replace business logic entirely.
The reason is quite simple. You do not want to introduce a layer of unpredictability and degrees of freedom into your core business logic. This is the stuff of nightmares even at simple levels. When you introduce complexity and interdependency, it’s a straight line to system failure and bankruptcy…
…I am not sure why an agent would choose one vendor for alerts functionality and another for watchlists and a third for news – or how it would even go about doing this – or why this would save money. Maybe these will all be new providers? Maybe the model will just vibe code point solutions as needed? Maybe there will be perfect interoperability between all the modules? Or maybe the LLM will learn to translate them all perfectly? I don’t know…
…SoRs exist as the core, singular database of truth that the whole org agrees is the truth.
Why are we splitting this across thousands of markdown files???? With no way to audit, reconcile, track … basically all the things we need a SoR to do????
4. The Golden Age of Software – Unemployed Capital Allocator
There’s a classic CS exercise: write instructions for making a PB&J sandwich, then watch someone follow them literally. “Put peanut butter on the bread” — and they place the sealed jar on top of the loaf. The lesson: every instruction you write is full of assumptions the other person doesn’t share.
This is what’s happening every time you prompt an LLM. You say “build me a user dashboard” and the model fills in hundreds of implicit assumptions about the world that you never specified. And here’s the thing: it’s really good at this. Good enough that the code runs, the demo looks great, and you feel like a genius. But those decisions are educated guesses. The model built you a PB&J. It doesn’t know that you’re allergic to peanuts.
When you’re vibe coding a demo or a small CRUD app, none of this matters. You’re on the happy path, everything works, nobody cares about code quality. It’s beautiful. But enterprise software in the real world is about every path but the happy one — a world where failure on one of those paths means losses that dwarf annual costs…
…So what happens when the market gets carpet-bombed with new products and DIY builds — in a market where customers ask “who else uses this?” as a standard question?
Decision fatigue. Procurement asking, “Who even are these guys?”
In a world where production becomes free, the existing distribution relationship becomes the chokehold. And this is what every incumbent has. Yes — this is the tired old distribution vs. product debate. But I’d argue the current moment makes it more true than it’s ever been, precisely because the supply explosion makes trust, brand, and existing relationships much more valuable…
…While existing relationships hold the line, incumbents also get to play offence.
Your development team now has a new source of leverage. Properly harnessed, everything from research to product creation to debugging and maintenance gets faster. “Where is this logic?” stops being a week-long archaeology expedition. You simply do more with the same team.
In addition, the value ceiling of software today is dramatically higher than it was two years ago. Stuff that was “too expensive,” “too custom,” or “not worth the engineering time” suddenly becomes shippable. LLMs and VLMs have unlocked capabilities that were science projects two years ago…
…What about agents taking over corporate workflows and becoming a key user of software products? Doesn’t that leave a lot of products open to disintermediation?
I have three pushbacks.
First — a lot of workflow shifting to agents is not the same as all workflow shifting to agents. The gap between those two things is enormous, and the bear case tends to hand-wave right past it.
Second — agentic workflow is still a pipeline. And when you have a working production pipeline, you don’t rip out a key component to save a couple thousand bucks. But this isn’t just an inertia argument — it’s a structural one. The agent replacing that component needs to match the accumulated production knowledge baked into the existing solution: every edge case, every integration quirk, every failure mode discovered over years of real-world use. That’s not a matter of writing code. It’s a matter of replicating hard-won context that doesn’t exist in any training set. The idea that agents will vibe code an alternative for a critical piece of a high-speed production system isn’t just unlikely because of switching costs — it’s unlikely because the agent literally doesn’t know what it doesn’t know.
Third — non-humans using software is not a new thing. There’s a whole class of software that is mostly consumed by other software, and these still make amazing businesses. The identity of the user changing from human to agent doesn’t inherently destroy the value of the product.
5. How will OpenAI compete? – Ben Evans
“Jakub and Mark set the research direction for the long run. Then after months of work, something incredible emerges and I get a researcher pinging me saying: “I have something pretty cool. How are you going to use it in chat? How are you going to use it for our enterprise products?”
– Fidji Simo, head of Product at OpenAI, 2026
“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it”
– Steve Jobs, 1997
It seems to me that OpenAI has four fundamental strategic questions.
First, the business as we see it today doesn’t have a strong, clear competitive lead. It doesn’t have a unique technology or product. The models have a very large user base, but very narrow engagement and stickiness, and no network effect or any other winner-takes-all effect so far that provides a clear path to turning that user base into something broader and durable. Nor does OpenAI have consumer products on top of the models themselves that have product-market fit.
Second, the experience, product, value capture and strategic leverage in AI will all change an enormous amount in the next couple of years as the market develops. Big aggressive incumbents and thousands of entrepreneurs are trying to create new features, experiences and business models, and in the process try to turn foundation models themselves into commodity infrastructure sold at marginal cost. Having kicked off the LLM boom, OpenAI now has to invent a whole other set of new things as well, or at least fend off, co-opt and absorb the thousands of other people who are trying to do that.
Third, while much of this applies to everyone else in the field as well, OpenAI, like Anthropic, has to ‘cross the chasm’ across the ‘messy middle’ (insert your favourite startup book title here) without existing products that can act as distribution and make all of this a feature, and to compete in one of the most capital-intensive industries in history without cashflows from existing businesses to lean on. Of course, companies that do have all of that need to be able to disrupt themselves, but we’re well past the point that people said Google couldn’t do AI.
The fourth problem is expressed in the quotes I used above…
…There are something like half a dozen organisations that are currently shipping competitive frontier models, all with pretty-much equivalent capabilities. Every few weeks they leapfrog each other…
…There is no equivalent of the network effects seen at everything from Windows to Google Search to iOS to Instagram, where market share was self-reinforcing and no amount of money and effort was enough for someone else to break in or catch up.
This could change if there was a breakthrough that enabled a network effect, most obviously continuous learning, but we can’t plan for that happening…
…The one place where OpenAI does have a clear lead today is in the user base: it has 8-900m users. The trouble is, these are only ‘weekly active’ users: the vast majority even of people who already know what this is and know how to use it have not made it a daily habit. Only 5% of ChatGPT users are paying, and even US teens are much more likely to use this a few times a week or less than they are to use it multiple times a day. The data that OpenAI released in its ‘2025 wrapped’ promotion tells us that 80% of users sent less than 1,000 ‘messages’ in 2025. We don’t know how that changed in the year (it probably grew) but at face value that’s an average of less than three prompts per day, and many fewer individual chats. Usage is a mile wide but an inch deep…
…OpenAI’s ad project is partly just about covering the cost of serving the 90% or more of users who don’t pay (and capturing an early lead with advertisers and early learning in how this might work), but more strategically, it’s also about making it possible to give those users the latest and most powerful (i.e. expensive) models, in the hope that this will deepen their engagement. Fidji Simo says here that “diffusion and scale is the most important thing.” That might work (though it also might drive them to pay, or drive them to Gemini). But it’s not self-evident that if someone can’t think of anything to do with ChatGPT today or this week, that will change if you give them a better model. It might, but it’s at least equally likely that they’re stuck on the blank screen problem, or that the chatbot itself just isn’t the right product and experience for their use-cases no matter how good the model is.
In the meantime, when you have an undifferentiated product, early leads in adoption tend not to be durable, and competition tends to shift to brand and distribution. We can see this today in the rapid market share gains for Gemini and Meta AI: the products look much the same to the typical user (though people in tech wrote off Llama 4 as a fiasco, Meta’s numbers seem to be good), and Google and Meta have distribution to leverage. Conversely, Anthropic’s Claude models are regularly at the top of the benchmarks but it has no consumer strategy or product (Claude Cowork asks you to install Git!) and close to zero consumer awareness…
…So: you don’t know how you can make your core technology better than anyone else’s. You have a big user base but one that has limited engagement and seems really fragile. The key incumbents have more or less matched your technology and are leveraging their product and distribution advantages to come after the market. And, it looks like a lot of the value and leverage will come from new experiences that haven’t been invented yet, and you can’t invent all of those yourself. What do you do?
For a lot of last year, it felt like OpenAI’s answer was “everything, all at once, yesterday”. An app platform! No, another app platform! A browser! A social video app! Jony Ive! Medical research! Advertising! More stuff I’ve forgotten! And, of course, trillions of dollars of capex announcements, or at least capex aspirations…
…As we all know, OpenAI has been running around trying to join the club, claiming a few months ago to have $1.4tr and 30 gigawatts of compute commitment for the future (with no timeline), while it reported 1.9 gigawatts in use at the end of 2025…
…But, again, does that get you anything more than a seat at that table? TSMC isn’t just an oligopolist – it has a de facto monopoly on cutting edge chips – but that gives it little to no leverage or value-capture further up the stack. People built Windows apps, web services and iPhone apps – they don’t build TSMC apps or Intel apps.
Developers had to build for Windows because it had almost all the users, and users had to buy Windows PCs because it had almost all the developers (a network effect!). But if you invent a brilliant new app or product or service using generative AI, or add it as a feature to an existing product, you use the APIs to call a foundation model running in the cloud and the users don’t know or care what model you used. No-one using Snap cares if it runs on AWS or GCP. When you buy an enterprise SaaS product you don’t care if it uses AWS or Azure. And if I do a Google Search and the first match is a product that’s running on Google Cloud, I would never know…
…As I’ve written this essay, I’ve returned again and again to terms like platform, ecosystem, leverage and network effect. These terms get used a lot in tech, but they have pretty vague meanings. Google Cloud, Apple’s App Store, Amazon Marketplace, and even TikTok are all ‘platforms’ but they’re all very different.
Maybe the word I’m really looking for is power. When I was at university, a long time ago now, my medieval history professor, Roger Lovatt, told me that power is the ability to make people do something that they don’t want to do, and that’s really the question here. Does OpenAI have the ability to get consumers, developers and enterprises to use its systems more than anybody else, regardless of what the system itself actually does?…
…Foundation models are certainly multipliers: massive amounts of new stuff will be built with them. But do you have a reason why everyone has to use your thing, even though your competitors have built the same thing? And are there reasons why your thing will always be better than the competition no matter how much money and effort they throw at it? That’s how the entire consumer tech industry has worked for all of our lives. If not, then the only thing you have is execution, every single day. Executing better than everyone else is certainly an aspiration, and some companies have managed it over extended periods and even persuaded themselves that they’ve institutionalised this, but it’s not a strategy.
Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet (parent of Google and Google Cloud), Amazon (parent of AWS), Apple, Meta Platforms, Microsoft (parent of Azure), Salesforce, and TSMC. Holdings are subject to change at any time.