What We’re Reading (Week Ending 15 February 2026) - 15 Feb 2026
Reading helps us learn about the world and it is a really important aspect of investing. The late Charlie Munger even went so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 15 February 2026):
1. Before the market declared SaaS dead, it should have tested Anthropic’s new tools first. We did – Jim Wagner
Lawyers are not early adopters by temperament, and they don’t grade on a curve. A tool that reviews a contract and misses a material protection doesn’t get classified as “promising but incomplete.” It risks being shelved. Permanently. The standard is binary: either the tool is reliable enough that I can build a workflow around it, or it isn’t. There is no middle ground where a legal team says “it caught seven out of ten critical issues, so let’s use it for now.”
This is especially true in regulated environments — clinical trials, financial services, healthcare — where a missed clause isn’t an aesthetic problem. It’s a liability exposure, a regulatory finding, or a damaged institutional relationship. The question isn’t whether AI can review contracts. It can. The question is whether it can do so at the threshold required for a professional to rely on it…
…A clinical trial agreement is a different animal. It’s longer, more technically complex, and touches regulatory frameworks — HIPAA, FDA reporting obligations, IRB oversight, 21 CFR Part 54 financial disclosure — that require genuine domain expertise. The provisions interact with each other in ways that matter: a change to monitoring visit procedures can impact confidentiality obligations; a publication review period needs to account for patent deferral timelines; a subject injury provision needs to include a safe harbor for protocol deviations made to protect patient safety.
Once again, we gave Claude the identical playbook TCN uses — one specifically structured for AI consumption, with clear logic and well-defined positions — and ran both systems against the same clinical trial agreement.
The gap didn’t narrow. It widened.
TCN made 101 insertions of required protective language and 62 targeted deletions — 163 substantive changes in total. Claude made 7 insertions and 4 deletions. Tellingly, Claude’s changes were largely find-and-replace-level revisions: substituting “immediately” with “promptly,” replacing “sole” with “reasonable,” increasing an insurance figure, and adding pandemic language to a force majeure clause. These are real edits. They are also the edits a first-year associate would make in the first twenty minutes of review…
…These results are not a reflection of Claude’s quality as a language model. Claude is an extraordinarily capable general-purpose AI, and we use it daily in our own work. The gap is a reflection of architecture and ambition.
Claude’s legal plugin reads an entire agreement and an entire playbook, then attempts to produce all of its analysis and redlines in a single pass. This is analogous to asking a lawyer to read a thirty-page contract and a fifty-topic playbook simultaneously, then dictate every markup from memory in one sitting. Issues inevitably get lost — not because the lawyer lacks ability, but because the task exceeds what any single-pass process can reliably accomplish.
A purpose-built system works differently. Each playbook position is matched against the agreement independently and analyzed in a dedicated step with only the relevant clause text and guidance in front of it. Nothing competes for attention. Every position in the playbook is programmatically guaranteed to be evaluated. The system doesn’t need to “remember” to check a provision — it cannot skip one.
This also explains why the gap widened on the longer, more complex clinical trial agreement. The more provisions, the more playbook positions, and the more regulatory context a single-pass system must hold in working memory simultaneously, the more it drops. A purpose-built pipeline scales linearly. A single-pass approach degrades…
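The architectural difference Wagner describes can be made concrete with a short sketch. This is a hypothetical illustration only, not TCN’s or Anthropic’s actual implementation: the `Position` structure, `match_clause`, and `review_position` are all made-up names, and `review_position` stands in for what would be one focused model call per playbook position.

```python
from dataclasses import dataclass

@dataclass
class Position:
    topic: str      # e.g. "subject_injury"
    guidance: str   # the playbook's required stance on that topic

def match_clause(agreement: dict, topic: str) -> str:
    # Hypothetical matcher: look up the clause text relevant to one topic.
    return agreement.get(topic, "")

def review_position(clause: str, guidance: str) -> list[str]:
    # Hypothetical single-position review. In a real pipeline this would be
    # one focused model call with ONLY this clause and this guidance in
    # context, so nothing else competes for attention.
    return [f"revise per guidance: {guidance}"] if clause else []

def review_agreement(agreement: dict, playbook: list[Position]) -> dict:
    # The loop itself is the guarantee: every position gets its own
    # dedicated step, so the system cannot "forget" to check one.
    return {p.topic: review_position(match_clause(agreement, p.topic),
                                     p.guidance)
            for p in playbook}

playbook = [Position("confidentiality", "carve out regulatory disclosures"),
            Position("subject_injury", "add protocol-deviation safe harbor")]
agreement = {"confidentiality": "All information shall remain confidential."}

edits = review_agreement(agreement, playbook)
```

The contrast with a single-pass approach falls out of the structure: adding more playbook positions just lengthens the loop, whereas a single-pass system must hold every additional position in one context window at once.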
…The stock market’s reaction treated Anthropic’s announcement as if a general-purpose model with a vertical plugin is architecturally equivalent to purpose-built vertical software. It isn’t — and the evidence is now available for anyone willing to run an actual test.
But there’s a more fundamental point. Nothing Anthropic announced addresses multi-document congruence, multi-party collaboration, or institutional workflow orchestration. A Claude user reviewing a clinical trial agreement operates in a single chat window with a single document. The protocol, consent form, budget, and coverage analysis — all of which must be internally consistent with the contract — exist nowhere in that workflow. Imagine five users with five separate skills in five disconnected chat windows, each trying to keep their work coordinated, cross-checked, and accurate. There is no shared data model. No audit trail. No collaboration layer. No mechanism to ensure that a change to the protocol ripples correctly through the budget, the consent form, and the contract.
The natural counterargument is that agentic AI frameworks — autonomous agents that chain tasks, manage state, and coordinate across documents — will close this gap. They will have an impact; we use them ourselves and take that seriously. But agentic frameworks don’t arrive pre-built with plug-and-play domain solutions. They are tools, not answers. An agent orchestrating clinical trial study startup still needs a deep contextual understanding of the subject matter, the stakeholder requirements, and the interconnectedness of every document and every party involved. It needs to know that a change to a protocol’s schedule of events must ripple through the budget, the consent form, and the coverage analysis — and it needs to know how. That’s not something you install. It’s something you build — substantial work that relies on deep expertise in both the subject matter and AI implementation, refined across thousands of agreements. The same architectural principles that separate a plugin from a platform will separate a generic agent from a team of purpose-built ones.
2. As AI enters the operating room, reports arise of botched surgeries and misidentified body parts – Jaimi Dowdell, Steve Stecklow, Chad Terhune and Rachael Levy
In 2021, a unit of healthcare giant Johnson & Johnson announced “a leap forward”: It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses. Acclarent said the software for its TruDi Navigation System would now use a machine-learning algorithm to assist ear, nose and throat specialists in surgeries.
The device had already been on the market for about three years. Up to that point, the U.S. Food and Drug Administration had received unconfirmed reports of seven instances in which the device malfunctioned and another report of a patient injury. Since AI was added to the device, the FDA has received unconfirmed reports of at least 100 malfunctions and adverse events.
At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations…
…In May 2023, Dean was using TruDi in another sinuplasty operation when patient Donna Fernihough’s carotid artery allegedly “blew.” Blood “was spraying all over” – even landing on an Acclarent representative who was observing the surgery, according to a lawsuit Fernihough filed in U.S. District Court in Fort Worth against Acclarent and several manufacturers. One of Fernihough’s carotid arteries was damaged. She suffered a stroke the day of the surgery, according to her suit.
Acclarent “knew or should have known that the purported artificial intelligence caused or exacerbated the tendency of the integrated navigation system product to be inconsistent, inaccurate, and unreliable,” the suit alleges.
Acclarent has denied the allegations in both suits, which are ongoing, according to court filings. The company says it did not design or manufacture the TruDi system but only distributed it, according to court filings. Acclarent’s owner, Integra LifeSciences, told Reuters there’s no evidence of a link between the AI technology and any alleged injuries…
…Reuters found that at least 1,401 of the reports filed to the FDA between 2021 and October 2025 concern medical devices that are on an FDA list of 1,357 products that use AI. The agency says the list isn’t comprehensive. Of those reports, at least 115 mention problems with software, algorithms or programming.
One FDA report in June 2025 alleged that AI software used for prenatal ultrasounds was misidentifying fetal body parts. Called Sonio Detect, it uses machine learning techniques to help analyze fetal images.
“Sonio detect software ai algorithm is faulty and wrongly labels fetal structures and associates them with the wrong body parts,” stated the report, which does not say that any patient was harmed. Sonio Detect is owned by Samsung Medison, a unit of Samsung Electronics. Samsung Medison said the FDA report about Sonio Detect “does not indicate any safety issue, nor has the FDA requested any action from Sonio.”…
…The FDA requires clinical trials for new drugs, but medical devices face different screening. Most AI-enabled devices coming to market aren’t required to be tested on patients, according to FDA rules. Instead, makers satisfy FDA rules by citing previously authorized devices that had no AI-related capabilities, says Dr. Alexander Everhart, an instructor at Washington University’s medical school in St. Louis and an expert on medical device regulation.
Positioning new devices as updates on existing ones is a long-established practice, but Everhart says AI brings new uncertainty to the status quo.
“I think the FDA’s traditional approach to regulating medical devices is not up to the task of ensuring AI-enabled technologies are safe and effective,” Everhart told Reuters. “We’re relying on manufacturers to do a good job at putting products out. I don’t know that what’s in place at the FDA represents meaningful guardrails.”
3. Clouded Judgement 2.13.26 – Build vs Buy – Jamin Ball
The cost of creating software is going to zero. The risk isn’t that someone will vibe code an internal CRM replacement…The risk is that 10 companies could now create a new CRM, from the ground up, built with a new end user in mind (agents vs people), with a business model for the AI world (consumption / usage vs seats), and now all of a sudden the market is flooded with offerings and the legacy space commoditizes.
This, to me, is the real risk. Software broadly commoditizes, with a new crop of software / value emerging. A big constraint to the development of software is engineering resources. Before the cloud, a constraint was how quickly could you stand up racks of servers to support user growth. In the cloud era that was commoditized, and engineering resources became the constraining factor (how quickly could you develop software). With AI, that constraining resource (engineering velocity) is going away.
So what happens from here…The world is about to be flooded with software. Companies that can’t innovate and capture this next S-Curve of innovation will slowly fade into irrelevance. They will be valued as companies in a post-growth industry, and receive a post-growth valuation multiple (see ya revenue multiples…). For those who can, a new vector of growth lies ahead of them…
…If we bring this back to the “is software dead” conversation, many are pointing to the recent Q4 earnings reports (we’re in the middle of earnings season right now) as “evidence” that AI isn’t eating software. For the most part, earnings have been good! Retention figures don’t seem to show any sign of cracking. However, I found an awesome graphic floating around X this week (copied below). It showed an index of newspaper companies’ stock performance and earnings over time (starting in 2002). What you’ll see is that the voting machine of the market saw the disruption coming from the internet, and started to discount the newspaper stocks right away. From 2002 to 2009 those stocks basically went down in a straight line. However, if you look at earnings estimates for that same set of companies, they actually grew for about 5 straight years! During that time, the stocks continued to drop. It wasn’t until 2007 that the earnings really started to get disrupted. Earnings then fell off a cliff. All of this to say – don’t take too much comfort in the short-term quarterly results 🙂 Disruption generally takes a bit longer.
4. Earnings Drive Stocks – Matt Cerminaro
Below I’m showing you the net income share vs the market cap share of each sector within the S&P 500 since 2005…
…Each color represents a sector. Net income share is on the left and market cap share is on the right.
Let’s start on the left.
See how the Technology Sector’s net income share has grown over time? It’s the light blue shade at the bottom of the chart.
Now look at the chart on the right.
That same light blue shade rising over time is the market cap share of Tech growing concurrently with the net income share.
Energy, the orange shade, used to command a larger share of the S&P 500’s overall net income, but it has shrunk over time.
Its market cap share has done the same.
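The calculation behind each of the two charts is just a normalization: a sector’s net income (or market cap) divided by the index total. A minimal sketch, using made-up illustrative figures rather than actual S&P 500 data:

```python
# Hypothetical sector figures in $bn (illustrative only, not real data)
net_income = {"Tech": 500, "Energy": 80, "Financials": 300}
market_cap = {"Tech": 15000, "Energy": 1500, "Financials": 6000}

def shares(values: dict[str, float]) -> dict[str, float]:
    # Each sector's slice of the index total; slices sum to 1.
    total = sum(values.values())
    return {k: v / total for k, v in values.items()}

income_share = shares(net_income)   # left chart: share of total earnings
cap_share = shares(market_cap)      # right chart: share of total value

# Where a sector's cap share runs ahead of its income share, the market
# is awarding it a premium multiple relative to the index.
premium = {k: cap_share[k] / income_share[k] for k in net_income}
```

When the two shares move together over time, as the post argues for Tech, the market cap growth is being backed by earnings growth rather than pure multiple expansion.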
5. AI and the Economics of the Human Touch – Adam Ozimek
The player piano, or pianola, was invented by Edwin Votey in 1895. At first it was a stand-alone machine that would be pushed up against an existing piano, like the one shown below.
Within a few years, player pianos could be built into the pianos themselves. The machines “read” music that was encoded onto rolls of paper. The notes were represented as holes in the paper that directed pneumatic airflow, which then pushed down the levers that depressed the piano keys.
The only role for humans to play in the functioning of a player piano was to pump the pneumatic foot pedals to keep the piano playing. No need for a skilled human piano player.
And yet, despite the technology to fully automate the job having been invented more than a century ago, people still make a living playing the piano today.
The job is not just limited to piano players performing in ticketed concert events, which of course are quite common. Hotels, bars, and restaurants continue to hire live piano players to provide background music as if it was 1894, the year before the invention of the pianola, which itself is hardly ever used anymore.
Listeners simply prefer music from a piano player rather than a player piano…
…In 2007, a restaurant entrepreneur named Jack Baum was teaching an executive MBA program at Southern Methodist University. He challenged the class to come up with a way to help restaurant customers pay their bill faster than simply waiting for the server to bring the check. Three students arrived at such a compelling answer that the four of them turned it into a company called Ziosk.
Ziosk’s tabletop ordering system provides customers with a tablet that allows them to order, pay, play games, enter coupons, and much else. Thus was born the ability to automate away the job of waiter.
The tablets debuted at 125 Chili’s locations in 2013, and today they are in thousands of restaurants. Ordering devices like this are much more commonplace today, including QR codes that allow customers to order from their own smartphones.
On paper, the job of waiter has been fully automated for over a decade. And yet, today there remain 1.9 million waiters across the US. It’s true that this number has dipped recently, and is slightly below the historical peak. Under the pressure of automation, the BLS forecasts that it will further decline within the next decade… by 1 percent. Is that the worst that full automation can do to this job?…
…Consider first that even some restaurants that have implemented automation nevertheless have wait staff. At Olive Garden, you can order and pay from a provided tablet at any point, but you still have a waiter who greets you, offers to take your order if you don’t want to use the tablet, and checks in on you throughout the meal. If you wait long enough, they will even bring the check. That is a strong signal that the waiter is adding value above and beyond automation…
…If productivity surges from AI, the United States will become a far richer country per capita. It’s not clear whether this will translate into much faster income growth for the median worker. In recent decades, after all, median wage growth has lagged mean wage growth — likely reflecting the trend that overall productivity growth has exceeded the growth in productivity of the typical worker.
Median wage growth has been positive, so it is not true that the typical worker fails to benefit from faster productivity growth. But the benefit for the typical worker is not proportional to the economy-wide growth in productivity, raising the spectre that future productivity growth could be even less proportional.
The result would be rising income inequality — which can straightforwardly be offset with policies that redistribute income. Redistribution might be expensive, but the same AI-driven economic growth that generated the rising inequality would also create the fiscal space needed to offset it. In short, spreading income around is a political challenge, not a policy or economic challenge.
Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.