What We’re Reading (Week Ending 11 May 2025)

Reading helps us learn about the world and it is a really important aspect of investing. The late Charlie Munger even went so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 11 May 2025):

1. AGI is not a milestone – Sayash Kapoor and Arvind Narayanan

Many people have the intuition that AGI will have these properties. It will be so powerful and humanlike that it will be obvious when we’ve built it. And it will immediately bring massive benefits and risks — automation of a big swath of the economy, a great acceleration of innovation, including AI research itself, and potentially catastrophic consequences for humanity from uncontrollable superintelligence.

In this essay, we argue that AGI will be exactly the opposite — it is unobservable because there is no clear capability threshold that has particular significance; it will have no immediate impact on the world; and even a long-term transformation of the economy is uncertain…

…One argument for treating AGI as a milestone — and taking declarations of AGI seriously — is that AGI could lead to rapid economic impacts, both positive and negative, such as a world without scarcity, an end to the concept of money, or sudden mass joblessness.

But AI’s economic impact is only realized when it is adopted across the economy. Technical advances are necessary, but not sufficient, to realize this impact. For past general-purpose technologies, such as electricity, computing, and the internet, it took decades for the underlying technical advances to diffuse across society. The miracle of the Industrial Revolution wasn’t the high growth rate — annual growth rates averaged below 3% — but the sustained period of decades of growth.
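The compounding arithmetic behind this point is worth making explicit. A minimal sketch with illustrative rates (not figures from the essay): even sub-3% growth, sustained for a century, multiplies output many times over.

```python
def total_growth(annual_rate: float, years: int) -> float:
    """Cumulative growth multiple from compounding a constant annual rate."""
    return (1 + annual_rate) ** years

print(f"3% for 10 years:  {total_growth(0.03, 10):.1f}x")   # ~1.3x: barely noticeable
print(f"3% for 100 years: {total_growth(0.03, 100):.1f}x")  # ~19.2x: a transformed economy
```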

There are many bottlenecks to the diffusion of AI: developing useful products and applications, training the workforce to utilize these products, implementing organizational changes to enable AI use, and establishing laws and norms that facilitate AI adoption by companies. Like past general-purpose technologies, we expect the economic impacts of AI to be realized over decades, as this process of diffusion unfolds…

…The US and China are often described as being in an AI arms race, with each country racing to build AGI. It is hypothesized that the country to build it first would have a decisive strategic advantage — resulting in dominance in the world order for the foreseeable future.

This narrative doesn’t make sense because the knowledge required to create AI models, and model capabilities themselves, tend to proliferate quickly between countries. There are hundreds of thousands of AI technologists, and they work in the private sector rather than government labs, so it is not feasible to keep secrets at that scale.

Invention — in this case, AI model development — is overrated as a source of competitive advantage…

…While Chinese AI companies are at most 6-12 months behind leading US companies in terms of AI models and capabilities, China lags significantly behind the US in several key indicators that might enable diffusion: digitization, cloud computing adoption, and workforce training. All of these are required to enable the productive diffusion of AI advances across industries. This is the actual source of American competitive advantage.

Of course, this could change in the coming years. But if it does, it will result from policy changes to promote diffusion rather than the development of AGI…

…Even if it doesn’t have immediate economic impacts, could AGI unlock, say, 10% annual GDP growth that could add up to something big over a few decades?

Maybe. But it is far from clear why and how this will happen.

Historically, this kind of acceleration in growth has happened very few times — the industrial revolution had this effect, but not the internet, which barely had any impact on GDP. Note that even if you don’t think that GDP is the right thing to measure, a qualitative change in the GDP growth rate is a good proxy for whatever fundamental change in the economy you care about.

The problem is that accelerating growth requires eliminating bottlenecks to progress. That’s harder than most AI boosters assume. AI will likely have uneven effects across sectors, and long-term growth will be bottlenecked by the weakest sector…

…More broadly, progress depends not just on the technology but on having the right preconditions — complementary innovations as well as cultural, economic, and political factors. If all it took to create the industrial revolution was the invention of steam power, the Roman Empire would have done it.

Our current laws, norms, institutions, and politics evolved in a time of much less technological potential. They are already choking opportunities for straightforward types of growth, such as building more public infrastructure. To reap the economic benefits that broad cognitive automation can potentially bring, the degree of structural change that needs to happen is unfathomably greater…

…On the flip side, AGI could be a turning point for AI’s societal risks. Could it cause loss of control, massive societal harm, or even human extinction?

Discussions of AGI risks conflate power — the ability to modify the environment — with capability — the capacity to solve specified tasks correctly. Capability is an intrinsic property of an AI system, whereas power is a matter of how we design the environment in which AI systems operate. And humans have agency over this design. This distinction is often overlooked…

…We do expect AI capabilities to keep increasing. But regardless of capability level, we can choose to ensure that AI remains a tool and is not given power and autonomy to operate without human oversight. In the AI as Normal Technology essay, we address all the usual counterarguments to this, including arms races among companies, power seeking, superhuman persuasion, deceptive alignment, and more.

We argue in the paper that there will be strong business incentives against deploying AI without adequate oversight, and that these incentives can and should be buttressed by regulation when necessary. This has historically been the case in areas ranging from self-driving cars to AI assistants. We don’t expect this trend to suddenly flip once AI capabilities reach a presumed tipping point that we arbitrarily designate as AGI…

…Yet another reason to consider AGI a milestone is the view that shortly after we build AGI, AI systems could recursively self-improve — AGI could train future versions of models that become far more capable, leading to an “intelligence explosion.” Soon afterwards, we would get superintelligent AI (AI systems that far exceed human abilities on any conceivable task), leading to either utopia or dystopia, depending on how well superintelligent AI is “aligned” with human interests.

In the normal technology view, there are two big reasons to doubt this narrative. The first is that even if arbitrary speedups in AI methods are possible, we think that innovation and diffusion will happen at human speed…

…Second, the fact that AI would help conduct AI research does not imply that this process can be arbitrarily accelerated. AI is already used to automate a significant portion of AI research today. But there are many bottlenecks to progress in AI methods, such as the social nature of data collection and real-world interaction that might be required for achieving certain capabilities, computational and cost limits, or herding around popular or intuitive ideas while ignoring the ones that enable true breakthroughs.

We could be wrong about this, and recursive self-improvement could be possible, leading to unbounded speedups in progress in AI methods. And this might have some interesting implications, including some discontinuities in impact, even if widespread diffusion will be slower. For these reasons, it is important to have early warning systems for recursive self-improvement…

…OpenAI’s 2018 definition of AGI was “highly autonomous systems that outperform humans at most economically valuable work”. From our perspective — our interest being in the impacts of AI — this definition is potentially very useful. If AI outperformed [all] humans at most economically valuable work, it would be unquestionably impactful.

But let’s be clear — this is not a property of an AI system. It is a property of the state of the world. It has at least as much to do with the complementary innovations that we make and the extent to which we choose to integrate AI into our organizations and institutions. It would be absurd to try to test an AI system in isolation in the lab and ask whether it outperforms people at their jobs. It is a category error.

For example, whether AI can (autonomously) outperform a medical researcher depends in part on whether we collectively choose to allow AI systems to perform large-scale medical experiments on people. We shouldn’t and we won’t, which means that irrespective of the systems’ capabilities, they cannot perform the function of a medical researcher. This might be an extreme example, but similar bottlenecks arise in virtually every job.

2. The Lesson in Buffett’s Winning Apple Bet – Sarah Krouse

A Berkshire investment manager bought a small stake in the iPhone maker in 2016, nine years after its introduction. Around that time, Buffett asked another investment manager to find an S&P 500 stock that met three criteria.

Buffett wanted a company with a reasonably cheap price/earnings multiple of no more than 15, based on the next 12 months’ projected earnings, The Wall Street Journal previously reported. Berkshire managers had to be at least 90% sure that the stock would generate higher earnings over the next five years. And he wanted Berkshire to be at least 50% confident that the company would grow a minimum of 7% annually for at least five years.
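For concreteness, here is that three-part screen sketched as a function. The thresholds come from the excerpt above; the function name and the probability inputs are our own illustrative framing, not Berkshire's actual process.

```python
def passes_buffett_screen(forward_pe: float,
                          p_higher_earnings_5y: float,
                          p_growth_7pct_5y: float) -> bool:
    """The three criteria as reported: forward P/E <= 15, at least 90%
    confidence of higher earnings in five years, and at least 50%
    confidence of 7%+ annual growth for at least five years."""
    return (forward_pe <= 15
            and p_higher_earnings_5y >= 0.90
            and p_growth_7pct_5y >= 0.50)

# Apple in 2016 traded at roughly 14x forward earnings; the probability
# inputs here are illustrative, not Berkshire's actual estimates.
print(passes_buffett_screen(forward_pe=14,
                            p_higher_earnings_5y=0.95,
                            p_growth_7pct_5y=0.60))  # True
```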

The manager’s research pointed to Apple.

The stock was already a winner by then—and not a huge bargain. It traded for about 14 times its expected earnings, on the higher end of the range of what Buffett had been looking for. Some investors had sold after capturing gains.

And Buffett, a flip-phone user at the time, was hardly a techie. But he saw the hold the company had on its customers. Buffett’s grandchildren were iPhone devotees, and Apple’s customer retention rate was about 95%.

3. What happens when a nation built on growth runs short of babies? – Nina Chen

China’s plummeting birthrate can be traced to three interlocking factors that form a vicious cycle: the shrinking pool of childbearing-age women, collapsing marriage rates, and evaporating fertility intentions. These elements don’t merely add up – they multiply each other’s downward momentum, creating what demographers call a “triple demographic shock.”…
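A stylized sketch of why these factors multiply rather than add. The decomposition and all of the percentage declines below are illustrative assumptions, not figures from the article.

```python
# Stylized decomposition: births ≈ (women of childbearing age)
#                                × (share who marry)
#                                × (births per married woman)
women_remaining = 0.70      # cohort shrinks to 70% of its former size
marriage_remaining = 0.60   # marriage rate falls to 60% of its peak
fertility_remaining = 0.70  # fertility intentions fall to 70%

births_remaining = women_remaining * marriage_remaining * fertility_remaining
# Three partial declines compound into a collapse:
print(f"Births fall to {births_remaining:.0%} of baseline")  # ~29%
```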

… The number of women in their prime reproductive years (20-29) has undergone a staggering contraction, halving from 12.51 million in the 1990 birth cohort to just 6.33 million for those born in 2003. This dramatic shrinkage, a direct consequence of strict family planning policies after 1987, represents an irreversible demographic reality…

…China’s marriage rate has collapsed to a historic low of 4.3 marriages per 1,000 people in 2024—less than half its 2013 peak (9.9‰). This places China alongside Japan and South Korea (4.2-4.3‰) but significantly below the U.S. (5.1‰), reflecting broader East Asian demographic trends…

…The average age of first marriage for women has jumped from 24 in 2010 to 28.2 in 2023, with over 30% now marrying after 30—directly truncating peak fertility years (25-29)…

… In major cities, saving for a marital home down payment now consumes 15-20 years of family income, while betrothal gifts (bride prices) often reach 300-500% of annual household earnings—creating what amounts to a brutal financial gatekeeping system…

…China’s marriage collapse directly strangles fertility—pushing the total fertility rate (TFR) to a catastrophic 1.0, far below both OECD averages (1.5) and Japan (1.2). This crisis stems not from changing individual preferences but from structural contradictions between progressive education and regressive social systems.

Higher education expansion has reshaped demographics: female tertiary enrollment rates exploded from 3.4% in 1990 to 59.6% in 2022, with each additional year of education reducing desired fertility by 0.26 children. Paradoxically, within each educational cohort, women’s fertility intentions have actually increased since 2010, according to research by MetroData. The aggregate decline occurs because higher-education groups—who have fewer children—now dominate the population…
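That last point is a classic composition effect (a cousin of Simpson's paradox), and a toy example makes it concrete. Every number below is made up for illustration.

```python
# Fertility intentions can rise within every education group while the
# population average still falls, because the lower-fertility group grows.
# Format: group -> (share of women, desired children); illustrative only.
groups_2010 = {"no_tertiary": (0.90, 1.8), "tertiary": (0.10, 1.0)}
groups_2022 = {"no_tertiary": (0.40, 1.9), "tertiary": (0.60, 1.1)}  # up in both groups

def average_desired_fertility(groups):
    """Population-weighted average of desired children across groups."""
    return sum(share * kids for share, kids in groups.values())

print(f"{average_desired_fertility(groups_2010):.2f}")  # 1.72
print(f"{average_desired_fertility(groups_2022):.2f}")  # 1.42: lower overall
```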

…Groundbreaking research reveals the severe professional tradeoffs Chinese women face when starting families. According to the 2023 Report on Chinese Women’s Career Development, a rigorous 2021 study published in Population & Economics (a Peking University core journal) demonstrates that each child born to middle-income families reduces mothers’ employment probability by 6.6% for the first child and an additional 9.3% for the second—even after controlling for education, region, and household characteristics. Notably, children show no statistically significant impact on fathers’ employment prospects…

…When discussing the impacted industries, I’ve found that in most cases, the decline in newborn numbers is not the root cause of their struggles—rather, it serves as a catalyst, exposing and amplifying pre-existing structural weaknesses within these sectors…

…Maternity service pricing remains at levels set during the midwife era of the 1950s, yet hospitals must maintain modern, 24/7 medical teams. This “high-cost, low-return” operation previously relied on overwhelming patient volume to break even. However, with national newborn numbers dropping below 9 million in 2023, the fatal flaw was exposed. Data from a Shanghai specialist hospital shows obstetricians’ incomes have fallen 20-30%, with bonuses halved during low seasons…

… The case-mix index (CMI) and tier-4 surgery metrics in public hospital evaluations contradict maternity care’s core mission of “prevention-first, safety-focused” care. As one tertiary hospital administrator admitted, “Achieving 98% natural delivery rates comes at the cost of bottom-tier performance evaluations.”

4. Mark Zuckerberg – Meta’s AGI Plan – Dwarkesh Patel and Mark Zuckerberg

Mark Zuckerberg: I’m also excited about the Behemoth model, which is coming up. It’s going to be our first model that’s sort of at the frontier—more than 2 trillion parameters…

…Mark Zuckerberg: In general, the prediction that this would be the year open source generally overtakes closed source as the most used models out there, I think that’s generally on track to be true. One interesting surprise—positive in some ways, negative in others, but overall good—is that it’s not just Llama. There are a lot of good ones out there. I think that’s quite good. Then there’s the reasoning phenomenon, which you’re alluding to talking about o3, o4, and other models. There’s a specialization happening. If you want a model that’s the best at math problems, coding, or different things like those tasks, then reasoning models that consume more test-time or inference-time compute in order to provide more intelligence are a really compelling paradigm…

…Mark Zuckerberg: One of the things we’ve generally tried to do over the last year is anchor more of our models in our Meta AI product north star use cases. The issue with open source benchmarks, and any given thing like the LM Arena stuff, is that they’re often skewed toward a very specific set of use cases, which are often not actually what any normal person does in your product. The portfolio of things they’re trying to measure is often different from what people care about in any given product…

…Mark Zuckerberg: I think a lot of them are quite easily gameable. On the Arena you’ll see stuff like Sonnet 3.7, which is a great model, and it’s not near the top. It was relatively easy for our team to tune a version of Llama 4 Maverick that could be way at the top. But the version we released, the pure model, actually has no tuning for that at all, so it’s further down. So you just need to be careful with some of these benchmarks. We’re going to index primarily on the products…

…Mark Zuckerberg: There’s a space which, if I had to guess, I think will end up being the most used one: quick, very natural to interact with, natively multimodal, fitting throughout your day in the ways you want to interact with it…

…Mark Zuckerberg: If you fast-forward a few years, I think we’re just going to be talking to AI throughout the day about different things we’re wondering about. You’ll have your phone. You’ll talk to it while browsing your feed apps. It’ll give you context about different stuff. It’ll answer your questions. It’ll help you as you’re interacting with people in messaging apps. Eventually, I think we’ll walk through our daily lives and have glasses or other kinds of AI devices and just seamlessly interact with it all day long…

…Mark Zuckerberg: I would guess that sometime in the next 12 to 18 months, we’ll reach the point where most of the code that’s going toward these efforts is written by AI. And I don’t mean autocomplete. Today you have good autocomplete. You start writing something and it can complete a section of code. I’m talking more like: you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already…

…Mark Zuckerberg: Part of what I generally disagree with on the fast-takeoff view is that it takes time to build out physical infrastructure. If you want to build a gigawatt cluster of compute, that just takes time. NVIDIA needs time to stabilize their new generation of systems. Then you need to figure out the networking around it. Then you need to build the building. You need to get permitting. You need to get the energy. Maybe that means gas turbines or green energy; either way, there’s a whole supply chain of that stuff…

…Mark Zuckerberg: One of my core guiding principles in designing products is that people are smart. They know what’s valuable in their lives. Every once in a while, something bad happens in a product and you want to make sure you design your product well to minimize that. But if you think something someone is doing is bad and they think it’s really valuable, most of the time in my experience, they’re right and you’re wrong. You just haven’t come up with the framework yet for understanding why the thing they’re doing is valuable and helpful in their life…

…Mark Zuckerberg: Here’s one stat from working on social media for a long time that I always think is crazy. The average American has fewer than three friends, fewer than three people they would consider friends. And the average person has demand for meaningfully more. I think it’s something like 15 friends or something. At some point you’re like, “All right, I’m just too busy, I can’t deal with more people.” But the average person wants more connection than they have…

…Dwarkesh Patel: If China is better at physical infrastructure, industrial scale-ups, getting more power and more data centers online, how worried are you that they might beat us here?

Mark Zuckerberg: It’s a real competition. You’re seeing industrial policies really play out. China is bringing online more power. Because of that, the US really needs to focus on streamlining the ability to build data centers and produce energy. Otherwise, I think we’ll be at a significant disadvantage. At the same time, some of the export controls on things like chips, I think you can see how they’re clearly working in a way. There was all the conversation with DeepSeek about, “Oh, they did all these very impressive low-level optimizations.” And the reality is, they did and that is impressive. But then you ask, “Why did they have to do that, when none of the American labs did it?” It’s because they’re using partially nerfed chips that are the only ones NVIDIA is allowed to sell in China because of the export controls. DeepSeek basically had to spend a bunch of their calories and time doing low-level infrastructure optimizations that the American labs didn’t have to do…

…Mark Zuckerberg: We made the Llama Scout and Maverick models certain sizes for a specific reason. They fit on a host and we wanted certain latency—especially for the voice models that we’re working on—that we want to pervade everything we’re doing from the glasses to all of our apps to the Meta AI app and all that stuff. There’s a level of control of your own destiny that you only get when you build the stuff yourself…

…Mark Zuckerberg: You also asked, would it not be important anymore because other people are doing open source? On this, I’m a little more worried. You have to ask yourself this. For anyone who shows up now and is doing open source—now that we have done it—would they still be doing open source if we weren’t doing it?…

…Mark Zuckerberg: I think these models encode values and ways of thinking about the world. We had this interesting experience early on, where we took an early version of Llama and translated it. I think it was French, or some other language. The feedback we got from French people was, “This sounds like an American who learned to speak French. It doesn’t sound like a French person.” And we were like, “What do you mean, does it not speak French well?” No, it speaks French fine. It was just that the way it thought about the world seemed slightly American. So I think there are these subtle things that get built into the models. Over time, as models get more sophisticated, they should be able to embody different value sets across the world. So maybe that’s not a particularly sophisticated example, but I think it illustrates the point. Some of the models we’ve seen in testing, especially ones coming out of China, have certain values encoded in them. And it’s not just a light fine-tune to change that…

…Mark Zuckerberg: There’s a whole different set of issues around coding, which is the other verifiable domain. You need to worry about waking up one day and if you’re using a model that has some tie to another government, can it embed vulnerabilities in code that their intelligence organizations could exploit later? In some future version you’re using a model that came from another country and it’s securing your systems. Then you wake up and everything is just vulnerable in a way that that country knows about and you don’t. Or it turns on a vulnerability at some point. Those are real issues…

…Mark Zuckerberg: You can basically take a model that’s much bigger, and capture probably 90 or 95% of its intelligence, and run it in something that’s 10% of the size…
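What Zuckerberg is describing is model distillation: training a small "student" model to match a large "teacher." Below is a minimal sketch of the classic soft-label distillation objective, assuming PyTorch; it is our illustration, not Meta's actual training code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Push the student's output distribution toward the teacher's.

    Softening both distributions with a temperature lets the student learn
    the teacher's relative preferences among wrong answers too, not just
    its top prediction.
    """
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # kl_div expects log-probabilities as input and probabilities as target;
    # the temperature**2 factor keeps gradients comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 4 examples over a 10-token vocabulary.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
print(distillation_loss(student, teacher))
```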

…Mark Zuckerberg: There are going to be business models at each point along the spectrum. At Meta, for the consumer piece we definitely want to have a free thing. I’m sure that will end up being ad-supported. But I also think we’re going to want to have a business model that supports people using arbitrary amounts of compute to do even more amazing things than what it would make sense to offer in the free service. For that, I’m sure we’ll end up having a premium service…

…Mark Zuckerberg: AI is interesting because, more than some of the other stuff that we do, it is more research and model-led than really product-led. You can’t just design the product that you want and then try to build the model to fit into it. You really need to design the model first and the capabilities that you want, and then you get some emergent properties. Then it’s, “Oh, you can build some different stuff because this turned out in a certain way.” At the end of the day, people want to use the best model…

…Dwarkesh Patel: Will tariffs increase the cost of building data centers in the US and shift buildouts to Europe and Asia?

Mark Zuckerberg: It is really hard to know how that plays out. I think we’re probably in the early innings on that, and it’s very hard to know…

…Mark Zuckerberg: We have almost three and a half billion people using our services every day. One question we’ve struggled with forever is how do we provide customer support? Today, you can write an email, but we’ve never seriously been able to contemplate having voice support where someone can just call in. I guess that’s maybe one of the artifacts of having a free service. The revenue per person isn’t high enough to have an economic model where people can call in… But let’s say AI can handle 90% of that. Then if it can’t, it kicks it off to a person. If you get the cost of providing that service down to one-tenth of what it would’ve otherwise been, then maybe now it actually makes sense to do it. That would be cool. So the net result is that I actually think we’re probably going to hire more customer support people. The common belief is that AI will automate jobs away. But that hasn’t really been how the history of technology has worked. Usually, you create things that take away 90% of the work, and that leads you to want more people, not less.
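The unit economics here are easy to sanity-check. A back-of-envelope sketch: the 90% resolution rate comes from the quote, while both per-call costs are assumptions we made up.

```python
cost_per_human_call = 10.00  # assumed fully loaded cost of a human-handled call
cost_per_ai_call = 0.10      # assumed inference cost of an AI-handled call
ai_resolution_rate = 0.90    # "AI can handle 90% of that"

# Every call hits the AI first; the 10% that escalate incur the human cost too.
blended = cost_per_ai_call + (1 - ai_resolution_rate) * cost_per_human_call
print(f"Blended cost per call: ${blended:.2f}")  # $1.10, roughly a tenth of $10.00
```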

5. The Best OTC Investment Story Never Told – Joe Raymond

MN&C started making a market in Best Lock (BLOC) in the mid-1970s.

Market makers provide bids and offers on select stocks, facilitating trading and liquidity. They earn a profit on the spread (the difference between the bid and offer prices)…

…The mid-1970s was a good time to find bargains, and BLOC certainly looked like a bargain. It was trading for around 3-5x earnings and a discount to book value.

Best Lock was a simple business. It designed, manufactured, and marketed lock mechanisms, primarily for doors…

…The annual report showed over 4,000 shareholders of record, yet MN&C was only getting a few orders a year.

Where were all the shareholders and why wasn’t there more volume in the stock?…

…Best Lock was founded in Seattle in 1922 by Frank E. Best.

Like many startups in the 1920s, shares were sold door-to-door to average citizens.

When the Depression hit, Best Lock stopped paying dividends. Then the company moved its headquarters from Seattle to Indianapolis to be closer to suppliers and customers…

…By the late 1970s when Martin was looking at the shareholder list, nearly 50 years had passed and the company was again profitable, growing, and paying dividends.

Between the Depression, World War II, the move to Indianapolis, and the Seattle address overhaul, the company had lost track of many shareholders.

In many cases, heirs had no idea they inherited the stock…

…Martin knew he had an opportunity on his hands: an illiquid stock with lost shareholders trading for a low-single-digit P/E multiple.

He decided to form a new company dedicated to finding the rightful owners of these shares. This involved genealogical research and many hours spent at the local library and county records office…

…Best Lock was trading for around $30 per share at the time, so after his one-third fee Martin was buying the shares for around $20. This equated to 2-3x earnings.

Over time, Martin was able to acquire roughly 15% of the float (shares not held by the Best family) using this approach…

…A few years after taking full control, Russell decided to take the company private.

He did this through a series of reverse splits in 1998 that effectively cashed everyone out for $525 per share—a high-single-digit multiple of earnings. The stock had been trading for $300 prior to the reverse splits, so the cash-out price was a nice 75% premium.

A group of minority shareholders dissented and perfected their appraisal rights in Delaware—arguing that Russell Best had violated his fiduciary duty, and that the $525/share figure was too low for a company of Best Lock’s caliber.

At some point in the legal process, Russell decided to explore a sale of the entire company.

Stanley Black & Decker stepped up to the plate and offered $310 million to buy Best Lock (more than triple the reverse split takeout price). Final payout for the dissenting shareholders was received in April 2003.

Those initial shares Martin was buying for $20 in 1980 turned into $1,597 in 2003, good for a CAGR of 20% before dividends over the 23-year period.
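As a quick check on the compounding, using the figures in the excerpt (the precise result comes out closer to 21%):

```python
start_price, end_price, years = 20, 1_597, 23  # 1980 purchase to 2003 payout
cagr = (end_price / start_price) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~21.0%, in line with the article's roughly-20% figure
```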


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Apple and Meta Platforms. Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com