What We’re Reading (Week Ending 27 July 2025) - 27 Jul 2025
Reading helps us learn about the world and it is a really important aspect of investing. The late Charlie Munger even went so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 27 July 2025):
1. Introducing pay per crawl: Enabling content owners to charge AI crawlers for access – Will Allen and Simon Newton
Many publishers, content creators and website owners currently feel like they have a binary choice — either leave the front door wide open for AI to consume everything they create, or create their own walled garden. But what if there was another way?…
…We believe your choice need not be binary — there should be a third, more nuanced option: You can charge for access. Instead of a blanket block or uncompensated open access, we want to empower content owners to monetize their content at Internet scale…
…Pay per crawl, in private beta, is our first experiment in this area.
Pay per crawl integrates with existing web infrastructure, leveraging HTTP status codes and established authentication mechanisms to create a framework for paid content access…
…At its core, pay per crawl begins a technical shift in how content is controlled online. By providing creators with a robust, programmatic mechanism for valuing and controlling their digital assets, we empower them to continue creating the rich, diverse content that makes the Internet invaluable.
We expect pay per crawl to evolve significantly. It’s very early: we believe many different types of interactions and marketplaces can and should develop simultaneously. We are excited to support these various efforts and open standards.
For example, a publisher or news organization might want to charge different rates for different paths or content types. How do you introduce dynamic pricing based not only upon demand, but also on how many users your AI application has? How do you introduce granular licenses at internet scale, whether for training, inference, search, or something entirely new?
The true potential of pay per crawl may emerge in an agentic world. What if an agentic paywall could operate entirely programmatically? Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho — and then giving that agent a budget to spend to acquire the best and most relevant content. By anchoring our first solution on HTTP response code 402, we enable a future where intelligent agents can programmatically negotiate access to digital resources.
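To make the 402 mechanism concrete, here is a minimal sketch of our own (not code from Cloudflare) of how a budget-constrained agent could negotiate such a paywall. The header names and the payment flow are illustrative assumptions, not the actual API:

```python
import requests

# Illustrative sketch of an agent negotiating a pay-per-crawl paywall.
# The header names ("crawler-price", "crawler-max-price") and the flow
# are our assumptions for illustration, not Cloudflare's definitive API.

def fetch_with_budget(url: str, budget_usd: float) -> str | None:
    resp = requests.get(url)
    if resp.status_code == 200:
        return resp.text  # content is free (or already licensed)

    if resp.status_code == 402:
        # 402 Payment Required: the server advertises its asking price.
        price = float(resp.headers.get("crawler-price", "inf"))
        if price <= budget_usd:
            # Retry, signalling the maximum price the agent will accept.
            paid = requests.get(url, headers={"crawler-max-price": str(price)})
            if paid.status_code == 200:
                return paid.text
    return None  # too expensive, or access refused

# Give the agent a five-cent budget for this document.
content = fetch_with_budget("https://example.com/research-article", 0.05)
```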
2. How It’s Done – Doomberg
Among the critical minerals China has successfully cornered are the rare earth metals, and the primary means by which it achieved near-total dominance was by capturing the step at which the mined material—a concentrated mix of many valuable metals—is purified into individual components suitable for use in various military and industrial applications. Copious amounts of waste are produced along that processing journey, and treating such waste to Western standards became economically unfeasible at the market prices that prevailed after China entered the field. Last week, The New York Times caught on to how the game is played:
“Chinese mines and refineries produce most of the world’s rare earth metals and practically all of a few crucial kinds of rare earths. This has given China’s government near complete control over a critical choke point in global trade. But for decades in northern China, toxic sludge from rare earth processing has been dumped into a four-square-mile artificial lake. In south-central China, rare earth mines have poisoned dozens of once-green valleys and left hillsides stripped to barren red clay.”…
…With free markets clearly failing to price environmental and national security concerns—let alone the convergence of both—a completely new approach was needed to address the rare earth vulnerability. Last week brought the announcement of just such a move:
“The Defense Department will become the largest shareholder in rare-earth mining company MP Materials by buying $400 million of its stock and helping it build a new processing facility to sidestep the Chinese market, the company said Thursday. The deal underscores how far the Trump administration is willing to go to subsidize production of high-powered magnets, a field dominated by Chinese firms although the materials are critical for U.S. weapons systems.
Las Vegas-based MP Materials owns the only rare-earth mine in the United States, at Mountain Pass, California, near the Nevada border. MP Materials CEO Jim Litinsky said the company aims to restore the full rare-earth supply chain in the U.S. and eliminate a ‘single point of failure’ in the country’s military-industrial base.”
Perusing the company’s press release and other corporate filings, the details of the creative deal become clear. The Pentagon is taking a holistic approach to the objective, investing the capital needed for MP Materials to construct domestic processing and magnet production facilities while also putting a floor price under the company’s products that accounts for the cost of proper environmental stewardship:
“DoD has entered into a 10-year agreement establishing a price floor commitment of $110 per kilogram for MP Materials’ NdPr products stockpiled or sold, reducing vulnerability to non-market forces and ensuring stable and predictable cash flow with shared upside.
For a period of 10 years following the construction of the 10X Facility, DoD has agreed to ensure that 100% of the magnets produced at the 10X Facility will be purchased by defense and commercial customers with shared upside.”
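The arithmetic of a price floor is worth making explicit. Here is a stylized sketch of our own, assuming a simple top-up settlement (our simplification; the excerpt does not spell out the contract mechanics, and the volumes below are hypothetical):

```python
# Stylized price-floor illustration. The simple "top-up" settlement and
# the volumes below are our assumptions, not terms from the agreement.

FLOOR_USD_PER_KG = 110.0  # the contracted NdPr floor price

def floor_top_up(market_price_usd: float, kg_sold: float) -> float:
    """Payment that lifts MP Materials' realized price up to the floor."""
    shortfall = max(0.0, FLOOR_USD_PER_KG - market_price_usd)
    return shortfall * kg_sold

# Hypothetical quarter: 1,000 kg sold at a market price of US$60/kg
# -> a US$50/kg top-up, or US$50,000 of support for the quarter.
print(floor_top_up(market_price_usd=60.0, kg_sold=1_000))  # 50000.0
```

Whatever the exact settlement mechanics, the effect is the same: downside from depressed, non-market pricing is capped, which makes the cash flows predictable enough to finance new capacity.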
3. Could AI slow science? – Sayash Kapoor and Arvind Narayanan
It’s a common-sense view, at least among technologists, that AI will speed science greatly as it gets adopted in every part of the scientific pipeline — summarizing existing literature, generating new ideas, performing data analyses and experiments to test them, writing up findings, and performing “peer” review…
…The impact of AI on science could be counterintuitive. Even if individual scientists benefit from adopting AI, it doesn’t mean science as a whole will benefit…
… So far, on balance, AI has been an unhealthy shock to science, stretching many of its processes to the breaking point.
Any serious attempt to forecast the impact of AI on science must confront the production-progress paradox. The rate of publication of scientific papers has been growing exponentially, increasing 500-fold between 1900 and 2015. But actual progress, by any available measure, has been constant or even slowing. So we must ask how AI is impacting, and will impact, the factors that have led to this disconnect.
Our analysis in this essay suggests that AI is likely to worsen the gap. This may not be true in all scientific fields, and it is certainly not a foregone conclusion…
…There’s something suboptimal about the way we’ve structured the practice of science, and so the efficiency of converting scientific inputs into progress is dropping. In particular, one subset of hypotheses flags the increase in the rate of production itself as the causal culprit — science is slowing down because it is trying to go too fast.
How could this be? The key is that any one scientist’s attention is finite, so they can only pay attention to a limited number of papers every year. So it is too risky for authors of papers to depart from the canon. Any such would-be breakthrough papers would be lost in the noise and won’t get the attention of a critical mass of scholars. The greater the rate of production, the greater the noise, the less attention truly novel papers receive, and the less likely they are to break through into the canon…
…Another causal mechanism relates to scientists’ publish-or-perish incentives. Production is easy to measure, and progress is hard to measure. So universities and other scientific institutions judge researchers based on measurable criteria such as how many papers they publish and the amount of grant funding they receive. It is not uncommon for scientists to have to publish a certain number of peer-reviewed papers to be hired or to get tenure (either due to implicit norms or explicit requirements)…
…This completes the feedback loop: career incentives lead to researchers publishing more papers, and disincentivize novel research that results in true breakthroughs (but might only result in a single paper after years of work).
If slower progress is indeed being caused by faster production, how will AI impact it? Most obviously, automating parts of the scientific process will make it even easier for scientists to chase meaningless productivity metrics. AI could make individual researchers more creative but decrease the creativity of the collective because of a homogenizing effect. AI could also exacerbate the inequality of attention and make it even harder for new ideas to break through…
…The AI community often advertises AI as a silver bullet without realizing how difficult it is to detect subtle errors. Unfortunately, it takes much less competence to use AI tools than to understand them deeply and learn to identify errors. Like other software-based research, errors in AI-based science can take a long time to uncover. If the widespread adoption of AI leads to researchers spending more time and effort conducting or building on erroneous research, it could slow progress, since researcher time and effort are wasted in unproductive research directions.
Unfortunately, we’ve found that AI has already led to widespread errors. Even before generative AI, traditional machine learning led to errors in over 600 papers across 30 scientific fields. In many cases, the affected papers constituted the majority of the surveyed papers, raising the possibility that in many fields, the majority of AI-enabled research is flawed…
…Older modeling techniques required coming up with a hypothesis for how the world works, then using statistical models to make inferences about this hypothesis.
In contrast, AI-based modeling treats this process as a black box. Instead of making a hypothesis about the world and improving our understanding based on the model’s results, it simply tries to improve our ability to predict what outcomes would occur based on past data…
…AI-based modeling is no doubt helpful in improving predictive accuracy. But it doesn’t lend itself to an improved understanding of these phenomena. AI might be fantastic at producing the equivalents of epicycles across fields, leading to the prediction-explanation fallacy.
In other words, if AI allows us to make better predictions from incorrect theories, it might slow down scientific progress if this results in researchers using flawed theories for longer. In the extreme case, fields would be stuck in an intellectual rut even as they excel at improving predictive accuracy within existing paradigms…
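The contrast between the two workflows is easy to see in code. Below is a toy sketch of our own on synthetic data: the first model’s fitted coefficient maps directly onto a hypothesis about the world, while the second is judged purely on predictive accuracy:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data: the outcome depends linearly on x1 and not at all on x2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=500)

# Hypothesis-driven workflow: the parameters map onto a theory
# ("y rises by ~2 units per unit of x1"), so fitting the model tests it.
ols = LinearRegression().fit(X, y)
print("coefficients:", ols.coef_)  # ~[2.0, 0.0] supports the hypothesis

# Black-box workflow: optimize prediction; the fitted object makes no
# comparable statement about *why* y behaves the way it does.
forest = RandomForestRegressor(random_state=0).fit(X, y)
print("R^2:", forest.score(X, y))  # high accuracy, little explanation
```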
…Researchers across fields are incentivized to find solutions to scientific problems. But this incentive only leads to progress because the process of proving theorems or finding solutions to problems also leads to building human understanding. As the desertion of work on foliations shows, when there is a mismatch between finding solutions to problems and building human understanding, it can result in slower progress.
This is precisely the effect AI might have: by solving open research problems without leading to the accompanying understanding, AI could erode these useful byproducts by reducing incentives to build understanding. If we use AI to short-circuit this process of understanding, that is like using a forklift at the gym. You can lift heavier weights with it, sure, but that’s not why you go to the gym…
…If we use AI to bypass human understanding, or worse, retain only illusions of understanding, we might lose the ability to train new scientists, develop new theories and paradigms, synthesize and correct results, apply knowledge beyond science, or even generate new and interesting problems.
Empirical studies across scientific fields have found evidence for some of these effects. For example, Hao et al. collect data from six fields and find that papers that adopt AI are more likely to focus on providing solutions to known problems and working within existing paradigms rather than generating new problems.
4. AI Comes Up with Bizarre Physics Experiments. But They Work – Anil Ananthaswamy
In the classical physics that describes our everyday world, objects have well-defined properties that are independent of attempts to measure those properties: A billiard ball, for example, has a particular position and momentum at any given moment in time.
In the quantum world, this isn’t the case. A quantum object is described by a mathematical entity called the quantum state. The best one can do is to use the state to calculate the probability that the object will be, say, at a certain location when you look for it there.
What is more, two (or more) quantum objects can share a single quantum state. Take light, which is made of photons. These photons can be generated in pairs that are “entangled,” meaning that the two photons share a single, joint quantum state even if they fly apart. Once one of the two photons is measured, the outcome seems to instantaneously determine the properties of the other — now distant — photon.
For decades, physicists assumed that entanglement required quantum objects to start out in the same place. But in the early 1990s, Anton Zeilinger, who would later receive the Nobel Prize in Physics for his studies of entanglement, showed that this wasn’t always true. He and his colleagues proposed an experiment that began with two unrelated pairs of entangled photons. Photons A and B were entangled with each other, as were photons C and D. The researchers then devised a clever experimental design made of crystals, beam splitters and detectors that would operate on photons B and C — one photon from each of the two entangled pairs. Through a sequence of operations, the photons B and C get detected and destroyed, but as a by-product, the partner particles A and D, which had not previously interacted, become entangled. This is called entanglement swapping, which is now an important building block of quantum technology.
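For readers who want to see why this works, the standard identity behind entanglement swapping (our addition; the article states the result without the math) rewrites two independent Bell pairs in the Bell basis of photons B and C:

```latex
% Two independent Bell pairs, re-expressed in the Bell basis of (B, C).
% A Bell measurement on B and C projects A and D into the matching Bell
% state, entangling two photons that never interacted.
\[
|\Phi^+\rangle_{AB}\,|\Phi^+\rangle_{CD}
  = \tfrac{1}{2}\big(
      |\Phi^+\rangle_{AD}\,|\Phi^+\rangle_{BC}
    + |\Phi^-\rangle_{AD}\,|\Phi^-\rangle_{BC}
    + |\Psi^+\rangle_{AD}\,|\Psi^+\rangle_{BC}
    + |\Psi^-\rangle_{AD}\,|\Psi^-\rangle_{BC}
    \big)
\]
```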
That was the state of affairs in 2021, when Mario Krenn’s team started designing new experiments with the aid of software they dubbed PyTheus…
…The team represented optical experiments using mathematical structures called graphs, which are composed of nodes connected by lines called edges. The nodes and edges represented different aspects of an experiment, such as beam splitters, the paths of photons, or whether or not two photons had interacted.
Krenn’s team started by first building a very general graph, one that modeled the space of all possible experiments of some size. The graph had output features that represented some desired quantum state…
…The question, then, was how to modify all the other parts of the graph to produce this state. To figure this out, the researchers formulated a mathematical function. It took in the state of the graph and calculated the difference between the output of the graph and the desired quantum state. They then iteratively modified the graph’s parameters, which represented the experimental configuration, to reduce this discrepancy to zero.
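As a rough sketch of this loop (our own toy version, not the actual PyTheus code): treat the graph’s edge weights as free parameters, define a loss that measures the mismatch between the state the graph produces and the target state, and minimize it numerically:

```python
import numpy as np
from scipy.optimize import minimize

# Toy version of the optimization loop described above (our simplification,
# not the real PyTheus package). Edge weights parameterize the experiment;
# we minimize the mismatch between the produced and the desired state.

TARGET = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # a Bell-like target

def produced_state(weights: np.ndarray) -> np.ndarray:
    """Stand-in for the graph -> quantum-state map (here: just normalize)."""
    return weights / np.linalg.norm(weights)

def loss(weights: np.ndarray) -> float:
    # 1 - fidelity with the target: zero exactly when the graph emits it.
    fidelity = np.dot(produced_state(weights), TARGET) ** 2
    return 1.0 - fidelity

start = np.random.default_rng(1).normal(size=4)  # random initial configuration
result = minimize(loss, start)  # iteratively adjust the graph's parameters
print(result.x, loss(result.x))  # a configuration that reproduces the target
```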
When Krenn’s student Soren Arlt tried to use this approach to find the best way to do entanglement swapping, he noticed that the experimental configuration was unrecognizable — nothing at all like Zeilinger’s design from 1993. “When he showed it to me, we were confused,” Krenn said. “I was convinced that it must be wrong.”
The optimization algorithm had borrowed ideas from a separate area of study called multiphoton interference. By doing so, it created a simpler configuration than Zeilinger’s. Krenn’s team then did a separate mathematical analysis of the final design. It confirmed that the new experimental design would in fact create entanglement among particles with no shared past.
In December 2024, a team in China led by Xiao-Song Ma of Nanjing University confirmed it. They built the actual experiment, and it worked as intended.
5. Get Smart: How to Profit in a Fast-Moving Stock Market – Chin Hui Leong
Here’s the good news: when it comes to investing, the winner is not always the one with the fastest fingers.
While news may reach your eyes faster than ever, actual change in a business takes time to materialise.
Thus, even if you react faster, it doesn’t necessarily mean you will be right.
Need an example?
In my Business Times article last Wednesday, I highlighted how the initial hype over DeepSeek in late January 2025 has largely died down.
In the process, those who sold Nvidia (NASDAQ: NVDA) right after the DeepSeek news broke will be rueing the fact that the GPU provider has since delivered year-on-year revenue growth of 78% and 69% in the past two quarters.
In turn, shares have risen by nearly 45% from their January low…
…In other words, slowing down, taking your time to assess the situation, and listening to the contrasting arguments will lead to better outcomes…
…But what if a threat turns out to be real and you were right to sell?
It’s possible, of course.
Here’s a common narrative: BlackBerry’s (NYSE: BB) reign as the go-to device in the corporate world was cut short by the rapid rise in popularity of Apple’s (NASDAQ: AAPL) iPhone and Alphabet’s (NASDAQ: GOOGL) Android…
…It’s easy to assume that the decline was immediate, but the opposite is true.
Between fiscal 2007 and fiscal 2011, the Canadian company’s sales actually soared more than sixfold, from US$3 billion to almost US$20 billion.
In other words, BlackBerry experienced a period of tremendous growth for over four years before its business began to falter.
Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet and Apple. Holdings are subject to change at any time.