What We’re Reading (Week Ending 26 October 2025)

Reading helps us learn about the world and it is a really important aspect of investing. The late Charlie Munger even went so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 26 October 2025):

1. Sanity Check – The Brooklyn Investor

When I say that if 10-year Treasury yields stay at around 4%, then the market should average a P/E of around 25x over time, it sounds crazy, but given that the market has traded at 22-23x P/E in the last 35 years, this is not so crazy to me.

Of course, one can come back to me and say, well, as much as 22-23x P/E was shocking in 1990, who’s to say that the market can’t shock us again, going back to 6-7% interest rates and a 14x P/E ratio over the next 20 years? This is also true. I can’t say that can’t happen. But I’ve always said that I think a 4% or so 10-year rate seems reasonable given 4% nominal GDP growth over time.

So, given that, how does the market look today? The market today looks like it is priced correctly. The 10-year Treasury rate is 4% today, and the S&P 500 index P/E is 25.5x, almost exactly where it should be according to the model. Next year’s estimated P/E is 22x.

In past bubbles, the rubber band was stretched. The table below is from an earlier post. Just before Black Monday, the rubber band was stretched as 10-year rates spiked to close to 10% while the earnings yield declined to 4.7%, creating a near 5% gap. On a price basis, the market was overvalued by 100%! During the internet bubble, the gap increased to 1.5% and the market was overpriced by 40%. Today, there is no stretch in the rubber band…
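
To make the excerpt’s “rubber band” arithmetic concrete, here is a minimal sketch in Python. The earnings yield is simply 1/PE, so a 25.5x P/E corresponds to a roughly 4% earnings yield, and the “stretch” is the gap between the 10-year Treasury yield and that earnings yield; all figures are the ones quoted above:

```python
# Minimal sketch of the "rubber band": the gap between the 10-year
# Treasury yield and the market's earnings yield (1 / P/E).
def stretch(earnings_yield: float, ten_year_yield: float) -> float:
    """Gap between the 10-year Treasury yield and the earnings yield."""
    return ten_year_yield - earnings_yield

pe_today = 25.5
print(f"Today:        {stretch(1 / pe_today, 0.04):+.2%}")  # ~+0.1%: no stretch
print(f"Black Monday: {stretch(0.047, 0.10):+.2%}")         # ~+5.3% gap
```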

…So, the other thing is all this talk about an AI bubble. It is really interesting and I have no idea what is going to happen. But there seem to be two extreme views that both can’t be right. On the one hand, some people fear all these trillions being invested into AI infrastructure (energy, data centers etc.) will not offer decent returns on investment as there is still very little revenue associated with many of these big AI models. On the other hand, there is a big fear that AI will wipe out entire industries. There are already reports that huge increases in productivity are being actualized in the coding world, so much so that the word is that entry-level computer science positions are completely gone, wiped out. Big tech has also been firing a lot of engineers as they are replaced by AI.

They both can’t be right.

Here’s some bogus math, just as a sanity check too. Let’s say AI replaces 10% of jobs in the U.S. There are 160 million workers in the U.S. Cutting 10% of them means 16 million jobs gone. Many of these replaced jobs will be office jobs (well, AI will eventually replace Uber and truck drivers, farmers, factory workers too). Let’s say office workers cost companies $100K / year, including benefits. I’ve heard this somewhere before. That’s $1.6 trillion in expenses that you can cut. How much would you be willing to invest to cut $1.6 trillion?

You are now talking about trillions of dollars in investments. Now, all those numbers people throw around don’t sound so silly anymore. Of course, you can’t just say spending $10 trillion to eliminate $1.6 trillion is a 16% return on investment, as AI costs money to keep running / maintaining and you need to replace servers every 3-5 years etc. But still, you start to see the magnitude of what can happen if AI really starts to replace workers.
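
As a back-of-the-envelope check on the “bogus math” above, here is a minimal sketch in Python using the same round numbers; the $10 trillion investment is the hypothetical from the excerpt, and the 16% figure is the simple ratio before running, maintenance, and server-replacement costs:

```python
# The excerpt's "bogus math", spelled out.
workers = 160e6          # US workforce
share_replaced = 0.10    # 10% of jobs replaced by AI
cost_per_worker = 100e3  # fully loaded cost of an office worker, $/year

annual_savings = workers * share_replaced * cost_per_worker
investment = 10e12       # hypothetical AI buildout spend from the excerpt

print(f"Jobs replaced:  {workers * share_replaced / 1e6:.0f} million")  # 16 million
print(f"Annual savings: ${annual_savings / 1e12:.1f} trillion")         # $1.6T
print(f"Simple return:  {annual_savings / investment:.0%}")             # 16%
```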

2. How Silver Flooded the World – Tomas Pueyo

A century earlier, around 1350, the Black Death spread across Europe, killing 25-50% of its population.

Men were dying, but coins were not.—David Herlihy

For a century, there was more coin than people, so they didn’t notice when silver and gold production slowed down. But it did; first, because of fewer miners.

Second, because mines ran out of gold and silver.

Third, the supply of gold from Africa collapsed after the Mali Empire civil war in the 1360s and the Songhai Empire instability.

Fourth, mines in southeastern Europe, in Serbia and Bosnia, fell to the Ottoman Empire.

So new sources of silver and gold shrank. Meanwhile, silver sinks continued. Europeans kept buying Chinese silks, Indian cotton cloth, dyes, and spices, Middle Eastern sugar and drugs… But Europeans had little to export: wine, slaves, wood, salt, and little more. Italian traders paid one third in merchandise and two thirds in precious metals.

As silver and gold became scarcer, people started debasing the currency: diluting it with other metals, clipping its edges…

Between the Black Death, the scarcity of metals, the debasement of currency, the incessant warfare, and taxes, people did everything they could to hoard and hide their precious metals, whether through hidden coins, filled chests, plates, and any other conceivable way…

…When we say some resource is exhausted, what we generally mean is… with current technology. People abandoned mines when they couldn’t figure out how to reach more ore, or when they couldn’t get more metal out of them.

One of the most typical issues was that ore is in mountains, but mountains also have something else: rain. Mining shafts would get flooded, so mining was restricted to the surface…

…Romans knew about waterwheels and pumps, but they never used them for extracting water out of mines. Central Europeans put them together into ever more complex systems to dry up mines and extract more ore…

…But there were two significant innovations that allowed Europe to increase its silver production by 5x between the 1460s and the 1540s.

Both innovations were new processes to extract more silver from ore. The first one is called liquation, and was first discovered in southern Germany in the mid-1400s, just as the Great Bullion Famine was hitting hardest. Of course, that’s not a coincidence: It was the bullion famine that was spurring mining innovation. Within 15 years, it had spread throughout Germany, Poland and the Italian Alps…

…We’ve already explored how Portugal’s discovery of an alternative path to Asia was made to bypass the Ottomans, who had taken control of Istanbul and blocked Christian trade through the Silk Road. But that trade still required gold and silver, and Europe didn’t have any. So the Portuguese were also looking for gold and silver deposits to mine. They found some in Western Africa—remember the Mali Empire—but that was not enough.

Now you know why Spanish Conquistadors were so obsessed with finding gold and silver in the Americas. It was not just a matter of greed. It was an existential matter for Europeans after the Great Bullion Famine. This is why Columbus mentioned gold 65 times in his diaries!

Spaniards didn’t find much gold in the Americas, but they did find silver. Unfortunately, the high-quality silver ore quickly ran out, and Spaniards were left with ore that didn’t contain enough silver to be extracted.

That’s when they invented a new technique to get more silver from the lower quality ore: amalgamation, via the patio process.

3. Microsoft’s Cloud & AI Head on the AI Buildout’s Risks and ROI (Transcript Here) – Alex Kantrowitz and Scott Guthrie

Alex Kantrowitz: I totally understand that, but I have to go back to the diminishing returns of training question. Where do you stand on that?

Scott Guthrie: If you look at training broadly, I think you’re going to continue to see more value from the models by doing more training. But going back to my answer earlier, I don’t know if that’s always going to be pre-training. I think increasingly lots of post-training activities are going to significantly change the value of the model. By post-training I mean take the base model and how do you add financial data or healthcare data or something that’s very specific to an application or a use case.

What’s nice about post-training is that you don’t have to do it in one large data center in one location. Part of the technique that we’ve been focused on is how do we take this inferencing capacity around the world and a lot of it is idle at night as people go to sleep. How are we doing increasingly post-training in a distributed fashion across many different sites? Then when employees come to work in the morning, we serve the applications. Having that kind of flexibility and being able to dynamically schedule your AI infrastructure so that you’re maximizing revenue generation and training ideally in a very swappable dynamic way—I think is one of the things we’re investing in heavily and I think is one of the differentiators for Microsoft.

Alex Kantrowitz: Okay, but you’ll forgive me for going back to this scaling pre-training question. I’m just trying to see what you believe here. You haven’t said it outright, but from your answers, it does seem to me like you believe that spending wildly on scaling pre-training is a bad bet.

Scott Guthrie: I wouldn’t necessarily say that. I think we’ve definitely seen as the scale infrastructure for pre-training has gotten bigger, we are seeing the models continually improve and we’re investing in those types of pre-training sites and infrastructure. We recently, for example, announced our Fairwater data center regions around the US. We have multiple Fairwaters. We did a blog post recently of one of our new sites in Wisconsin. These are hundreds of megawatts, hundreds of thousands of the latest GB200 and GB300 GPUs. We think it’s the largest contiguous block of GPUs anywhere in the world in one giant training infrastructure that can be used for pre-training. We’re investing heavily in that, as you could see from the photos from the sky in terms of massive infrastructure. We do continue to see the scaling laws improve.

Now will the scaling laws improve linearly? Will they improve at the rate that they have? I think that is a question that everyone right now in the AI space is still trying to calculate. But do I think they’ll improve? Yes. The question really around what’s the rate of improvement on pre-training? I do think with post-training, we’re going to continue to see dramatic improvements. That’s again why we’re trying to make sure we have a balanced investment both on pre-training and post-training infrastructure.

Alex Kantrowitz: Just to parse your words here, you can see improvement by doubling the data center, but that’s why I use the word bet—because are you going to get the same return if it doesn’t improve exponentially and just improves on the margins? That I think is the big question right now, right?

Scott Guthrie: It’s a big question. The thing that also makes it the big question is it’s not like a law of nature that’s immovable. There could be one breakthrough that actually changes the scaling laws for better, and there could be a lack of breakthroughs that means things will still improve but do they improve at the same rate that they historically did from a raw size and scale perspective? That is the trillion dollar question…

…Scott Guthrie: Yeah, going back to the comments we had earlier on balance, I think as you think about your GPU buildout, one of the things that we think about is the lifetime of the GPU and how we use it. What you use it for in year one or two might be very different than how you use it in year three, four, five, or six. So far we’ve always been able to use our GPUs, even ones that we deployed multiple years ago, for different use cases and get positive ROI from it. That’s why our depreciation cycle for GPUs is what it is…

…Scott Guthrie: If you are for example building one large data center that only does training and it’s not connected to a wide area network around the world that’s close to the users, it’s hard to use that same infrastructure for inferencing because you can’t go faster than the speed of light. Someone elsewhere around the world that wants to call that GPU—if you don’t have the network to support it, you can’t use it for those inferencing needs…

…Alex Kantrowitz: Okay. All right. It’s good to get something definitive on that. You mentioned your 39% Azure growth. I’m looking at your quarterly numbers every quarter and often talking about them on CNBC and the numbers are massive. The other side of it though is that’s spend coming from clients, right? There have been multiple studies that have come out recently that have talked about how enterprises aren’t getting the ROI that they’ve anticipated on their AI projects yet. When you see those studies, do they ring true to you? How do you react to them?

Scott Guthrie: I think when you say AI in general, it’s a very broad statement.

Alex Kantrowitz: This is in large part generative AI where companies everywhere have tried to adopt LLMs and try to put some version of that into play. It’s not recommender engines basically.

Scott Guthrie: But I think what you need to do is double-click even further from GenAI to GitHub Copilot or healthcare or Microsoft 365 Copilot or security products built with GenAI. I do think ultimately, the closer you can double-click on whether this is really delivering ROI, the more precise your data becomes.

I do think a lot of companies have dabbled or done internal proof of concepts and some of them have paid off and some of them haven’t. But I think ultimately a lot of the solutions that are paying off that we continually hear from our clients and our customers are a bunch of the applications that we’ve built. Similarly, a bunch of the applications that our partners have built on top of us. Ultimately the Azure business is consumption-based, meaning if people aren’t actually running something, we don’t get paid. It’s not like they’re pre-buying a ton of stuff. We recognize our revenue based on when it’s used.

The good news is when you look at our revenue growth, it’s not a bookings number. It’s actually a consumption number. You can tell that people are consuming more. The last two quarters, our revenue growth has accelerated on a big number. That is a statement of the fact that I think people are getting a lot of ROI, at least with the projects that they’re running on top of our cloud…

…Scott Guthrie: I think increasing the number of tokens you can get per watt per dollar is going to be the game over the next couple years. Maximizing the ability of our cloud to deliver the best volume of tokens for every watt of power, for every dollar that’s spent—where the dollar is spent on energy, it’s spent on the GPUs, it’s spent on the data center infrastructure, it’s spent on the network, and it’s spent on everything else—is the thing that we’re laser-focused on. There’s a bunch of steps as part of that, GPUs being a critical component of it.
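
To illustrate the “tokens per watt per dollar” framing Guthrie describes, here is a minimal sketch in Python; the sample numbers are entirely hypothetical, purely to show the bookkeeping of tracking token output against power and total spend:

```python
# Hypothetical illustration of tracking token output per watt and per
# dollar of total spend (energy + GPUs + data center + network).
tokens_served = 1.0e12    # tokens served over some period (hypothetical)
avg_power_w = 50.0e6      # average fleet power draw in watts (hypothetical)
total_spend_usd = 80.0e6  # all-in cost over the same period (hypothetical)

print(f"tokens per watt:   {tokens_served / avg_power_w:,.0f}")
print(f"tokens per dollar: {tokens_served / total_spend_usd:,.0f}")
```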

One of the things that our scale gives us the ability to do is to invest for nonlinear improvements in that type of productivity and that type of yield. If you’ve got a million dollars of revenue on a couple hundred GPUs, you’re not going to be investing in custom silicon. When you’re at our scale, you will be. You’re not just investing in custom silicon for GPUs for pre-training or for inferencing. You’re looking at what could we be doing for synthetic data generation with silicon. What can we be doing from a network compression perspective with custom silicon? What can we be doing from a security perspective?

We have bets across all of those, many of which are now in production and are actually powering a lot of these AI experiences. In fact, I think every GPU server that we’re running in the fleet right now is using custom silicon at the networking, compression, storage layer that we’ve built. The GPUs themselves are also going to be a prize that people are going to try to optimize—the actual instructions for doing the GPUs.

Nvidia is a fantastic partner of ours. We’re probably one of, if not the biggest customer in the world of theirs. We partner super deeply with Jensen and his team. At the same time, part of why they’re so successful is that they’re executing incredibly well. If you look at the history of silicon, it’s rare to have a silicon company that every single year is doing the absolute perfect work that’s differentiated. Kudos to Jensen for what he’s done, and I know he’s going to keep trying to do it going forward. But there will be other opportunities from other companies where people are going to look for a niche that’s going to be big enough in this AI space to be truly differentiated versus what Nvidia is delivering. Then we’re doing our own silicon investment in-house because we’re going to be going after those same opportunities.

Ultimately, the way we’ve tried to build our infrastructure, none of our customers know when they’re using Microsoft 365 or GitHub or any open models what silicon they’re running on. We’re going to be constantly tuning the use cases based on the applications. If we find ways that are breakthroughs, we’re absolutely going to be taking advantage of them for those use cases. At our balance of scale and our balance of use cases, I’m very confident that we’re going to find use cases where custom silicon will make a difference. I’m also very confident we’re going to continue to be a great partner to Nvidia and others in the world that are going to be selling us great solutions.

4. The coming debt deluge? – Abdullah Al-Rezwan

For example, last week Meta entered into a Joint Venture (JV) with Blue Owl Capital for their $27 billion Hyperion Data Center campus, of which Meta will own 20% and the rest will be owned by funds managed by Blue Owl Capital. Meta is signing an “operating lease” with an initial term of only four years. They have the option to extend the lease every four years, but they are not obligated to.

To persuade the JV to accept the short four-year leases, Meta provided a “Residual Value Guarantee” (RVG) covering the first 16 years of operations. If Meta decides to leave (by not renewing or terminating the lease) within the first 16 years, they guarantee the campus will still be worth a certain amount of money (undisclosed). This payment is “capped” i.e. there is a pre-agreed maximum limit to how much Meta would have to pay. Again, we don’t know the exact capped limit in this deal.

The structure of this deal, featuring short 4-year leases combined with a long-term RVG on a highly specialized asset, closely resembles a financial tool known as a Synthetic Lease.

In a synthetic lease, the tenant (Meta) gains the flexibility of short commitments and favorable accounting treatment (keeping the debt off their balance sheet). However, to convince investors (Blue Owl Capital) to fund the construction, the tenant must assume the majority of the financial risks of ownership. The RVG achieves this risk transfer. To secure financing for such a massive, specialized asset, this cap must be set very high. While we don’t know the exact number, my guess is it’s likely somewhere between 80% and 90%. If we assume it to be 85%, for the $27 billion Hyperion campus, Meta’s maximum possible exposure is $22.95 billion.

If Meta decides to terminate the lease within the 16-year RVG period, the payout is determined by the following calculation:

Guaranteed Value at time of exit – Actual Market Value = Shortfall

Meta pays the shortfall, but only up to the agreed-upon cap (estimated at $22.95B)…
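
Here is a minimal sketch in Python of the payout mechanics described above. The 85% cap ratio is the author’s guess, and the guaranteed and market values below are placeholders, since the real figures are undisclosed:

```python
# Sketch of the Residual Value Guarantee (RVG) payout, per the excerpt.
campus_cost = 27e9
cap = 0.85 * campus_cost  # author's guessed cap ratio -> $22.95B

def rvg_payout(guaranteed_value: float, market_value: float, cap: float) -> float:
    """Meta pays the shortfall between the guaranteed value and the
    actual market value at exit, up to the pre-agreed cap."""
    shortfall = max(guaranteed_value - market_value, 0.0)
    return min(shortfall, cap)

# Placeholder numbers, purely for illustration:
payout = rvg_payout(guaranteed_value=20e9, market_value=8e9, cap=cap)
print(f"Cap:    ${cap / 1e9:.2f}B")     # $22.95B
print(f"Payout: ${payout / 1e9:.2f}B")  # $12.00B in this hypothetical
```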

…Given Meta’s backing, the bonds issued to fund this investment received an investment grade credit rating. However, the bonds were issued at a 6.58% yield, which is closer to a junk bond yield.

Why is the yield so high? If the value of the data center catastrophically collapses due to obsolescence or for some other reason, Meta’s RVG covers most of the loss, but the investors bear the portion exceeding the cap. Moreover, because the debt belongs to the project entity, it is “structurally subordinated” to Meta’s own corporate debt. Investors demand a higher yield to compensate for this “tail risk”.

More importantly, the underlying collateral is a hyper-specialized AI data center. If Meta leaves, it’s likely that the facility cannot be easily repurposed. While the RVG mitigates the financial loss, the specialized nature of the underlying asset still influences the perceived risk and pushes the yield higher.

My guess is Meta (and other big tech) will do more of these deals going forward. In fact, just yesterday, Oracle appears to be raising debt even larger than the Hyperion deal: $38 billion for building data centers in Texas and Wisconsin. If the deal goes through, it would be the largest debt deal so far in AI infrastructure.

5. Thoughts on the AI buildout – Dwarkesh Patel and Romeo Dean

With a single year of earnings in 2025, Nvidia could cover the last 3 years of TSMC’s ENTIRE CapEx.

TSMC has done a total of $150B of CapEx over the last 5 years. This has gone towards many things, including building the entire 5nm and 3nm nodes (launched in 2020 and 2022 respectively) and the advanced packaging that Nvidia now uses to make datacenter chips. With only 20% of TSMC capacity, Nvidia has generated $100B in earnings…

…Further up the supply chain, a single year of NVIDIA’s revenue almost matched the past 25 years of total R&D and capex from the five largest semiconductor equipment companies combined, including ASML, Applied Materials, Tokyo Electron…

…For the last two decades, datacenter construction basically co-opted the power infrastructure left over from US deindustrialization. One person we talked to in the industry said that until recently, every single data center had a story. Google’s first operated data center was across from a former aluminum plant. The hyperscalers are used to repurposing the power equipment from old steel mills and automotive factories.

This is honestly a compelling ode to capitalism. As soon as one sector became more relevant, America was quickly and efficiently able to co-opt the previous one’s carcass. But now we are in a different regime. Not only are hyperscalers building new data centers at a much bigger scale than before, they are building them from scratch, and competing for the same inputs with each other – not least of which is skilled labor…

…Labor might actually end up being the most acute shortage – we can’t simply stamp out more workers (at least, not yet).

The 1.2 GW Stargate facility in Abilene has a workforce of over 5,000 people. Of course, there will be greater efficiencies as we scale this up, but naively that looks like 417,000 people to build 100 GW. And that’s on the low end of 2030 AI power consumption estimates. We’re gonna need stadiums full of electricians, heavy equipment operators, ironworkers, HVAC technicians… you name it.

For reference, there are 800K electricians and 8 million construction workers in the US…
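
The 417,000 figure above is a straight ratio; here is a minimal sketch in Python, taking the excerpt’s Abilene numbers at face value and ignoring efficiency gains:

```python
# Naive labor scaling from the 1.2 GW Abilene facility to 100 GW.
abilene_gw = 1.2
abilene_workers = 5_000
target_gw = 100

workers_needed = abilene_workers * target_gw / abilene_gw
print(f"{workers_needed:,.0f} workers")  # ~416,667, i.e. the ~417,000 quoted
```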

…Anthropic and OpenAI’s combined AI CapEx per year (being done indirectly, mostly by Amazon and Microsoft in 2025) seems to be around $100B.

Revenues for OpenAI and Anthropic have been 3xing a year for the past 2 years. Together, they are on track to earn $20B in 2025.

This means they’re spending 5 times as much on CapEx as they’re earning in revenue. This will probably change over time – more mature industries usually have CapEx less than sales. But AI is really fast growing, so it makes sense to keep investing more than you’re making right now.

Currently, America’s AI CapEx is $400B/year. For AI to not be a bubble in the short term, the datacenters being built right now need to generate $400B in revenue over their lifetime. Will they?…

…Do you think that AI models will be able to do much of what a software engineer does by the end of the decade? If the 27M software engineers worldwide are all on super-charged $1000/month AI agent plans that double their productivity (for 10-20% of their salary), that would be $324B in revenue already…
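
The $324B figure is straightforward arithmetic; a minimal sketch in Python using the excerpt’s assumptions:

```python
# Worldwide software engineers on a $1,000/month AI agent plan.
swe_count = 27e6        # software engineers worldwide
plan_usd_month = 1_000  # hypothetical agent plan price from the excerpt

annual_revenue = swe_count * plan_usd_month * 12
print(f"${annual_revenue / 1e9:.0f}B per year")  # $324B
```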

…A key question is whether datacenters will go “off-grid”—generating power on-site rather than connecting to the utility grid. Some of the largest datacenters are already doing this, e.g., Meta’s Orion or xAI’s Colossus.

Why would datacenters want to make power themselves rather than relying on the grid? They’re trying to get around interconnection delays. Connecting large new electricity sources to the grid now takes over 5 years…

…What will the distribution of individual datacenter sizes be? Here’s the argument for why we might end up seeing what looks like a thick sprinkle of 100 MW datacenters everywhere:

  • If you can plop down a medium sized datacenter here and there, you can soak up any excess capacity in the grid. You can do this kind of arb with a 100 MW datacenter, but there’s no local excess capacity in the grid at the scale of 1 or 10 GW – that much power is on the scale of a whole grid itself.
  • For pretraining-like learning, you want to have large contiguous blobs of compute. But already we’re moving to a regime of RL and midtraining, where learning involves a lot of inference. And the ultimate vision here is some kind of continual learning, where models are widely deployed through the economy and learning on the job/from experience. This seems compatible with medium-sized datacenters housing tens of thousands of instances of AIs working, generating revenue, and learning from deployment.

Here’s the other vision. 1-10 GW datacenters, and then inference on device. Basically nothing in between.

  • If we move to a world with vertically integrated industrial scale production of off-grid datacenters, maybe what you want to do is just buy a really big plot of land, build a big factory on site to stamp out as many individual compute halls and power/cooling/network blocks as possible. You can’t be bothered to build bespoke infrastructure for 100 MW here and there, when your company needs 50 GW total. A good analogy might be how a VC with billions to deploy won’t look at any deal smaller than deca-millions…

…Why doesn’t China just win by default? For every component other than chips which is required for this industrial scale ramp up (solar panels, HV transformers, switchgear, new grid capacity), China is the dominant global manufacturer. China produces 1 TW of solar PV a year, whereas the US produces 20 GW (and even for those, the cells and wafers themselves are manufactured in China, and only the final module is assembled in the US).

Not only does China generate more than twice as much electricity as the US, but that generation has been growing more than 10 times faster than in the US. The reason this is significant is that the power build out can be directed to new datacenter sites. China State Grid could collaborate with Alibaba, Tencent, and Baidu to build capacity where it is most helpful to the AI buildout, and avoid the zero-sum race in the US between different hyperscalers to take over capacity that already exists.


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet (parent of Google), Amazon, ASML, Meta Platforms, Microsoft, and TSMC. Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com