What We’re Reading (Week Ending 05 May 2024)


Reading helps us learn about the world and it is a really important aspect of investing. The late Charlie Munger even went so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 05 May 2024):

1. Karen Karniol-Tambour on investment anomalies at Sohn 2024 (transcript here) – Karen Karniol-Tambour and Jawad Mian

Jawad Mian (00:30): So six months ago, equities were rallying in anticipation of lower interest rates, but now we’ve seen year-to-date that equities are rallying despite higher bond yields. So with a strong economy and inflation less of an issue, are we reverting to the typical inverse relationship between equities and bonds?

Karen Karniol-Tambour (00:49): The relationship between equities and bonds – it’s not an immutable fact of life. It’s not just a thing that occurs. It’s a function of the fundamental building blocks in stocks and bonds. When you look at stocks and bonds, they have a lot of things in common. They’re all future cash flows you’re discounting to today. So if you raise that discount rate, it’s bad for both, and they both don’t do great when inflation is strong. The real inverse relationship comes from their reaction to growth, for the reason you’re saying. If growth is strong, then you can get equities rising and at the same time you can actually get the central bank tightening in response to that growth, which is bad for the bonds. And actually, the anomaly has been the years leading up to 2022, where inflation was just a non-factor and the only dominant macro issue was growth. And so we’ve gotten really used to the idea that stocks and bonds have this inverse relationship. But that’s actually the anomaly. It’s not that normal to have a world where inflation just doesn’t matter. And finally, we lived through this period where it’s like, “Wait a minute, inflation – its gravitational pull was at such a low level it was irrelevant – it’s becoming relevant again.” And we got this positive correlation where they both did badly, because you need to tighten in response to that inflation rearing its head.

Today – knock on wood – we look like we’re back to a world where inflation is not a non-issue, but it’s not a dominant issue, where we can have the kind of market action we’ve enjoyed so far in 2024, where we find out growth’s pretty damn resilient, growth’s doing great, companies can do well, earnings can do well, and at the same time the Fed can ease less than expected, or tighten relative to expectations. If they were tightening to stop very bad inflation, that would be a very different outcome. So the fundamental question as an investor is sort of: where is the gravitational pull of inflation going to be? Is this going to be a major topic that then leads stocks and bonds sometimes to act the same way? Or is it going to go back to being kind of a non-issue?…
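Her building-blocks framing can be put into a toy calculation. The sketch below (my own illustrative numbers, not from the talk) discounts a bond’s fixed coupons and a stock-like growing dividend stream at two different rates, showing why a higher discount rate lowers the present value of both:

```python
# Toy discounted-cash-flow sketch: a bond and a stock are both streams of
# future cash flows, so raising the discount rate hurts the present value
# of both. All figures are illustrative.

def present_value(cash_flows, rate):
    """Discount a series of annual cash flows back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# A 10-year bond: $40 annual coupons plus $1,000 principal at maturity.
bond = [40] * 9 + [1040]
# A stock-like stream: dividends starting at $40 and growing 5% a year.
stock = [40 * 1.05 ** t for t in range(10)]

for rate in (0.03, 0.06):  # the discount rate rises from 3% to 6%
    print(f"rate={rate:.0%}  bond PV={present_value(bond, rate):,.0f}  "
          f"stock PV={present_value(stock, rate):,.0f}")
```

Growth cuts the other way: strong growth can lift the equity cash flows while prompting the central bank to raise the rate, which is what produces the usual inverse correlation she describes.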

…Mian (02:53): A second anomaly. For the last 50 years, we’ve seen the US budget deficit average around 3% and it’s projected to be 6% over the next decade. So far we have seen markets being willing to finance these record deficits, in contrast to the UK for example. How come?

Karniol-Tambour (03:11): I think the best answer to this starts with the current account deficit, because obviously that’s part of who’s buying all the bonds we’re issuing. And it is a really weird anomaly, because the United States is buying way more foreign goods than foreigners are buying of ours. And typically if countries do that, their currency is weak, because they have to convince someone to hold all the currency on the other side of that, so they have to attract all this financing. Yet the United States is running a massive current account deficit and the dollar is strong, because what’s happening on the other end is people are just so enthusiastic about buying dollar financial assets. It’s so extreme that I think the United States has kind of a version of a Dutch disease.

So the classic Dutch disease is, you’re Saudi Arabia, you have oil. No one’s buying oil because you’re Saudi Arabia. No one’s thinking, “I really want Saudi oil.” They just need to fill up their car. So whatever the gas is, the gas is. But as Saudi Arabia, you get uncompetitive outside of it because money’s flooding in just for your oil, for nothing else. The United States has kind of become that on financial assets, which is people aren’t really thinking “I just want US financial assets.” It’s just that United States financial assets have done so well, they’re the dominant part of the index in stocks and in bonds. So anyone that needs to save any money around the world just ends up in US assets. As long as you care at all about market cap – which anyone reasonable would – and you’re going to the big market around the world, if you’re saving, you’re giving the United States money. And so we’re ending up with this flood of money that is a huge anomaly where we actually have a rising currency making everything else kind of uncompetitive, because people just want to buy stocks and bonds and no one else enjoys that. So we can run these huge deficits and sort of not worry about it.

2. Remembering Daniel Kahneman: A Mosaic of Memories and Lessons – Evan Nesterak and many others

To be continued …

By Richard Thaler, Professor of Behavioral Science and Economics, University of Chicago

My fondest memories of working with Danny come from 1984 to ’85, when I spent a year visiting him in Vancouver at The University of British Columbia. Danny had just begun a new project with Jack Knetsch on what people think is fair in market transactions, and they invited me to join them. We had the then-rare ability to ask survey questions to a few hundred randomly selected Canadians each week. We would draft three versions of five questions, fax them to Ottawa Monday morning, and get the results faxed back to us Thursday afternoon. Who needs MTurk! We then spent the weekend digesting the results and writing new questions.

We learned that raising the price of snow shovels the morning after a blizzard might make sense to an economist, but would make customers angry. Danny displayed two of his most prominent traits. He was always a skeptic, even (especially?) about his own ideas, so we stress-tested everything. And he was infinitely patient in that pursuit. Was our finding just true for snow shovels? What about water after a hurricane? Flu medicine? How about late-season discounts (which of course are fine)? It was total immersion: meeting in person several times a week and talking constantly. We were in the zone.

Although we spent another year together in New York seven years later, we were unable to recreate that intensity. We had too many other balls in the air. But we continued our conversations and friendship until the end. Every conversation ended the same way: “To be continued.”…

...I’m more like a spiral than a circle

By Dan Lovallo, Professor of Strategy, Innovation and Decision Sciences, University of Sydney

Many people have heard that Danny changes his mind—a lot. This is certainly true. I have never written even a 5,000-word essay with him that didn’t take a year. Let me add another dimension to the discussion. During our last working dinner at a bistro in New York, and possibly out of mild frustration, I said, “Danny, you know you change your mind a lot.” It wasn’t a question. He continued chewing. I continued my line of non-question questioning: “And often you change it back to what it was at the beginning.”

Danny, having finished his bite and without missing a beat, looked up and in his characteristic lilt said, “Dan, that’s when I learn the most.” Then using his finger he drew a circle in space. “I don’t go around and around a problem. It might seem like it, but I am getting deeper and deeper.” The circle morphed into a three-dimensional spiral. “So, you’re missing all the learning,” he explained, as he displayed the invisible sculpture. “I’m more like a spiral than a circle.” Happy with this new idea, Danny grinned as only Danny could…

A case in character

By Angela Duckworth, Professor of Psychology, University of Pennsylvania

One evening, more than twenty years ago, I was the last one in the lab when the phone rang. “Hello?” I said, I hope not brusquely. I was a Ph.D. student at the time and eager to get back to my work. “Hello?” came the reply of an uncommonly polite older gentleman, whose accent I couldn’t quite place. “I’m so sorry to trouble you,” he continued. “I believe I’ve just now left my suitcase there.” Ah, this made sense. We’d hosted an academic conference that day. “It’s a terrible inconvenience, I know, but might you keep it somewhere until I can return to pick it up?” “Sure,” I said, cradling the receiver and grabbing a notepad. “How do you spell your name?” “Thank you so very much. It’s K-A-H-N-E-M-A-N.” I just about fainted. “Yes, Dr. Kahneman,” I said, coming to my senses, likely more deferentially than when I’d first picked up.

When I hung up, I thought to myself, Oh, it’s possible to be a world-famous genius—the most recently anointed Nobel laureate in economics, among other honors—and interact with anybody and everybody with utmost respect and dignity, no matter who they are. In the years that followed, I got to know Danny Kahneman much better, and when I did, that view was only confirmed. Confirmation bias? Halo effect? No and no. What then? Character. The world is mourning the loss of Danny Kahneman the genius, as we should, but I am missing Danny Kahneman the person…

Anxious and unsure

By Eric Johnson, Professor of Business, Columbia University

A few months before the publication of Thinking, Fast and Slow in 2011, the Center for Decision Sciences had scheduled Danny to present in our seminar series. We were excited because he had decided to present his first “book talk” with us. Expecting a healthy crowd, we scheduled the talk in Uris 301, the biggest classroom in Columbia Business School.

I arrived in the room a half hour early to find Danny, sitting alone in the large room, obsessing over his laptop. He confided that he had just changed two-thirds of the slides for the talk and was quite anxious and unsure about how to present the material. Of course, after the introduction, Danny presented in his usual charming, erudite style, communicating the distinction between System 1 and System 2 with clarity to an engaged audience. Afterwards, I asked him how he thought it went, and he said, “It was awful, but at least now I know how to make it better.” Needless to say, the book went on to become an international bestseller.

This was not false modesty. Having studied overconfidence throughout his career, Danny seemed immune to its effects. While surely maddening to some coauthors, this resulted in work that was more insightful and, most importantly to Danny and to us, correct. He was not always right, but always responsive to evidence, supportive or contradictory. For example, when some of the evidence cited in the book was questioned as a result of the replication crisis in psychology, Danny revised his opinion, writing in the comments of a critical blog: “I placed too much faith in underpowered studies.”

The best tribute to Danny, I believe, is adopting this idea: that science, and particularly the social sciences, is not about seeming right, but about being truthful…

Practical problem solving

By Todd Rogers, Professor of Public Policy, Harvard University

I was part of a group helping some political candidates think about how to respond to untrue attacks by their political rivals. We focused on what cognitive and social psychology said about persuasive messaging. Danny suggested a different emphasis I hadn’t considered.

He directed us to a literature in cognitive psychology on cognitive associations. Once established, associations cannot simply be severed; attempting to directly refute them often reinforces them, and logical arguments alone can’t undo them. But these associations can be weakened when other competing associations are created.

For instance, if falsely accused of enjoying watching baseball, I’d be better off highlighting genuine interests—like my enjoyment of watching American football or reality TV—to dilute the false association with baseball. This anecdote is one small example of the many ways Danny’s profound intellect has influenced practical problem-solving. He’ll be missed and remembered.

Premortems

By Michael Mauboussin, Head of Consilient Research, Morgan Stanley

The opportunity to spend time with Danny and the chance to interview him were professional delights. One of my favorite lessons was about premortems, a technique developed by Gary Klein that Danny called one of his favorite debiasing techniques. In a premortem, a group assumes that they have made a decision (which they have yet to do), places themselves in the future (generally a year from now), and pretends that it worked out poorly. Each member independently writes down the reasons for the failure.

Klein suggested that one of the keys to premortems was the idea of prospective hindsight, that putting yourself into the future and thinking about the present opens up the mind to unconsidered yet relevant potential outcomes. I then learned that the findings of the research on prospective hindsight had failed to replicate—which made me question the value of the technique.

Danny explained that my concern was misplaced and that prospective hindsight was not central to the premortem. Rather, it was that the technique legitimizes dissent and allows organizations the opportunities to consider and close potential loopholes in their plans. That I had missed the real power of the premortem was a revelation and a relief, providing me with a cherished lesson…

Eradicating unhappiness

By George Loewenstein, Professor of Economics and Psychology, Carnegie Mellon University

For Danny, research was intensely personal. He got into intellectual disputes with a wide range of people, and these would hurt him viscerally, in part because it pained him that people he respected could come to different conclusions from those he held so strongly. He came up with, or at least embraced, the concept of “adversarial collaboration,” in which researchers who disagreed on key issues would nonetheless agree upon a definitive test to determine where reality lay. A few of these were successful, but others (I would say most) ended with both parties unmoved, perhaps reflecting Robert Abelson’s insight that “beliefs are like possessions” and, hence, subject to the endowment effect.

I was spending time with Danny when he first got interested in hedonics—happiness—and that was a personal matter as well. His mother was declining mentally in France, and he agonized about whether to visit her; the issue was that she had anterograde amnesia, so he knew that she would forget his visit as soon as it ended. The criterion for quality of life, he had decided, should be the integral of happiness over time; so that—although she would miss out on the pleasure of remembering it—his visit would have value if she enjoyed it while it was happening.

Showing the flexibility of his thinking, and his all-too-rare willingness to learn from the data, his perspective changed as he studied happiness. He became more concerned about the story a life tells, including, notably, its peak and end; he concluded that eradicating unhappiness was a more important goal than fostering happiness, and began to draw a sharp distinction between happiness and life satisfaction, perhaps drawing, again, on his own experience. He always seemed to me to be extremely high in life satisfaction, but considerably less so in happiness.

3. Paradox of China’s stock market and economic growth – Glenn Luk

Joe Weisenthal of Bloomberg and the Odd Lots posed this question on Twitter/X:

“Given that the stock market hasn’t been especially rewarding to the volume-over-profits strategy undertaken by big Chinese manufacturers, what policy levers does Beijing have to sustain and encourage the existing approach?”

Many people may have noticed that despite the impressive growth of Chinese manufacturers in sectors like electric vehicles, the market capitalizations of these companies are dwarfed by Tesla. This seeming paradox lies at the heart of the question posed by Joe.

In 2020, I shared an observation that China cares a lot more about GDP than market capitalization. I was making this observation in the context of Alibaba but would soon broaden it to encapsulate many more situations. In sharp contrast to Americans, Beijing just does not seem to care that much about equity market valuations, but does seem to care very much about domestic growth and economic development…

…With respect to private sector market forces, Chinese policymakers tend to see their role as coordinators of an elaborate “game” that is meant to create an industry dynamic that drives desired market behaviors. The metaphor I sometimes use is the Dungeon Master role in Dungeons & Dragons.

These “desired market behaviors” tend to overwhelmingly revolve around this multi-decade effort to maximize economic development and growth. Beijing has been very consistent about the goal to become “fully developed” by the middle of the 21st century.

To date, I would say that Chinese policymakers have been relatively successful using the approaches and principles described above to drive economic growth:

  • Priority on labor over capital / wage growth over capital income growth. Prioritizing labor is a key pillar of China’s demand-side support strategy. Growth in household income drives growth in domestic demand (whether in the form of household gross capital formation or expenditures).
  • Setting up rules to foster competitive industry dynamics and motivate economic actors to reinvest earnings back into growth.
  • Periodic crackdowns to disrupt what is perceived to be rent-seeking behavior, particularly from private sector players that have accumulated large amounts of equity capital (vs. small family businesses):
    • Anti-competitive behavior (e.g. Alibaba e-commerce dominance in the late 2010s)
    • Regulatory arbitrage (moral hazards inherent in Ant Financial’s risk-sharing arrangement with SOE banks)
    • Societal effects (for-profit education driving “standing on tiptoes” approach to childhood education)
  • Supply-side support to encourage dynamic, entrepreneurial participation from private sector players, as in the clean energy transition, to drive rapid industry growth through scale and scale-related production efficiencies. China has relied on supply-side strategies to support economic growth for decades, despite repeated exhortations by outsiders to implement OECD-style income transfers.
  • Encouraging industry consolidation (vs. long drawn-out bankruptcies) once sectors have reached maturity, although there are often conflicting motivations between Beijing and local governments.

A consistent theme is Beijing’s paranoia about rent-seeking behavior by capitalists (especially those who have accumulated large amounts of capital). It is sensitive to the potential stakeholder misalignment created when capitalists are primarily aligned with one stakeholder class (their fiduciary duty runs to equity owners).

It would prefer that rent-seeking behavior be handled by the party instead, whose objective (at least in theory) is to distribute these rents back to “The People” — although naturally in practice it never turns out this way; Yuen Yuen Ang has written multiple volumes about the prevalence of Chinese-style corruption and its corrosive economic effects.

So to bring it back to Joe’s question, the answer on whether Chinese policymakers can continue these policies going forward very much revolves around this question of rent-seeking: is it better to be done by the government or by private sector capitalists? What should be abundantly clear is that Beijing is definitive on this question: the party will maintain a monopoly on rent-seeking.

4. What Surging AI Demand Means for Electricity Markets – Tracy Alloway, Joe Weisenthal, and Brian Janous

Brian (09:58):

Yeah, and you’re right, I mean it’s not like we didn’t know that Microsoft had a partnership with OpenAI and that AI was going to consume energy. I think everyone, though, was a bit surprised at just how quickly what ChatGPT could do captured the collective consciousness.

You probably remember when that was released. I mean it really sort of surprised everyone, and it became this thing where suddenly, even though we sort of knew what we were working on, it wasn’t until you put it out into the world that you realized maybe what you’d created. That’s where we realized we were running up this curve of capability a lot faster than we thought – the number of applications that are getting built on this, the number of different ways that it’s being used, and how it’s just become sort of common parlance. I mean, everyone knows what ChatGPT is, and no one knew what it was the month before that.

So there was a bit of a surprise, I think, in terms of just how quickly it was going to capture the collective consciousness and then obviously lead to everything that’s being created as a result. And so we just moved up that curve so quickly, and I think that’s where the industry – certainly the utilities – got behind, because as you may have seen, a lot of them are starting to restate their load-growth expectations.

And that was something that was not happening right before that. And so we’ve had massive changes just in the last two years in how utilities are forecasting load. So if you take a look at a utility like Dominion in Virginia – that’s the largest concentration of data centers in the United States, so they’re a pretty good representative of what’s happening. If you go back to 2021, they were forecasting load growth over a period of 15 years of just a few percent.

I mean it was single-digit growth over that entire period. So not yearly growth, but over 15 years, single-digit growth. By 2023, they were forecasting to grow 2X over 15 years. Now keep in mind this is an electric utility. They do 10-year planning cycles. So because they have very long lead times for equipment and for getting rights of way for transmission lines, they aren’t companies that easily respond to a 2X change in growth over a period of 15 years.

I mean, that is a massive change for an electric utility, particularly given the fact that the growth rate over the last 15 to 20 years has been close to zero. So there’s been relatively no load growth in 15 to 20 years. Now suddenly you have utilities having to pivot to doubling the size of their system in that same horizon.
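For a sense of scale, the cumulative figures he cites can be converted into compound annual rates. This is a back-of-envelope sketch with my own illustrative inputs, not Dominion’s exact numbers:

```python
# Back-of-envelope: convert a cumulative load-growth multiple over a
# planning horizon into an implied compound annual growth rate.

def implied_annual_growth(total_multiple, years):
    """Annual rate r such that (1 + r) ** years == total_multiple."""
    return total_multiple ** (1 / years) - 1

# "Single-digit growth over the entire 15 years" -- say 9% cumulative.
old_forecast = implied_annual_growth(1.09, 15)
# "2X over 15 years" -- load doubles over the horizon.
new_forecast = implied_annual_growth(2.0, 15)

print(f"old forecast: ~{old_forecast:.1%} per year")  # well under 1% a year
print(f"new forecast: ~{new_forecast:.1%} per year")  # roughly 4.7% a year
```

Doubling over 15 years implies roughly 4.7% compounded annually, against a prior baseline of well under 1% a year, which is why the planning shock is so severe.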

Tracy (13:10):

I want to ask a very basic question, but I think it will probably inform the rest of this conversation. When we say that AI consumes a lot of energy, where is that consumption actually coming from? And Joe touched on this in the intro, but is it the sheer scale of users on these platforms? Is it, I imagine, the training that you need in order to develop these models? And then does that energy usage differ in any way from more traditional technologies?

Brian (13:43):

Yeah, so whenever I think about the consumption of electricity for AI, or really any other application, I think you have to start at sort of the core of what we’re talking about, which is really the human capacity for data. Whether it’s AI or cloud, humans have a massive capacity to consume data.

And if you think about where we are in this curve, I mean we’re on some form of S-curve of human data consumption, which then directly ties to data centers, devices, energy consumption ultimately, because what we’re doing is we’re turning energy into data. We take electrons, we convert them to light, we move them around to your TV screens and your phones and your laptops, etc. So that’s the uber trend that we’re riding up right now. And so we’re climbing this S-curve. I don’t know that anyone has a good sense of how steep or how long this curve will go.

If you go back and look at something like electricity, it was roughly a hundred-year S-curve. It started at the beginning of the last century, and it really started to flatline, as I mentioned before, towards the beginning of this century. Now we have this new trajectory that we’re entering, this new S-curve, that’s going to change that narrative. But that S-curve for electricity took about a hundred years.

No one knows where we are on that data curve today. So when you inject something like AI, you create a whole new opportunity for humans to consume data, to do new things with data that we couldn’t do before. And so you accelerate us up this curve. So we were sitting somewhere along this curve, AI comes along and now we’re just moving up even further. And of course that means more energy consumption because the energy intensity of running an AI query versus a traditional search is much higher.

Now, what you can do with AI obviously is also much greater than what you can do with a traditional search. So there is a positive return on that invested energy. Oftentimes when this conversation comes up, there’s a lot of consternation and panic over ‘Well, what are we going to do? We’re going to run out of energy.’

The nice thing about electricity is we can always make more. We’re never going to run out of electricity. Not to say that there’s not times where the grid is under constraint and you have risks of brownouts and blackouts. That’s the reality. But we can invest more in transmission lines, we can invest more in power plants and we can create enough electricity to match that demand.

Joe (16:26):

Just to sort of clarify a point and adding on to Tracy’s question, you mentioned that doing an AI query is more energy intensive than, say, if I had just done a Google search or if I had done a Bing search or something like that. What is it about the process of delivering these capabilities that makes it more computationally intensive or energy intensive than the previous generation of data usage or data querying online?

Brian (16:57):

There’s two aspects to it, and I think we sort of alluded to it earlier, but the first is the training. So the first is the building of the large language model. That itself is very energy intensive. These are extraordinarily large machines, collections of machines that use very dense chips to create these language models that ultimately then get queried when you do an inference.

So then you go to ChatGPT and you ask it to give you a menu for a dinner party you want to have this weekend, and it’s referencing that large language model and creating this response. And of course that process is more computationally intensive, because it’s doing a lot more things than a traditional search does. A traditional search just matched the words you put in against a database of knowledge it had put together, but these large language models are much more complex, and therefore the things you’re asking them to do are more complex.

So it will almost by definition be a more energy intensive process. Now, that’s not to say that it can’t get more efficient and it will, and Nvidia just last week was releasing some data on some of its next generation chips that are going to be significantly more efficient than the prior generation.

But one of the things that we need to be careful of is thinking that because something becomes more efficient, we’re therefore going to use less of the input resource – in this case, electricity. That’s not how it works, because going back to the concept of the human capacity for consuming data, all we do is find more things to compute. You’ve probably heard of Jevons paradox. Jevons was an economist in the 1800s, and the thinking at the time was, ‘Well, if we make more efficient steam engines, then we’ll use less coal.’

And he said, ‘No, that’s not what’s going to happen. We’re going to use more coal, because we’re going to mechanize more things.’ And that’s exactly what we do with data. We’ve had Moore’s Law for years, and chips have become incredibly more efficient than they were decades ago, but we didn’t use less energy. We used much more energy, because we could put chips in everything.

So that’s the trend line that we’re on. It’s still climbing that curve of consumption. And no amount of efficiency is going to take us off of continuing to consume more electricity, at least in the near term, because I don’t believe we’re anywhere close to the bend in that S-curve…
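The Jevons-paradox dynamic he describes can be sketched numerically. In this toy model (my own illustrative figures, not measured data), per-query energy use halves but demand quadruples, so total consumption still rises:

```python
# Toy Jevons-paradox model: efficiency per unit improves, but demand grows
# faster, so total resource use rises. Figures are purely illustrative.

def total_energy_wh(queries, wh_per_query):
    """Total energy in watt-hours for a given query volume."""
    return queries * wh_per_query

# Baseline: 1 billion queries at 3 Wh each (made-up numbers).
baseline = total_energy_wh(1e9, 3.0)

# New chips are 2x more efficient, but the cheaper capability unlocks new
# applications and query volume grows 4x.
after = total_energy_wh(4e9, 1.5)

print(f"baseline: {baseline / 1e9:.1f} GWh, after: {after / 1e9:.1f} GWh")
# Per-query efficiency doubled, yet total energy use doubled as well.
```

Whether demand actually outruns efficiency depends on where we sit on the data-consumption S-curve, which is exactly the uncertainty he flags.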

…Brian (22:35):

Well, this is where it gets a little concerning: you have these tech companies that have these really ambitious commitments to being carbon neutral, carbon negative, having a hundred percent zero-carbon energy a hundred percent of the time, and you have to give them credit for the work they’ve done.

I mean, that industry has done amazing work over the last decade to build gigawatts upon gigawatts of new renewable energy projects in the United States and all over the world. They’ve been some of the biggest drivers of the corporate focus on decarbonization. And so you really have to give that industry credit for all it’s done – all the big tech companies have done some amazing work there.

The challenge, though, is that the environment they did that in was that no-growth environment we were talking about. They were all growing, but they were starting from a relatively small denominator 10 or 15 years ago. And so there was a lot of overhang in the utility system at that time, because the utilities had overbuilt ahead of that flatlining. So there was excess capacity on the system.

They were growing inside of a system that wasn’t itself growing on a net basis. So everything they did – every new wind project they brought on, every new solar project they brought on – was incrementally reducing the amount of carbon in the system. It was all net positive.

Now we get into this new world where their growth rates are exceeding what the utilities had ever imagined in terms of the absolute impact on the system. The utilities’ response is ‘The only thing we can do in the time horizon that we have is basically build more gas plants or keep online gas plants or coal plants that we were planning on shuttering.’

And so now the commitments that they have to zero-carbon energy, to being carbon negative, etc., are coming into conflict with the response that the utilities are laying out in what are called integrated resource plans, or IRPs.

And we’ve seen this recently, just last week, in Georgia. We’ve seen it with Duke in North Carolina, and Dominion in Virginia. Every single one of those utilities is saying, ‘With all the demand that we’re seeing coming into our system, we have to put more fossil fuel resources on the grid. It’s the only way that we can manage it in the time horizon we have.’ Now, there’s a lot of debate about whether that is true, but it is what’s happening…

…Brian (30:29):

That’s right. And that’s the big challenge that good planners have today is what loads do you say yes to and what are the long-term implications of that? And we’ve seen this play out over the rest of the globe where you’ve had these concentrations of data centers. This is a story that we saw in Dublin, we’ve seen it in Singapore, we’ve seen it in Amsterdam.

And these governments start to get really worried: ‘Wait a minute, we have too many data centers as a percentage of overall energy consumption.’ And what inevitably happens is a move towards either putting moratoriums on data center build-out or putting very tight restrictions on what they can do and the scale at which they can do it. And so we haven’t yet seen that to any material degree in the United States, but I do think that’s a real risk and it’s a risk that the data center industry faces.

I think somewhat uniquely in that if you’re the governor of a state and you have a choice to give power to, say, a new EV car factory that’s going to produce 1,500 to 2,000 jobs versus a data center that’s going to produce significantly fewer than that, you’re going to give it to the factory. The data centers are actually the ones that are likely going to face the most constraints as governments, utilities, and regulators start wrestling with this trade-off of ‘Ooh, we’re going to have to say no to somebody.’…

…Tracy (36:36):

What are the levers specifically on the tech company or the data center side? Because again, so much of the focus of this conversation is on what can the utilities do, what can we do in terms of enhancing the grid managing supply more efficiently? But are there novel or interesting things that the data centers themselves can do here in terms of managing their own energy usage?

Brian (37:02):

Yes, there are a few things. One is that data centers have substantial ability to be more flexible in terms of the power that they’re taking from the grid at any given time. As I mentioned before, nearly every data center has some form of backup generation. They have some form of energy storage built in.

So the way a data center is designed, it’s designed like a power plant with an energy storage plant that just happens to be sitting next to a room full of servers. And so when you break it down into those components, you say, okay, well how can we better optimize this power plant to be more of a grid resource? How can we optimize the storage plant to be more of a grid resource? And then in terms of even the servers themselves, how can we optimize the way the software actually operates and is architected to be more of a grid resource?

And that sort of thinking is what is being forced on the industry. Frankly, we’ve always had this capability. We did a project around 2016 with a utility where we put in flexible gas generators behind our meter because the utility was going to have to build a new power plant if we didn’t have a way to be more flexible.

So we’ve always known that we can do this, but the industry has never been pressurized to really think innovatively about how can we utilize all these assets that we have inside of the data center plant itself to be more part of the grid. So I think the most important thing is really thinking about how data centers become more flexible. There’s a whole ‘nother line of thinking, which is this idea of, well, utilities aren’t going to move fast enough, so data centers just need to build all their own power plants.

And this is where you start hearing about nuclear and SMRs and fusion, which is interesting, except it doesn’t solve the problem this decade. It doesn’t solve the problem that we’re facing right now because none of that stuff is actually ready for prime time. We don’t have an SMR that we can build today predictably on time and on budget.

So we are dependent on the tools that we have today, which are things like batteries, grid-enhancing technologies, flexible load, and reconductoring transmission lines to get more power over existing rights of way. So there are a number of things we can do with technologies we have today that are going to be very meaningful this decade, and we should keep investing in things that are going to be really meaningful next decade. I’m very bullish on what we can do with new forms of nuclear technology. They’re just not relevant in the time horizon of the problem we’re talking about [now].

Joe (39:52):

At some point, we’re going to do an Odd Lots episode specifically on the promise of small modular reactors and why we still don’t have them despite the seeming benefits. But do you have a sort of succinct answer for why this sort of seeming solution of manufacturing them faster, etc., has not translated into anything in production?

Brian (40:14):

Well, quite simply, we just forgot how to do it. We used to be able to build nuclear in this country. We did it in the seventies, we did it in the eighties, but every person that was involved in any one of those projects is either not alive or certainly not still a project manager at a company that would be building nuclear plants, right?

I think we underestimate human capacity to forget things. Just because we’ve done something in the past doesn’t mean that we necessarily can do it again. We have to relearn these things, and as a country, we do not have a supply chain. We don’t have a labor force. We don’t have people that manage construction projects who know how to do any of these things.

And so when you look at what South Korea is doing, at what China is doing, they’re building nuclear plants with regularity. They’re doing it at a very attractive cost. They’re doing it on a predictable time horizon. But they have actually built all of those resources that we simply don’t have in this country and that we need. We need to rebuild that capability. It just doesn’t exist today…

…Brian (41:50):

Absolutely. And so if you go back to the era that we’ve been in of relatively no load growth: if you’re a utility regulator and a utility comes and asks you for a billion dollars for new investment, you’re used to saying ‘no.’ You’re used to saying ‘Well, wait a minute. Why do you need this? What is this for? How is this going to help manage reliability, cost, predictability, etc.?’

Now you’re in this whole new world. And going back to this concept of how easily we forget things — no one who’s a regulator today or the head of a utility today has ever lived through an environment where we’ve had this massive expansion of the demand for electricity. So everyone now, including the regulators, is having to relearn: okay, how do we enable utility investment in a growth environment? It’s not something they’ve ever done before. And so they’re having to figure out, okay, how do we create the bandwidth for utilities to make these investments?

Because one of the fundamental challenges that utilities have is that they struggle to invest if there’s no customer sitting there making the request, so they can’t really invest speculatively. I mean, if I’m Nvidia and I’m thinking about the world five years from now and thinking ‘Wow, how many chips do I want to sell in 2030?’ I can go out and build a new factory. I can go out and invest capital. I don’t need to have an order from a Microsoft or an Amazon or a Meta to do that. I can build speculatively.

Utilities can’t really do that. They’re basically waiting for the customer to come and ask for it. But when you have all this demand show up at the same time, what happens? The lead times start to extend. And so instead of saying ‘Yeah, I’ll give you that power in a year or two,’ it’s now ‘Well, I’ll give it to you in five to seven years.’ And that’s an unsustainable way to run the electric utility grid. So we do need regulators to adapt and evolve to this new era of growth.

5. Reflections from the heart of Japan’s ancient cedar forest – Thomas Chua

Yakushima was particularly memorable, an island near Kagoshima famous for its wildlife and ancient cedar forests. These majestic cedars, some of the oldest trees in the world, grow steadily through centuries, unaffected by the transient storms and seasonal fluctuations.

This is Sennensugi, which means a thousand-year-old cedar tree even though it’s still young. Yakushima’s oldest tree (and the oldest tree in Japan) is Jōmon Sugi, which is estimated to be between 2,170 and 7,200 years old.

This resonates deeply with my investment strategy. Just as these enduring cedars are not swayed by the fleeting changes in their environment, I focus on “Steady Compounders”—companies with significant economic moats and consistent intrinsic value growth.

When friends learn about my extensive travels, they often ask, “What about your investments? Don’t you need to monitor them constantly?” What they usually mean by “monitoring” isn’t analyzing quarterly business results, but rather obsessively tracking stock prices and consuming every tidbit of news to stay perpetually informed.

However, I liken such constant vigilance to setting up a camera in a forest to watch the trees grow. This approach isn’t just tedious; it’s unnecessary and potentially harmful, often prompting rash decisions.

Everyone invests to grow wealth, but understanding why you invest is crucial. For me, it serves to enrich my curiosity and intellect, rewards my eagerness to learn, and more importantly, grants me the freedom to live life on my terms and cherish moments with my loved ones.

Therefore, I don’t pursue obscure, unproven companies which require intensive monitoring. Instead, I look for Steady Compounders — firms with a significant economic moat that are growing their intrinsic value steadily.

Like the steady growth of Yakushima’s cedars, these firms don’t need constant oversight; they thrive over long periods through economic cycles, much as the cedars endure through seasonal changes. Investing in such companies gives me the freedom to explore the world, knowing my investments are growing steadily, mirroring the quiet, powerful ascent of those ancient trees.


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet (parent of Google), Amazon, Meta Platforms, Microsoft, and Tesla. Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com