What We’re Reading (Week Ending 09 June 2024)

Reading helps us learn about the world, and it is a really important aspect of investing. The late Charlie Munger even went so far as to say, “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 09 June 2024):

1. Google CEO Sundar Pichai on AI-powered search and the future of the web – Nilay Patel and Sundar Pichai

Yesterday, you announced AI Overviews are coming to Search. That’s an extension of what was called the Search Generative Experience, which is now rolling out to everyone in the United States. I would describe the reactions to that news from the people who make websites as fundamentally apocalyptic. The CEO of the News/Media Alliance said to CNN, “This will be catastrophic to our traffic.” Another media CEO forwarded me a newsletter and the headline was, “This is a death blow to publishers.” Were you expecting that kind of response to rolling out AI Overviews in Search?

I recall, in 2010, there were headlines that the web was dead. I’ve long worked on the web, obviously. I care deeply about it. When the transition from desktop to mobile happened, there was a lot of concern because people were like, “Oh, it’s a small screen. How will people read content? Why would they look at content?” We had started introducing what we internally called “Web Answers” in 2014, which are featured snippets outside [the list of links]. So you had questions like that.

I remain optimistic. Empirically, what we have seen throughout the years is that human curiosity is boundless. It’s something we have deeply understood in Search. More than any other company, we will differentiate ourselves in our approach even through this transition. As a company, we realize the value of this ecosystem, and it’s symbiotic. If there isn’t a rich ecosystem making unique and useful content, what are you putting together and organizing? So we feel it.

I would say, through all of these transitions, things have played out a bit differently. I think users are looking for high-quality content. The counterintuitive part, which I think almost always plays out, is [that] it’s not a zero-sum game. People are responding very positively to AI Overviews. It’s one of the most positive changes I’ve seen in Search based on metrics. But people do jump off on it. And when you give context around it, they actually jump off it. It actually helps them understand, and so they engage with content underneath, too. In fact, if you put content and links within AI Overviews, they get higher clickthrough rates than if you put it outside of AI Overviews.

But I understand the sentiment. It’s a big change. These are disruptive moments. AI is a big platform shift. People are projecting out, and people are putting a lot into creating content. It’s their businesses. So I understand the perspective [and] I’m not surprised. We are engaging with a lot of players, both directly and indirectly, but I remain optimistic about how it’ll actually play out. But it’s a good question. I’m happy to talk about it more…

You mentioned that you think more people will click through links in AI Overviews. Liz [Reid], who runs Search, had a blog post making the same claim. There’s no public data that says that is true yet. Are you going to release that data? Are you going to show people that this is actually happening?

On an aggregate, I think people rely on this value of the ecosystem. If, over time, people don’t see value, [or] website owners don’t see value coming back from Google, I think we’ll pay a price. We have the right incentive structure. But obviously, look, we are careful about… there are a lot of individual variations, and some of it is users choosing which way to go. That part is hard to sort out. But I do think we are committed at an aggregate level to do the right thing…

This brings me back to the first question I asked: language versus intelligence. To make these products, I think you need a core level of intelligence. Do you have in your head a measure of “This is when it’s going to be good enough. I can trust this”?

On all of your demo slides and all of OpenAI’s demo slides, there’s a disclaimer that says “Check this info,” and to me, it’s ready when you don’t need that anymore. You didn’t have “Check this info” at the bottom of the 10 blue links. You didn’t have “Check this info” at the bottom of featured snippets.

You’re getting at a deeper point where hallucination is still an unsolved problem. In some ways, it’s an inherent feature. It’s what makes these models very creative. It’s why it can immediately write a poem about Thomas Jefferson in the style of Nilay. It can do that. It’s incredibly creative. But LLMs aren’t necessarily the best approach to always get at factuality, which is part of why I feel excited about Search.

Because in Search we are bringing LLMs in a way, but we are grounding it with all the work we do in Search and layering it with enough context that we can deliver a better experience from that perspective. But I think the reason you’re seeing those is because of the inherent nature. There are still times it’s going to get it wrong, but I don’t think I would look at that and underestimate how useful it can be at the same time. I think that would be the wrong way to think about it.

Google Lens is a good example. When we first put Google Lens out, it didn’t recognize all objects well. But the curve year on year has been pretty dramatic, and users are using it more and more. We’ve had billions of queries now with Google Lens. It’s because the underlying image recognition, paired with our knowledge entity understanding, has dramatically expanded over time.

I would view it as a continuum, and I think, again, I go back to this saying that users vote with their feet. Fewer people used Lens in the first year. We also didn’t put it everywhere because we realized the limitations of the product.

When you talk to the DeepMind Google Brain team, is there a solution to the hallucination problem on the roadmap?

It’s Google DeepMind. [Laughs]

Are we making progress? Yes, we are. We have definitely made progress when we look at metrics on factuality year on year. We are all making it better, but it’s not solved. Are there interesting ideas and approaches that they’re working on? Yes, but time will tell. I would view it as LLMs are an aspect of AI. We are working on AI in a much broader way, but it’s an area where we are all definitely working to drive more progress.

Five years from now, this technology, the paradigm shift, it feels like we’ll be through it. What does the best version of the web look like for you five years from now?

I hope the web is much richer in terms of modality. Today, I feel like the way humans consume information is still not fully encapsulated in the web. Today, things exist in very different ways — you have webpages, you have YouTube, etc. But over time, I hope the web is much more multimodal, it’s much richer, much more interactive. It’s a lot more stateful, which it’s not today.

I view it as, while fully acknowledging the point that people may use AI to generate a lot of spam, I also feel every time there’s a new wave of technology, people don’t quite know how to use it. When mobile came, everyone took webpages and shoved them into mobile applications. Then, later, people evolved [into making] really native mobile applications.

The way people use AI to actually solve new things, new use cases, etc. is yet to come. When that happens, I think the web will be much, much richer, too. So: dynamically composing a UI in a way that makes sense for you. Different people have different needs, but today you’re not dynamically composing that UI. AI can help you do that over time. You can also do it badly and in the wrong way and people can use it shallowly, but there will be entrepreneurs who figure out an extraordinarily good way to do it, and out of it, there’ll be great new things to come.

2. Five Moat Myths (transcript here) – Robert Vinall

So we’re now on to Moat Myth number three, which is that execution doesn’t matter. There’s this idea that, as in the quote I mentioned earlier, “when a management with a reputation for brilliance tackles a business with a reputation for bad economics, it is the reputation of the business that remains intact.” So this is a bit of a callback to my presentation on management, and it implies that as long as the moat is there, nothing can go wrong, and vice versa: if the moat isn’t there, then nothing is basically going to go right. I really strongly disagree with that. Some of the best businesses, some of the best investments I’ve seen, are in companies which have really great execution, and that execution tends over time to lead to a moat. So I think people get it backwards a little bit. It’s not that the moat trumps execution, it’s that the moat is the output of execution…

…So this one won’t be a surprise to you. I kind of talked about it in the summary of the management presentation, but there’s this idea that management doesn’t matter. And I have two examples. So one is a crook, and this is the easiest argument to make. To anyone who says management doesn’t matter and all that counts is the business and the financials: well, clearly a crook can destroy a business. There are thousands of examples of that. One that springs to mind is the Indian brewer Kingfisher, where the guy effectively sells the business and buys an airline with it, which goes bust. His family went from being very wealthy to zero. So clearly management can destroy a business. I don’t think that’s a hard argument to make.

But on the positive side, clearly management can also be the difference between a great business and a failing business. And of course the most famous example of that ever is Berkshire Hathaway, the company we’re all here to see tomorrow. As many of you will know, Berkshire Hathaway was a failing textile mill that would almost certainly have gone bankrupt, and it is today, I think, one of the 10 largest companies in the US, if not the world. And that’s thanks to the investment decisions and the investing acumen of Warren Buffett. So clearly management does matter.

3. Getting materials out of the lab – Benjamin Reinhardt

Inventing a new material is the beginning of a long process.

Take carbon fiber composites. You’re almost certainly familiar with these, particularly if you’ve ridden a surprisingly light bike or seen its distinctive crosshatched weave pattern on a car dashboard or phone case.

Looking at carbon fiber composites through an electron microscope, you observe strands of carbon atoms arranged in a hexagonal pattern, woven into mats and layered with a resin such as epoxy. Carbon fiber’s tensile strength (the amount of load it can bear under tension before it breaks) is similar to steel, but the material is much less dense. So if you care about both weight and strength – as you do when you’re designing vehicles from a supercar to a Boeing 787 – carbon fiber is the material for you.
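(A quick aside from us: the strength-to-weight claim is easy to make concrete with a specific-strength calculation, i.e. tensile strength divided by density. The figures below are rough, textbook-style values we’ve assumed for illustration; they are not from the article.)

```python
# Rough, illustrative comparison of specific strength (tensile strength / density).
# Ballpark values we've assumed for a typical carbon fiber-epoxy composite and
# structural steel -- not figures from the article.

materials = {
    # name: (tensile strength in MPa, density in kg/m^3)
    "carbon fiber composite": (1_500, 1_600),
    "structural steel":       (  500, 7_850),
}

for name, (strength_mpa, density) in materials.items():
    # MPa / (kg/m^3) * 1000 gives specific strength in kN*m/kg
    specific = strength_mpa / density * 1_000
    print(f"{name:>24}: {specific:7.1f} kN*m/kg")
```

On these rough numbers, the composite carries roughly 15 times more load per kilogram than structural steel, which is the whole appeal for vehicle designers.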

Modern materials like these carbon fiber composites are born in laboratories. Researchers at universities or industrial research labs do test tube–scale experiments, which can produce mind-blowing results. Carbon fiber first showed great promise in 1960 when Richard Millington patented a process to create fibers made of 99 percent carbon.

However, at lab scale, materials don’t do anything. Most people wouldn’t want a rope that is a centimeter long, or a battery that lasts three minutes. Leaving the lab requires bridging roughly six orders of magnitude: from producing less than 0.001 kilograms (one gram) per day in a lab to more than 1,000 kilograms (one tonne) per day in a factory.

You can think of lab-scale materials as the most artisanal products in the world, painstakingly handcrafted by people with advanced degrees. Like any artisanal product, lab-scale materials are expensive. Trying to mass-produce these materials by simply increasing the number of fume hoods, test tubes, and pipette wielders would make them cost billions of dollars per kilogram. After a material is invented, we need to discover cheaper ways to produce it, since price per quantity has a dramatic effect on how much it can be used.

We call this process ‘scaling’, but to me that word is frustratingly vague. It bundles together many different problems that need to be solved to decrease cost and increase yield. The three key ones are:

Consistency. A lab can declare success if a small fraction of its material has an impressive property, but a factory needs that fraction to be much higher. A more consistent yield means less waste and a lower price (the sketch after this list makes the cost effect concrete).

Standardization. Figuring out how to produce a material using conventional, industry-standard equipment avoids the cost of custom tools and enables you to make more material in an easily replicable way.

Streamlining. Moving a product through a continuous manufacturing process, as opposed to applying each of the manufacturing steps to a small, static batch, drastically reduces costs. Henry Ford did this with his moving assembly line, passing cars from worker to worker rather than moving workers from car to car…
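(Our aside: the consistency point above is really a statement about unit economics. Here is a minimal sketch, with entirely hypothetical numbers, of how yield flows straight into the cost of each sellable unit.)

```python
# A minimal sketch of why consistency (yield) drives unit cost.
# All numbers are hypothetical.

def cost_per_good_unit(cost_per_unit_made: float, yield_fraction: float) -> float:
    """Cost of one sellable unit when only `yield_fraction` of output passes spec."""
    return cost_per_unit_made / yield_fraction

print(cost_per_good_unit(100.0, 0.05))  # lab-like 5% yield      -> 2000.0
print(cost_per_good_unit(100.0, 0.90))  # factory-like 90% yield -> ~111.1
```

At a lab-like 5 percent yield, every good unit has to absorb the cost of 19 failed ones; at 90 percent, almost nothing is wasted.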

…Building an industrial-scale factory requires money – a lot of it. To justify the expense to investors, you need to answer the questions, ‘What is your material good for?’, and more importantly, ‘Who will buy it?’

The answer is far from obvious, even for great materials: carbon fiber went through a decades-long journey before it became the star it is today. At first, manufacturers sold it as low-margin home insulation material because of its low thermal conductivity. It was key to several failed products, from turbine blades to a replacement for fiberglass. It eventually found its first iconic use case when Gay Brewer won the first annual Taiheiyo Club Masters using a golf club with a carbon fiber shaft.

The search for a cost-effective use case leaves many new materials in a chicken-and-egg situation: entrepreneurs and companies can’t justify the expense of scaling because there isn’t an obviously valuable application – but that application can’t emerge without a cost-effective material that can be experimented with.

Even applications that do seem obvious can take a long time to realize. In 1968, Rolls-Royce attempted to use carbon fiber in airplane propellers, which failed spectacularly. The propellers were extremely vulnerable to impacts – the whole project became such a boondoggle that it was a significant factor in the company’s collapse into receivership in 1971. Another 40 years would pass before the first majority–carbon fiber airplane, the Boeing 787, took flight…

…Scientists, mostly working in universities, have strong incentives to focus on novelty and one-off demonstrations because these can lead to publications and positive media attention. That work can be valuable, but the search for novelty alone creates mismatches with efforts to produce useful materials at scale. Essentially, the system of discovery sets up scaling for failure by creating materials without any consideration of their ability to scale.

The drive to focus on new discoveries over improving old ones’ capacity to scale, combined with the difficulty of mimicking real-world conditions in a lab, creates initial experiments that bear little resemblance to how people use a material in the real world.

Take the development of lithium-ion battery anodes. Researchers can demonstrate exciting leaps in power density from a new anode material using a half-cell reaction that provides functionally infinite lithium. But in a real battery with finite lithium, these anodes would reduce battery lifetimes to the point of unusability.
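(Our aside, not the article’s: one standard way to quantify this lifetime problem is coulombic efficiency, the fraction of lithium usefully recovered on each charge/discharge cycle. A half-cell with functionally infinite lithium hides any inefficiency; in a full cell with a fixed lithium inventory, capacity retention compounds cycle after cycle, roughly as retention ≈ CE^n.)

```python
# Illustrative only: why "functionally infinite lithium" in a half-cell hides
# a lifetime problem. In a full cell with a fixed lithium inventory, capacity
# retention compounds with coulombic efficiency (CE) each cycle.

for ce in (0.999, 0.99, 0.95):
    cycles_to_80_pct = 0
    retention = 1.0
    while retention > 0.80:      # 80% retention is a common end-of-life bar
        retention *= ce
        cycles_to_80_pct += 1
    print(f"CE={ce:.3f}: ~{cycles_to_80_pct} cycles until 80% capacity")
```

A 99.9 percent efficient anode lasts about ten times longer than a 99 percent one; small-looking lab differences compound dramatically in a real battery.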

Similarly, carbon nanotubes have incredible tensile strength for their weight, but it’s hard to make them longer than a few centimeters. This length limit comes from carbon nanotubes’ tendency to tangle and become more susceptible to impurities as they get longer. Cable makers in the real world don’t just care about strength-to-weight ratios, but also the length over which the material maintains that strength. Yet scientists can take their headline of ‘superstrong carbon nanotubes’ and move on to the next project…

…Materials start-ups often struggle to raise venture capital financing. Venture isn’t a good fit for the capital costs and timescales of the material industry: the size, scale, and expectations of venture capital funds are well-suited to invest in software and pharmaceuticals whose revenues can skyrocket once they hit the market. Venture capital also prefers high-margin businesses that can get to market quickly, but materials often face a trade-off between margins and speed: while it’s faster and cheaper to innovate on one component of a larger production line or one material in an existing product, most of the margins come from new products…

…The long road from the lab to the material world might make the future of new materials seem bleak.

One reason for optimism is that new materials might already be on the horizon. There is a shockingly consistent timescale for materials to become useful beyond their initial niches. It took roughly 50 years between Roger Bacon’s discovery in 1958 and the flight of the first majority–carbon fiber airplane in 2009. The first lithium-ion battery was created by NASA in 1965, but most people didn’t start interacting with them until the mid-2000s. Pure carbon nanotubes weren’t isolated until 1991. If there is indeed a 40- to 50-year timescale for lab-based materials to be useful in high-impact applications, we don’t need to despair about a carbon nanotube space elevator being overdue until somewhere around 2040.

4. High-Yield Was Oxy. Private Credit Is Fentanyl – Greg Obenshain and Daniel Rasmussen

Private equity assets have increased sevenfold since 2002, with annual deal activity now averaging well over $500 billion per year. The average leveraged buyout is 65 percent debt-financed, creating a massive increase in demand for corporate debt financing.

Yet just as private equity fueled a massive increase in demand for corporate debt, banks sharply limited their exposure to the riskier parts of the corporate credit market. Not only did the banks find this type of lending to be unprofitable, but government regulators were warning that it posed a systemic risk to the economy.

The rise of private equity and limits to bank lending created a gaping hole in the market. Private credit funds have stepped in to fill the gap. This hot asset class grew from $37 billion in dry powder in 2004 to $109 billion in 2010, then to a whopping $261 billion in 2019, according to data from Preqin. There are currently 436 private credit funds raising money, up from 261 only five years ago. The majority of this capital is allocated to private credit funds specializing in direct lending and mezzanine debt, which focus almost exclusively on lending to private equity buyouts.
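(A quick back-of-the-envelope from us: those Preqin dry-powder figures imply the following compound annual growth rates.)

```python
# Growth rates implied by the Preqin dry-powder figures quoted above.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

print(f"2004-2010: {cagr(37, 109, 6):.1%} per year")   # ~19.7%
print(f"2010-2019: {cagr(109, 261, 9):.1%} per year")  # ~10.2%
print(f"2004-2019: {cagr(37, 261, 15):.1%} per year")  # ~13.9%
```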

Institutional investors love this new asset class. In an era when investment-grade corporate bonds yield just over 3 percent — well below most institutions’ target rate of return — private credit funds are offering targeted high-single-digit to low-double-digit net returns. And not only are the current yields much higher, but the loans are going to fund private equity deals, which are the apple of investors’ eyes…

…Banks and government regulators have expressed concerns that this type of lending is a bad idea. Banks found the delinquency rates and deterioration in credit quality, especially of sub-investment-grade corporate debt, to have been unexpectedly high in both the 2000 and 2008 recessions and have reduced their share of corporate lending from about 40 percent in the 1990s to about 20 percent today. Regulators, too, learned from this experience, and have warned lenders that a leverage level in excess of 6x debt/EBITDA “raises concerns for most industries” and should be avoided. According to Pitchbook data, the majority of private equity deals exceed this dangerous threshold…

…Empirical research into lending markets has typically found that, beyond a certain point, higher-yielding loans tend not to lead to higher returns — in fact, the further lenders step out on the risk spectrum, the less they make as losses increase more than yields…

…The historical experience does not make a compelling case for private credit. Public business development companies are the original direct lenders, specializing in mezzanine and middle-market lending. BDCs are Securities and Exchange Commission–regulated and publicly traded companies that provide retail investors access to private market platforms. Many of the largest private credit firms have public BDCs that directly fund their lending. BDCs have offered 8 to 11 percent yield, or more, on their vehicles since 2004 — yet returned an average of 6.2 percent, according to the S&P BDC index. BDCs underperformed high-yield over the same 15 years, with significant drawdowns that came at the worst possible times…

…Central to every private credit marketing pitch is the idea that these high-yield loans have historically experienced about 30 percent fewer defaults than high-yield bonds, specifically highlighting the seemingly strong performance during the financial crisis…

…But Cambridge Associates has raised some pointed questions about whether default rates are really lower for private credit funds. The firm points out that comparing default rates on private credit to those on high-yield bonds isn’t an apples-to-apples comparison. A large percentage of private credit loans are renegotiated before maturity, meaning that private credit firms that advertise lower default rates are obfuscating the true risks of the asset class — material renegotiations that essentially “extend and pretend” loans that would otherwise default. Including these material renegotiations, private credit default rates look virtually identical to publicly rated single-B issuers…

… If this analysis is correct and private credit deals perform roughly in line with single-B-rated debt, then historical experience would suggest significant loss ratios in the next recession. According to Moody’s Investors Service, about 30 percent of B-rated issuers default in a typical recession (versus fewer than 5 percent of investment-grade issuers and only 12 percent of BB-rated issuers)…

…Private equity firms discovered that private credit funds represented an understanding, permissive set of lenders willing to offer debt packages so large and on such terrible terms that no bank would keep them on its balance sheet. If high-yield bonds were the OxyContin of private equity’s debt binge, private credit is its fentanyl. Rising deal prices, dividend recaps, and roll-up strategies are all bad behaviors fueled by private credit…

…Lender protections have been getting progressively weaker. After analyzing just how weak these covenants have become since the financial crisis, Moody’s recently adjusted its estimate of average recovery in the event of default from the historical average of 77 cents on the dollar to 61 cents…
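(Our back-of-the-envelope, not the authors’: combining the recession default rates quoted earlier with Moody’s 61-cent recovery estimate gives a rough expected-loss figure per rating bucket, since expected loss is simply the default rate times one minus the recovery rate.)

```python
# Back-of-the-envelope: expected loss = default rate x (1 - recovery).
# Default rates are the recession figures quoted above (the investment-grade
# number is an upper bound); recovery is Moody's revised 61-cent estimate.

recovery = 0.61
recession_default_rates = {
    "investment grade": 0.05,  # "fewer than 5 percent"
    "BB-rated":         0.12,
    "B-rated":          0.30,
}

for rating, default_rate in recession_default_rates.items():
    print(f"{rating:>16}: ~{default_rate * (1 - recovery):.1%} expected loss")
```

On those inputs, a portfolio of single-B-like credit would lose roughly 12 percent in a typical recession.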

…Today private equity deals represent the riskiest and worst-quality loans in the market. Banks and regulators are growing increasingly worried. Yet massive investor interest in private credit has sent yields on this type of loan lower, rather than higher, as the deteriorating quality might predict. As yields have fallen, direct lenders have cooked up leveraged structures to bring their funds back to the magical return targets that investors demand. Currently, we suspect that a significant number of private equity deals are so leveraged that they can’t pay interest out of cash flow without increasing borrowing. Yet defaults have been limited because private credit funds are so desperate to deploy capital (and not acknowledge defaults). Massive inflows of capital have enabled private lenders to paper over problems with more debt and easier terms.
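(To make the “can’t pay interest out of cash flow without increasing borrowing” point concrete, here is a sketch with entirely hypothetical numbers: when cash flow falls short of the coupon, the shortfall is capitalized into the loan, so the debt balance compounds.)

```python
# Hypothetical sketch of a deal that can't cover interest from cash flow:
# the shortfall is borrowed (paid-in-kind), so the debt balance compounds.
# All figures are made up for illustration.

debt, rate = 600.0, 0.10   # $mm of debt at a 10% coupon
cash_flow = 40.0           # $mm available for debt service each year

for year in range(1, 6):
    interest_due = debt * rate
    shortfall = max(0.0, interest_due - cash_flow)
    debt += shortfall      # unpaid interest is added to principal
    print(f"year {year}: interest {interest_due:5.1f}, debt {debt:6.1f}")
```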

But that game can’t go on forever.

5. How Does the Stock Market Perform in an Election Year? – Nick Maggiulli

With the U.S. Presidential election set for a rematch in November, many investors are wondering how the U.S. stock market might perform in the months that follow. While predicting the future is never easy, using history as a guide can be useful for understanding how markets might react to a Biden or Trump victory…

…In the seven or so weeks following an election there can be lots of uncertainty around how the future might unfold. But, if we look at how markets actually perform after an election, they are typically pretty average. To start, let’s consider how U.S. stocks (i.e. the S&P 500) have performed from “election day” until the end of the year for each year since 1950. Note that when I say “election day” I mean from the Tuesday after the first Monday in November to year end, regardless of whether there was an actual election…
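(For anyone replicating this at home, the “Tuesday after the first Monday in November” rule is straightforward to compute.)

```python
from datetime import date, timedelta

def election_day(year: int) -> date:
    """Tuesday after the first Monday in November (the article's window start)."""
    november_first = date(year, 11, 1)
    # weekday(): Monday == 0. Days until the first Monday of November:
    days_to_monday = (7 - november_first.weekday()) % 7
    first_monday = november_first + timedelta(days=days_to_monday)
    return first_monday + timedelta(days=1)

print(election_day(2008))  # 2008-11-04
print(election_day(2024))  # 2024-11-05
```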

…while stock performance has varied quite a bit since 1950, U.S. stocks tend to rise slightly following an election (or over the same time period in a non-election year). The biggest exceptions to this were in 2008, when markets declined by nearly 11% from election day to year end, and in 1998, when they increased by almost 10% as the dot-com bubble continued to inflate.

However, if we look at the average performance in election years versus non-election years, all these differences wash out. Plotting the average performance of the 18 election years and 56 non-election years in the data, we see basically no long-term difference in performance:

While the S&P 500 tends to perform worse (on average) in the first few days following the election, there seems to be no lasting impact on stocks through year end. In fact, the average return from election day through December 31 is 2.3% in an Election Year compared to 2.4% in a Non-Election Year. In other words, their returns on average are basically the same. The median (50th percentile) return is similar as well, with a 2.9% return in an Election Year compared to 2.4% during a Non-Election Year…
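(A sketch from us of how that aggregation could be reproduced, assuming you have assembled the yearly election-day-to-year-end returns yourself; the `returns` series is hypothetical and the underlying data is not included here.)

```python
import pandas as pd

def summarize(returns: pd.Series) -> pd.DataFrame:
    """Mean/median return from election day to Dec 31, split by year type.

    `returns` maps each year since 1950 to the S&P 500 return from election
    day (as defined above) through year end; the data is not reproduced here.
    """
    labels = ["Election Year" if year % 4 == 0 else "Non-Election Year"
              for year in returns.index]  # US presidential years divide by 4
    return returns.groupby(labels).agg(["mean", "median", "count"])
```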

…When Trump won the 2016 election to almost everyone’s surprise, many believed that U.S. stocks would crash as a result. Jane Street, a prominent quantitative trading firm, was one of them. After finding a way to get the 2016 election results minutes before the rest of the mainstream media, Jane Street still ended up losing money because they got the market’s reaction wrong. As Michael Lewis recalls in Going Infinite:

What had been a three-hundred-million-dollar profit for Jane Street was now a three-hundred-million-dollar loss. It went from single most profitable to single worst trade in Jane Street history.

This illustrates how difficult it can be to predict the reaction of markets, even for the smartest people in the room…

…Overall, U.S. stocks performed better than average after both Trump and Biden’s election victories. However, with the market increasing by 4% in 2016 and 7% in 2020, Biden is the clear post-election winner.

However, if we look at how U.S. stocks performed throughout the rest of their presidency, it seems like Trump will be the clear winner when all is said and done…

…One of the reasons I love this chart is because it illustrates that U.S. stocks tend to rise regardless of which political party is in office. This suggests that the factors that impact stock prices have less to do with who’s in office than we might initially believe.

Some of you will see the chart above and point out how the only two negative periods occurred when Republican presidents were in office. That is technically correct. However, it is also true that these negative periods occurred immediately after Democratic presidencies. So who’s to blame? The Republicans? The Democrats? Neither? No one knows…

…While the outcome of the 2024 U.S. Presidential election remains uncertain, history suggests that the stock market is likely to perform similarly regardless of who wins. In the short term, markets may react positively or negatively to the election results, but those effects tend to even out over time…

…Ultimately, the key to navigating the uncertainty of an election year is to stay informed and avoid making emotional decisions based on short-term political events. The U.S. economy and stock market have made it through countless political cycles before and will make it through this one as well. So no matter who wins in November, history suggests that staying the course is often the best course of action. 


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com