What We’re Reading (Week Ending 04 June 2023)

Reading helps us learn about the world and it is a really important aspect of investing. The legendary Charlie Munger even goes so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 04 June 2023):

1. Some Things We’ve Learned This Year – Ben Carlson

Tech stocks don’t need lower rates to go up. Tech stocks got crushed last year with the Nasdaq 100 falling more than 30%. The Fed raised interest rates from 0% to more than 4% so that didn’t help long-duration assets like growth stocks.

But there was this theory many people latched onto that tech stocks were only a rates play. In the 2010s and early-2020s rates were on the floor while tech stocks went bananas so it seemed apparent that there was an inverse relationship. When rates were lower tech stocks would do well and when rates were higher tech stocks would do poorly.

However, this year the Fed has now taken rates over 5% and could continue raising rates one, maybe two more times before all is said and done. Meanwhile, the Nasdaq 100 is up more than 30% in 2023.

Does this mean easy money had nothing to do with tech stock gains? I wouldn’t go that far. Low rates certainly helped long-duration assets. But low rates alone didn’t cause Apple to increase sales from $170 billion to nearly $400 billion in 10 years. Low rates have nothing to do with the AI speculation currently taking place with NVIDIA shares.

Interest rates are an important variable when it comes to the markets and economy. But rates alone don’t tell you the whole story when it comes to where people put their money. Tech stocks were also a fundamental play on innovations that have now become an integral part of all our lives…

Higher rates and inflation don’t guarantee poor stock market returns. There are a lot of market/econ people who think we could be in a new regime of higher rates and higher inflation. It’s a possibility worth considering. Many of those same people assume this will be a bad thing for markets. After all, the past 40+ years of financial market returns are all a product of disinflation and falling rates, right? Right?

Not so fast. These are the average annual returns for the U.S. stock market over a 40 year period of rising inflation and interest rates:

  • 1940-1979: 10.3% per year

And these are the average annual returns for the U.S. stock market over a 40 year period of falling inflation and interest rates:

  • 1980-2019: 11.7% per year

The results are surprising. Returns were better during the 1980-2019 period, but not by as much as one would think. I don’t know if we are entering a new regime of higher rates and inflation. But if we are, it doesn’t necessarily mean the stock market is doomed.

2. Private Equity Fundamentals – Daniel Rasmussen and Chris Satterthwaite

But we can look at the subset of PE-owned companies that are either publicly listed or have issued public debt as a partial reflection of what’s currently going on in the opaque but important asset class. And we can use this data to understand what’s happening to revenue, EBITDA, and debt generally across private portfolios.

We took a look at all PE/VC-owned public companies, or companies with public debt, that were 30%+ sponsor-owned, had IPOed since 2018, had a recognizable sponsor as the largest holder, and were headquartered in North America. There were 350 companies that met these criteria; the public equities are worth a combined $385B, and we estimate the companies with public debt are worth another $360B of equity, comprising $750B or 6.5% of the total private equity AUM of $11.7T. Notably, the sample of public equities is roughly 40% tech, which is a significant industry bet, and consistent with our previous estimates of private equity industry exposure…

… We looked at both pro-forma EBITDA, which 50% of the companies in our sample reported, and at GAAP EBITDA. We see below that PE-backed companies in our sample had significantly lower EBITDA margins than the S&P 500, especially on a GAAP basis, and have seen significant margin compression over the past few years. GAAP EBITDA is, perhaps unsurprisingly, much lower than adjusted EBITDA.

Rising SG&A costs have left the median company barely EBITDA profitable on a GAAP basis. 55% of the PE-backed firms in our sample were free cash flow negative in 2022, and 67% added debt over the last 12 months…

…As a group, these companies have a median leverage of 4.9x, which is roughly the ratio of the average B-rated company. However, this includes many overcapitalized VC-backed companies, which are difficult to parse out from the private equity LBOs. When we look at only those with net debt, the median leverage increases to 8.8x, which would put the median LBO well into CCC credit rating (for context, the median leverage for the S&P 500 is 1.7x).

With interest rates rising over 500bps in 2022, much of that increase is not yet reflected in the 2022 reported figures. The cost of loans has soared recently: a $1B loan for a junk-rated company now averages 12%, up from around a 7.5% average in 2021, according to Reuters…

…The sample of companies we looked at is nearly unprofitable on an EBITDA basis, mostly cash flow negative, and extraordinarily leveraged (mostly with floating-rate debt that is now costing nearly 12%). These companies trade at a dramatic premium to public markets on a GAAP basis, only reaching comparability after massive amounts of pro-forma adjustments. And these are the companies that most likely reflect the better outcomes in private equity. The market and SPAC boom of 2021 presented a window for private equity and venture capital firms to take companies public, and private investors took public what they thought they could. Presumably, what remains in the portfolios was what could not be taken public.

3. Olivine weathering – Campbell Nilsen

When the term ‘carbon sequestration’ comes up, most people think of trees: purchase a carbon credit when booking a flight and, more likely than not, you’ve paid someone to plant a sapling somewhere.

Unfortunately, tree planting has serious disadvantages. Most significantly, its space requirements are immense. To reduce atmospheric CO₂ (currently about 418 ppm) by 100 ppm, within striking distance of the 280 ppm found in preindustrial times, you’d need to convert 900 million hectares to mature forest (an area about 94 percent the size of mainland China and 85 percent the size of Europe).

Even if that were possible, mature forests (which sequester more carbon in their soil than in their trees) take a long time to grow, and much if not most of the land available for reforestation is held by private actors, which creates significant political difficulties.
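The 900-million-hectare figure can be sanity-checked with the standard conversion of roughly 2.13 gigatonnes of carbon per ppm of atmospheric CO₂. Both the 2.13 conversion factor and the implied per-hectare storage below are my assumptions, not numbers from the article:

```python
# Rough check of the reforestation numbers, assuming the standard
# conversion of ~2.13 GtC per ppm of atmospheric CO2.
GTC_PER_PPM = 2.13          # gigatonnes of carbon per ppm (assumed)
CO2_PER_C = 44.01 / 12.011  # mass ratio of CO2 to elemental carbon

co2_gt = 100 * GTC_PER_PPM * CO2_PER_C  # ~780 Gt CO2 for a 100 ppm drawdown
per_hectare = co2_gt * 1e9 / 900e6      # ~870 t CO2 per hectare of mature forest
```

The implied storage of roughly 870 tonnes of CO₂ per hectare (trees plus soil) is in the plausible range for mature forest, which is why the land requirement ends up so vast.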

More promising solutions for direct-air capture are likely to come from chemistry rather than biology. Several companies have broken ground in this field, such as Climeworks, Carbon Engineering and 1PointFive. All use a reusable sorbent, a chemical that reacts with CO₂ in the air and then releases it when energy is supplied (usually when it’s heated up). The captured, concentrated CO₂ is then pumped underground, where it is permanently trapped in geological formations in its gaseous, pressurized form, or mineralized into stable carbonates via reactions with the surrounding rock.

Sorbent-based direct-air capture is not a new idea, and is already used on space stations to moderate CO₂ levels. As in those space applications, Climeworks uses an amine sorbent, which releases its captured CO₂ at a relatively low temperature (about 100°C). Unfortunately, amine-based sorbents are extraordinarily expensive – a study on the economics of amine-based sorbents published last year concluded that each tonne of CO₂ captured would incur hundreds of dollars merely in capital expenditure costs for the sorbent. Energy costs are not trivial, either: each tonne sequestered requires no less than 150 kilowatt-hours (kWh).

It is no coincidence that Climeworks operates in Iceland, because the island’s active geology gives Climeworks access to ample carbon-free geothermal and hydro electricity at a very low cost. Even then, Climeworks currently charges €1,000 per tonne of CO₂ sequestered; its eventual goal is €600 a tonne. For comparison, the social cost of each additional tonne of CO₂ is currently thought to be somewhere around $185 (about €170 as of the time of writing), though getting an exact figure is devilishly tricky and the error bars are wide.

1PointFive and Carbon Engineering use potassium hydroxide as the sorbent, which is much cheaper than Climeworks’s amines, but the energy costs are almost as large. To regenerate potassium hydroxide, both companies use a process which includes heating a calciner (steel cylinder) up to 900°C. For Carbon Engineering, the cost of producing a concentrated stream of CO₂ was about $100-$200 a tonne as of 2018, not counting the cost of long-term sequestration.

Ultimately, solutions based on reusable sorbents suffer from a key drawback: once carbon dioxide has been absorbed in a chemical reaction, the resulting compound usually won’t give it back up in purified form unless lots of energy is added to the system. Moreover, sorbent-based processes merely produce a concentrated stream of CO₂, which must be stored (usually underground) or used.

This is easy for the first few thousand or even a million tonnes; for billions or trillions of tonnes, the logistics become nightmarish (though possible). Capturing a trillion tonnes of CO₂ (only 40 percent of humanity’s cumulative carbon emissions) via this process would require about eight times the world’s total yearly energy consumption merely to run the calciners. It could be a small useful addition to our carbon mitigation strategy, but it’s unlikely to help us roll back to a preindustrial environment.

If carbon capture with reusable sorbents is astronomically costly, at least for the time being, could we use a non-regenerating sorbent – something that absorbs CO₂ and locks it away for good?

There is a trade-off here. While we’d save the energy costs of cycling the sorbent and storing gaseous CO₂, we’d also need to produce and store truly massive amounts of sorbent. The alternatives would have to be easily available or cheaply manufactured in vast quantities; and because of the storage requirements (reaching into the trillions of tonnes) the compound would need to be non-toxic and environmentally inert. Processing the substance should require relatively little energy, and its reaction with ambient CO₂ needs to operate quickly.

The idea that silicate minerals might be able to fill this role is not, in and of itself, a new one; the earliest proposal of which I am aware is a three-paragraph letter to the editor in a 1990 issue of Nature, proposing that pressurized CO₂ be pumped into a container of water and silicates; five years later, the journal Energy published a somewhat longer outline for carbon sequestration using several intermediate steps. Neither idea went terribly far; popular activism focused on reducing emissions rather than sequestering them, and ideas published in academic journals remained mostly of academic interest.

In 2007, however, the Dutch press began entertaining a rather more sensational idea: the Netherlands’s, and perhaps the world’s, carbon emissions could be effectively and cheaply offset by spreading huge amounts of ground olivine rock – a commonly found, mostly worthless silicate rock composed mainly of forsterite, Mg₂SiO₄ – onto the shores of the North Sea, producing mile after aesthetically intriguing mile of green sand beaches as a side effect. The author of the proposal, Olaf Schuiling, envisioned repurposing thousands of tankers and trucks to ship ground rock from mines in Norway, covering the coast of the North Sea with shimmering golden-green sand and saving the human race from the consequences of the Industrial Revolution.

It seemed too good to be true – so in 2009 the geoscientists Suzanne Hangx and Chris Spiers published a rebuttal. While it was true that ground forsterite has significant sequestration potential on paper (each tonne of forsterite ultimately sequestering 1.25 tonnes of CO₂), Hangx and Spiers concluded that the logistics of Schuiling’s proposal would make the project an unworkable boondoggle.
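The 1.25-tonnes-per-tonne figure follows from the weathering stoichiometry of forsterite, Mg₂SiO₄ + 4CO₂ + 4H₂O → 2Mg²⁺ + 4HCO₃⁻ + H₄SiO₄: four molecules of CO₂ are drawn down per formula unit. A quick check from molar masses:

```python
# Molar masses in g/mol
MG, SI, O, C = 24.305, 28.086, 15.999, 12.011

forsterite = 2 * MG + SI + 4 * O  # Mg2SiO4, ~140.7 g/mol
co2 = C + 2 * O                   # ~44.0 g/mol

# Four CO2 consumed per formula unit of forsterite
ratio = 4 * co2 / forsterite      # ~1.25 t CO2 per t forsterite
```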

Start with transport requirements. For the past two decades, the Netherlands has emitted about 170 megatonnes of CO₂ a year on average; each year, around 136 megatonnes of olivine would be needed to sequester Dutch emissions in full. The nearest major olivine mine, Gusdal, is located in Norway, around a thousand kilometers away. Transporting the required olivine by sea with the most commonly used cargo ship (the $150 million Handysize vessel, with a capacity of about 25 kilotonnes), for example, would require over 100 trips a week – five percent of the world’s Handysize fleet – further clogging some of the world’s busiest waters for shipping. And that’s just for the Netherlands, which is only responsible for about 0.5 percent of the world’s carbon emissions.
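The shipping arithmetic checks out; a back-of-the-envelope version:

```python
# Handysize shipping requirement for Dutch emissions alone
olivine_per_year = 136e6  # tonnes of olivine needed annually
ship_capacity = 25e3      # tonnes per Handysize vessel

trips_per_year = olivine_per_year / ship_capacity  # 5,440 trips
trips_per_week = trips_per_year / 52               # ~105 trips a week
```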

Then there’s the environmental angle. While forsterite on its own is harmless, olivine usually contains trace amounts of other minerals and heavy metals, most prominently nickel, whose effect on marine life, while understudied, is known to be less than benign.

But the real Achilles’ heels of the Schuiling proposal were matters of physics. The rate of rock weathering is, to a first approximation, a function of three variables: the concentration of CO₂ in the water, the ambient temperature, and (most importantly by far) particle size. While CO₂ concentration in surface ocean water is about the same everywhere, temperature is not: sequestration by forsterite is about three times faster at 25°C (the approximate water temperature off the coast of Miami) than at 15°C (the average in the North Sea). But there’s another problem: olivine needs to be extremely small to weather effectively. Hangx and Spiers estimated that olivine particles 300 microns in diameter (the average size of a grain of beach sand) would take about 144 years to finish half their potential sequestration, and seven centuries to react completely…

…But what if the problems with Schuiling’s idea were in the execution, not the concept? The Intergovernmental Panel on Climate Change (or IPCC), the world’s most authoritative body on the problem, takes the climate and atmosphere of 1750 – when the atmosphere was about 280 ppm CO₂ – as its starting point. What would it take to return to this point?

Since that time, humanity has pumped a little over two trillion tonnes of CO₂ into the atmosphere, which would require about 1.6 trillion tonnes of raw olivine to sequester. You can imagine this as a cube measuring about eight kilometers or five miles on each side. Luckily for us, sources of high-quality olivine are fairly common, bordering on ubiquitous; and because it’s not (yet) very economically valuable, most deposits haven’t been thoroughly mapped. Assuming we’re simply trying to speed up natural processes, the end destination for the olivine will likely be the ocean.
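The eight-kilometer cube follows directly from the tonnage; the olivine density of roughly 3.3 t/m³ below is my assumption, since the article gives only the final cube size:

```python
co2_total = 2.0e12                  # tonnes of CO2 emitted since 1750
olivine_needed = co2_total / 1.25   # ~1.6 trillion tonnes of olivine
density = 3300                      # kg/m3, typical for olivine (assumed)

volume_m3 = olivine_needed * 1000 / density
side_km = volume_m3 ** (1 / 3) / 1000  # ~7.9 km on each side
```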

Rock weathering takes place only where the rock is exposed to the elements; a gigantic pile of olivine is only as good as its surface area, and the only way to increase surface area is to break the rock into smaller particles. If you halve the size of your particles, the surface area available is doubled at worst, and you sequester carbon at least twice as quickly (the exact proportion will depend on how many cracks and crevices there are in the breakage – the more jagged the particles, the more surface area and the faster sequestration proceeds). To get back to preindustrial concentrations on a time scale of decades, we’d want to process a lot of olivine and break it down into very small particles – not sand, which (with diameters in the hundreds of microns) is too large, but silt (with diameters in the 10-50 micron range).
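For idealized spherical grains, the "doubled at worst" claim is exact: specific surface area scales as 1/diameter, so halving particle size doubles the reactive area per tonne. A minimal sketch (real crushed grains are jagged, so this is the floor, as the article notes):

```python
import math

def specific_surface(diameter_m, rho=3300.0):
    """Surface area per kg for a sphere of the given diameter (m)."""
    r = diameter_m / 2
    area = 4 * math.pi * r ** 2               # m^2 per particle
    mass = rho * (4 / 3) * math.pi * r ** 3   # kg per particle
    return area / mass                        # simplifies to 6 / (rho * diameter)

# Halving diameter from 300 to 150 microns doubles area per unit mass
ratio = specific_surface(150e-6) / specific_surface(300e-6)
```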

What would it take to start making a serious dent in atmospheric CO₂? Say we shot for 80 gigatonnes of olivine a year, locking away 100 gigatonnes of CO₂ when fully weathered. Unlike many proposals for carbon sequestration, olivine intervention is not contingent on undiscovered or nascent technology. Let’s take a look at the process through the lens of an increasingly small grain of rock.

Our particle of olivine would begin its journey on a morning much like every day of the past hundreds of millions of years; it is part of a large deposit in the hills of Sulawesi, a fifteen-minute drive from the coast. (Indonesia is particularly well-suited for processing due to its vast expanse of shallow, tropical seas, but the ubiquity of olivine formations means that sequestration could happen in any number of places.)

This particular morning, however, is different. A mining worker has drilled a hole into the exposed surface of the formation, inserted a blasting cap, and – with a loud bang – smashed another fraction of the rock into pieces small enough to be carried by an excavator. The largest excavators in common use, which cost a bit under two million dollars each, can load about 70 tonnes at a time – a small, but important, fraction of the 220 megatonnes or so the world would need to process that day. Each of several hundred excavators takes no more than a minute or so to load up, complete a full trip to the haul truck, and come back to the front lines. It’s probably cheapest to run it, and the rest of the mining equipment, on diesel; even though it guzzles nearly 200 liters (50 gallons) an hour, the rock it carries will repay its five-tonne-a-day CO₂ footprint tens of thousands of times over.

Our grain of olivine (now part of a chunk the size of a briefcase) is off on a quick trip to the main processing facility in one of a few thousand haul trucks (each costing nearly five million dollars and carrying up to 400 tonnes at a time), where it’s subjected to a thorough pummeling until it’s reached pebble size. Then it’s off to a succession of rock mills to grind it down to the minuscule size needed for it to weather quickly.

It’s a good idea, at this point, to talk a bit about the main costs involved in such an immense proposal. As a rule of thumb, the smaller you want your end particles to be, the more expensive it is to get them there. Once a suitable olivine formation has been located, quarrying rock out of the formation is cheap. Even in high-income countries like Australia or Canada where mine workers make top-notch salaries, the cost of quarrying rock and crushing it down to gravel size is generally on the order of two to three dollars a tonne, and it requires very little energy. Since reversing global warming would entail the biggest quarrying operation in history, we might well expect costs to drop further. 

Depending on the deposit, haul trucks might prove unnecessary; it may be most cost-effective to have the crusher and mills follow the front lines. The wonderful thing about paying people to mill rocks is that we don’t have to know for sure from our armchair; the engineers tasked with keeping expenses to a minimum will figure it out as they go.

What is quite certain is that the vast majority of that expense, both financially and in terms of energy, comes not from mining or crushing but from milling the crushed rock down to particle size. Hangx and Spiers (the olivine skeptics above) estimated milling costs for end particles of various sizes; while sand-sized grains (300 microns across) required around eight kWh of energy per tonne of olivine processed, grains with a diameter of 37 microns were projected to need nearly three times as much energy input, and ten-micron grains a whopping 174 kWh per tonne. Since wholesale electricity prices worldwide are about 15 cents per kWh, that implies an energy cost of around $26 per tonne of olivine, or about $20 per tonne sequestered – at least $1.2 trillion a year, in other words, and a ten percent increase in the world’s electricity consumption. Can we do any better?

We probably can; it matters a lot, it turns out, what kind of rock mill you use. For example, while Hangx and Spiers assumed the use of a stirred media detritor (SMD) mill for the ten-micron silt, other researchers showed that a wet-attrition miller (WAM), working on equal amounts of rock and water, could achieve an average particle size of under four microns for an all-inclusive energy cost of 61 kWh ($9.15) per tonne of rock – about $7.32 per sequestered tonne of CO₂, or around $732 billion a year in energy costs.

And the largest rock mills are large indeed; the biggest on the market can process tens of thousands of tonnes a day. It should be clear by now that capital expenditures, while not irrelevant, are small compared to the cost of energy. Though there’s no way to know for sure until and unless the sequestration industry reaches maturity, a reasonable upper estimate for capital investment is about $1.60 per tonne of CO₂ sequestered, giving a total cost per sequestered tonne of no more than nine dollars. The resulting bill of $900 billion per year might sound gargantuan – but it’s worth remembering that the world economy is a hundred-trillion-dollar-a-year behemoth, and each tonne of carbon dioxide not sequestered is more than 20 times as costly.
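The cost figures in the last few paragraphs chain together consistently:

```python
# Per-tonne costs using the WAM milling figures, at the article's
# quoted wholesale electricity price.
kwh_per_tonne_rock = 61           # kWh to mill one tonne of olivine
price_per_kwh = 0.15              # dollars per kWh
tonnes_co2_per_tonne_rock = 1.25  # sequestration potential of forsterite

energy_cost_rock = kwh_per_tonne_rock * price_per_kwh           # $9.15
energy_cost_co2 = energy_cost_rock / tonnes_co2_per_tonne_rock  # ~$7.32
capex_per_tonne_co2 = 1.60                                      # upper estimate
total_per_tonne = energy_cost_co2 + capex_per_tonne_co2         # ~$8.92
annual_bill = total_per_tonne * 100e9                           # ~$900 billion
```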

Upon its exit from the mill, our particle, now just five to ten microns in diameter, finds itself in a fine slurry, half water by mass. Silicates usually find their way down to the ocean via rivers, so we’ll have to build our own. Thankfully, the water requirements are not high in the grand scheme of things. 80 gigatonnes of rock a year will need about 2300 cubic meters of water a second; split across dozens of mines worldwide, water requirements can easily be met by drawing from rivers or, in a pinch, desalinating ocean water.

The slurry is pumped into a large concrete pipe (since it’s flowing downhill, energy costs are minimal), and our particle of magnesium silicate comes to rest on the ocean floor of the Java Sea, where it reacts with dissolved carbon dioxide and locks it away as magnesium bicarbonate within a few years. (Because the Java Sea is shallow, it is constantly replenished with atmospheric CO₂ from rainwater and ocean currents. Carbon in the deep ocean is cycled at a far slower pace.) 

While there are a handful of trace minerals in most olivine formations, especially nickel and iron, the ecological costs are local and pale in comparison to the global ecological costs of global warming and ocean acidification.

4. Agfa-Gevaert and Activist Investing in Europe – A Case Study – Swen Lorenz

Germany, the largest economy in Continental Europe, makes for an interesting case study. As the annual review of Activist Insight mentions in its 2017 edition: “Germany has long been a laggard in the space of shareholder activism due to both legal and cultural challenges.”

That’s a very diplomatic way of putting it. Legal scholars with a knack for history will point to a much juicier origin of the problem.

The reason it had long been tremendously tricky to hold German boards to account for underperformance dates back to the legal system established by the Nazis. Germany’s first extensive corporate law was written in 1937, and the new legal code’s approach to managing corporations was based on the “Fuehrer principle” (Führerprinzip).

Anyone who wants to study the relevant history should get a copy of “Aktienrecht im Wandel” (roughly: “Corporate Law during changing times”), the definitive two-volume book covering the last 200 years of German commercial law.

The Nazis specifically wanted to create a corporate law designed to:

  • Fend off “the operational and economic damage caused by anonymous, powerful capitalists”.
  • Enable directors to manage companies “for the benefit of the enterprise, the people, and the Reich”.
  • “Push back the power of the shareholders meeting”.

The Nazis lost the war, but the legal system underpinning German corporations and much of the underlying culture remained in place. It was only in 1965 that Germany’s corporate law was significantly reformed, primarily because of one man’s outrageously broad influence over leading German corporations: Hermann Josef Abs, who had been a director of Deutsche Bank since 1938.

During the years of Germany’s so-called economic miracle, Abs had created an impenetrable network of cross-holdings among companies and directorship positions doled out among a small clique of leading figures. This powerful elite of directors shielded each other from accountability; even investors with large-scale financial firepower found many German companies an impenetrable fortress. Germany’s government had no other choice but to (finally) act. The Lex Abs, as the legal reform was called in a rare legislative reference to one specific individual, did away with at least some of the corporate law’s problematic aspects.

Changing the legal code was one thing, changing the underlying culture another. So powerful and deeply-rooted was Abs & Co.’s system that I came across its influence on the German stock market as recently as the late 1990s. Germany’s large, publicly listed corporations used to be a closed shop, summarised by the expression “Deutschland AG” in foreign media.

It was only during the early 2000s that shareholder activism slowly started to become a more regular occurrence in Germany and across Continental Europe. Factors such as a generational change on boards, further legislative reforms, and a large number of newly listed companies managed by internationally trained directors and entrepreneurs led to an increased prevalence of the activist approach.

Once you join the dots from a 30,000-foot perspective and with the benefit of hindsight, it’s incredible how long it takes to soften up a well-entrenched system. Quite literally, it required the generation who had created the system to die.

5. Walking naturally after spinal cord injury using a brain–spine interface – [Numerous authors]

A spinal cord injury interrupts the communication between the brain and the region of the spinal cord that produces walking, leading to paralysis1,2. Here, we restored this communication with a digital bridge between the brain and spinal cord that enabled an individual with chronic tetraplegia to stand and walk naturally in community settings. This brain–spine interface (BSI) consists of fully implanted recording and stimulation systems that establish a direct link between cortical signals3 and the analogue modulation of epidural electrical stimulation targeting the spinal cord regions involved in the production of walking4,5,6. A highly reliable BSI is calibrated within a few minutes. This reliability has remained stable over one year, including during independent use at home. The participant reports that the BSI enables natural control over the movements of his legs to stand, walk, climb stairs and even traverse complex terrains. Moreover, neurorehabilitation supported by the BSI improved neurological recovery. The participant regained the ability to walk with crutches overground even when the BSI was switched off. This digital bridge establishes a framework to restore natural control of movement after paralysis…

…To establish this digital bridge, we integrated two fully implanted systems that enable recording of cortical activity and stimulation of the lumbosacral spinal cord wirelessly and in real time (Fig. 1a).

To monitor electrocorticographic (ECoG) signals from the sensorimotor cortex, we leveraged the WIMAGINE technology3,20. WIMAGINE implants consist of an 8-by-8 grid of 64 electrodes (4 mm × 4.5 mm pitch in anteroposterior and mediolateral axes, respectively) and recording electronics that are embedded within a 50 mm diameter, circular-shaped titanium case that has the same thickness as the skull. The geometry of the system favours close and stable contact between the electrodes and the dura mater, and renders the devices invisible once implanted within the skull.

Two external antennas are embedded within a personalized headset that ensures reliable coupling with the implants. The first antenna powers the implanted electronics through inductive coupling (high frequency, 13.56 MHz), whereas the second, ultrahigh frequency antenna (UHF, 402–405 MHz) transfers ECoG signals in real time to a portable base station and processing unit, which generates online predictions of motor intentions on the basis of these signals (Extended Data Fig. 1).

The decoded motor intentions are then converted into stimulation commands that are transferred to tailored software running on the same processing unit.

These commands are delivered to the ACTIVA RC implantable pulse generator (Fig. 1a), which is commonly used to deliver deep brain stimulation in patients with Parkinson’s disease. We upgraded this implant with wireless communication modules that enabled real-time adjustment over the location and timing of epidural electrical stimulation with a latency of about 100 ms (Extended Data Fig. 1).

Electrical currents are then delivered to the targeted dorsal root entry zones using the Specify 5-6-5 implantable paddle lead, which consists of an array incorporating 16 electrodes.

This integrated chain of hardware and software established a wireless digital bridge between the brain and the spinal cord: a brain–spine interface (BSI) that converts cortical activity into the analogue modulation of epidural electrical stimulation programs to tune lower limb muscle activation, and thus regain standing and walking after paralysis due to a spinal cord injury (Supplementary Video 1).
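The chain the paper describes – ECoG features in, decoded intention out, stimulation modulated in proportion – can be caricatured in a few lines. Everything below is a hypothetical sketch: the actual decoder is a trained model over 64-channel ECoG, and none of these names or numbers come from the paper.

```python
def decode_intention(ecog_features):
    """Toy stand-in for the decoder: maps features to a 0-1 intention score."""
    score = sum(ecog_features) / len(ecog_features)
    return max(0.0, min(1.0, score))

def stimulation_amplitude(intention, max_ma=10.0):
    """Analogue modulation: scale epidural stimulation with decoded intent."""
    return intention * max_ma

# One tick of the ~100 ms closed loop described above
amp = stimulation_amplitude(decode_intention([0.2, 0.6, 0.4]))
```

The point of the sketch is the closed-loop structure, not the arithmetic: cortical signals are decoded continuously, and the stimulation command is updated each cycle with roughly 100 ms latency.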


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Apple. Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com