What We’re Reading (Week Ending 26 November 2023)

Reading helps us learn about the world and it is a really important aspect of investing. The legendary Charlie Munger even goes so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 26 November 2023):

1. 11 Signs to Avoid Management Meltdowns – Todd Wenning

Pressure to maintain those numbers

Anyone who’s made it to the C-suite understands that missing Wall Street estimates can result in a stock price drop. There’s natural pressure to satisfy investors, particularly when the stock price drives a big part of employee compensation.

Some of that pressure can be good, but it can also lead to unethical decisions when a company can’t achieve those numbers in the ordinary course of business. A company may, for example, stuff a channel with inventory to pull forward demand. That can work for a while, but eventually, all the customers’ warehouses are full.

Companies might also make an acquisition, alter segment reporting, or take some type of restructuring initiative to reset investor expectations. These moves should be viewed with skepticism…

Young’uns and bigger-than-life CEOs

An iconic CEO who surrounds themselves with young, ambitious employees can be a warning sign. Jennings argues that’s because:

“These young’uns don’t have enough experience or wisdom to challenge the CEO, and the CEO has roped them in with executive success. They are hooked on the cash and its trappings and cannot speak up about obvious ethical and legal issues because they might lose the homes, the boats, the cars, and, yes, the prestige that comes with astronomical financial success at a young age.”

In contrast, when a CEO has an experienced team who – critically – have financial and professional options other than working at the company, it’s far less likely (though not impossible) for misbehavior to persist for long…

Innovation like no other

As technological advances accelerate, we’re more frequently dazzled by their potential impacts. Jennings warns that companies behind these technologies may consider themselves “as being above the fray, below the radar, and generally not subject to the laws of either economics or gravity.”

Founders and executives of tremendously successful companies often receive accolades from the business and financial media, as well as their local communities. In turn, this feedback can create an inflated sense of self-importance.

To illustrate, here’s a clip from a December 2000 press release announcing that Fortune magazine named Enron one of the 100 best companies to work for in America.

Enron adds the “100 Best Companies to Work For in America” distinction to its “Most Innovative Company in America” accolade, which it has received from Fortune magazine for the past five years. The magazine also has named Enron the top company for “Quality of Management” and the second best company for “Employee Talent.”

When a company gets this type of public reinforcement, it can provide mental cover for justifying other actions.

As an antidote to this red flag, Jennings suggests being on the lookout for how management responds to external questions about the company, its performance, or its tremendous growth. If, rather than thoughtfully responding to a tough question, management launches an ad hominem attack against the questioner, be on your guard…

Obsession with short sellers: If the company has been the target of a well-distributed short thesis, there are two appropriate responses for the types of companies we want to own. One is to ignore it and focus on the business. In 1992, when Fastenal founder Bob Kierlin was asked about the huge short interest in his stock, he replied: “I’ve got nothing against short sellers…They have a role in the market place, too. My own portfolio has a couple of short positions. In the long run, the truth will always come out.” The second is to calmly and thoughtfully respond to short seller concerns, as Netflix’s Reed Hastings did in reply to Whitney Tilson. Any other type of response – particularly when it’s driven by emotion – is a warning sign.

2. Waking up science’s sleeping beauties – Ulkar Aghayeva

Some scientific papers receive very little attention after their publication – some, indeed, receive no attention whatsoever. Others, though, can languish with few citations for years or decades, but are eventually rediscovered and become highly cited. These are the so-called ‘sleeping beauties’ of science.

The reasons for their hibernation vary. Sometimes it is because contemporaneous scientists lack the tools or practical technology to test the idea. Other times, the scientific community does not understand or appreciate what has been discovered, perhaps because of a lack of theory. Yet other times it’s a more sublunary reason: the paper is simply published somewhere obscure and it never makes its way to the right readers…

…The term sleeping beauties was coined by Anthony van Raan, a researcher in quantitative studies of science, in 2004. In his study, he identified sleeping beauties between 1980 and 2000 based on three criteria: first, the length of their ‘sleep’ during which they received few if any citations. Second, the depth of that sleep – the average number of citations during the sleeping period. And third, the intensity of their awakening – the number of citations that came in the four years after the sleeping period ended. Equipped with (somewhat arbitrarily chosen) thresholds for these criteria, van Raan identified sleeping beauties at a rate of about 0.01 percent of all published papers in a given year.
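
To make van Raan’s three criteria concrete, here’s a quick sketch we put together of how a citation history could be screened against them. The thresholds and the example numbers are illustrative placeholders of ours, not van Raan’s published values, and the sleep period is simply assumed to start at publication.

```python
from typing import List

def is_sleeping_beauty(citations_per_year: List[int],
                       sleep_years: int = 10,
                       max_avg_citations_asleep: float = 1.0,
                       min_citations_awake: int = 20,
                       awakening_window: int = 4) -> bool:
    """Screen a paper's yearly citation counts against van Raan-style criteria.

    1. Length of sleep: at least `sleep_years` years after publication.
    2. Depth of sleep: average citations per year while asleep is low.
    3. Intensity of awakening: citations in the window after waking are high.
    The sleep period is assumed to start at publication, and the threshold
    values here are illustrative, not van Raan's published ones.
    """
    if len(citations_per_year) < sleep_years + awakening_window:
        return False  # not enough history to judge

    asleep = citations_per_year[:sleep_years]
    awake = citations_per_year[sleep_years:sleep_years + awakening_window]

    deep_enough_sleep = sum(asleep) / len(asleep) <= max_avg_citations_asleep
    intense_awakening = sum(awake) >= min_citations_awake
    return deep_enough_sleep and intense_awakening

# Example: a decade of near-silence followed by a burst of citations.
history = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 5, 12, 18, 25]
print(is_sleeping_beauty(history))  # True under these illustrative thresholds
```

A real analysis would also have to locate the awakening year rather than assuming it, which is where the “somewhat arbitrarily chosen” thresholds van Raan mentions come in.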

Later studies hinted that sleeping beauties are even more common than that. A systematic study in 2015, using data from 384,649 papers published in American Physical Society journals, along with 22,379,244 papers from the search engine Web of Science, found a wide, continuous range of delayed recognition of papers in all scientific fields. This increases the estimated prevalence of sleeping beauties by at least 100-fold compared to van Raan’s.

Many of those papers became highly influential many decades after their publication – far longer than the typical time windows for measuring citation impact. For example, Herbert Freundlich’s paper ‘Concerning Adsorption in Solutions’ (though its original title is in German) was published in 1907, but began being regularly cited in the early 2000s due to its relevance to new water purification technologies. William Hummers and Richard Offeman’s ‘Preparation of Graphitic Oxide’, published in 1958, also didn’t ‘awaken’ until the 2000s: in this case because it was very relevant to the creation of the soon-to-be Nobel Prize–winning material graphene.

Both of these examples are from ‘hard’ sciences – and interestingly, in physics, chemistry, and mathematics, sleeping beauties seem to occur at higher rates than in other scientific fields.

Indeed, one of the most famous physics papers, Albert Einstein, Boris Podolsky, and Nathan Rosen (EPR)’s ‘Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?’ (1935) is a classic example of a sleeping beauty. It’s number 14 on one list that quantifies sleeping beauties by how long they slept and how many citations they suddenly accrued…

…The EPR paper wasn’t hidden in a third-tier journal, unread by the scientific community. Indeed, it generated intense debate, even a New York Times headline. But in terms of its citations, it was a sleeper: it received many fewer citations than one would expect because it needed testing, but that testing wasn’t feasible for a long time afterward…

…In some cases, a sleeping beauty comes without the kind of great mystery attached to the EPR paper. In some cases, scientists understand something well enough – but just don’t know what to do with it.

The first report of the green fluorescent protein (GFP) – a crucial ingredient in many modern biological experiments because of its ability to glow brightly under ultraviolet light, and thus act as a clear indicator of cellular processes like gene expression and protein dynamics – was published in 1962 in the Journal of Cellular and Comparative Physiology. GFP had been discovered in the jellyfish Aequorea victoria in research led by the marine biologist Osamu Shimomura.

Over the summers of the following 19 years, 85,000 A. victoria jellyfish were caught off Friday Harbor in Washington state in attempts to isolate sufficient amounts of GFP that allowed for a more thorough characterization. This resulted in a series of papers between 1974 and 1979. But as Shimomura admitted in one of the interviews many years later, ‘I didn’t know any use of . . . that fluorescent protein, at that time.’

In 1992, things changed. The protein was cloned, and the relevant genetic information was passed on to the biologist Martin Chalfie. Chalfie was first to come up with the idea of expressing GFP transgenically in E. coli bacteria and C. elegans worms. He demonstrated that GFP could be used as a fluorescent marker in living organisms, opening up new worlds of experimentation. GFP is now a routinely used tool across swathes of cell biology…

…With that caveat on the record, we can look at a final example of a true sleeping beauty – one that perhaps has the most to teach us about how to awaken dormant knowledge in science.

In 1911, the pathologist Francis Peyton Rous published a paper in which he reported that when he injected a healthy chicken with a filtered tumor extract from a cancerous chicken, the healthy chicken developed a sarcoma (a type of cancer affecting connective tissue). The extract had been carefully filtered to remove any host cells and bacteria, which might be expected to cause cancer, so another factor must have been at play to explain the contagious cancer.

It turned out that the cause of the tumor in the injected chicken was a virus – but Rous wasn’t able to isolate it at the time.

The importance of his study, and the paper reporting it, wasn’t recognized until after 1951, when a murine leukemia virus was isolated. This opened the door to the era of tumor virology – and to many citations for Rous’s initial paper. The virus Rous had unknowingly discovered in his 1911 paper became known as the Rous sarcoma virus (RSV), and Rous was awarded the Nobel Prize in Medicine in 1966, 55 years after publishing…

…Another lesson is related to collaboration. It could be that the techniques and knowledge required to fully exploit a discovery in one field lie, partly or wholly, in an entirely different one. A study from 2022 showed empirically how the ‘distance’ between biomedical findings – whether they were from similar subfields or ones that generally never cite each other – determines whether they tend to be combined to form new knowledge.

‘Biomedical scientists’, as the paper’s author, data scientist Raul Rodriguez-Esteban, put it, ‘appear to have a wide set of facts available, from which they only end up publishing discoveries about a small subset’. Perhaps understandably, they tend to ‘reach more often for facts that are closer’. Encouraging interdisciplinary collaboration, and encouraging scientists to keep an open mind about who they might work with, could help extend that reach.

That, of course, is easier said than done. Perhaps the most modern tools we have available – namely, powerful AI systems – could help us. It is possible to train an AI to escape the disciplinary lines of universities, instead generating ‘alien’, yet scientifically plausible, hypotheses from across the entire scientific literature.

These might be based, for example, on the identification of unstudied pairs of scientific concepts, unlikely to be imagined by human scientists in the near future. It’s already been shown in research on natural language processing that a purely textual analysis of published studies could potentially glean gene-disease associations or drug targets years before a human, or a human-led analysis, would discover them.
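
To give a flavour of the ‘unstudied pairs’ idea, here’s a toy sketch of ours: the abstracts, concept list, and thresholds below are made up, and real systems would use far richer language models over millions of papers, but the basic move – flag concept pairs that are each well studied on their own yet never appear together – is the same.

```python
from itertools import combinations

# Toy corpus and concept list; in practice these would come from a large
# literature database, not a handful of strings.
abstracts = [
    "gene X regulates inflammation in cardiac tissue",
    "drug A reduces inflammation in mouse models",
    "gene X expression correlates with tumor growth",
    "drug A binds receptor B in neurons",
]
concepts = ["gene X", "drug A", "inflammation", "receptor B"]

# Count how often each concept appears, and how often each pair co-occurs.
appearances = {c: sum(c in a for a in abstracts) for c in concepts}
co_occurrence = {
    (c1, c2): sum(c1 in a and c2 in a for a in abstracts)
    for c1, c2 in combinations(concepts, 2)
}

# Candidate "unstudied pairs": both concepts are reasonably well studied on
# their own, but never appear together in the same abstract.
candidates = [
    pair for pair, count in co_occurrence.items()
    if count == 0 and all(appearances[c] >= 2 for c in pair)
]
print(candidates)  # [('gene X', 'drug A')] in this toy example
```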

3. An Interview with Intel CEO Pat Gelsinger About Intel’s Progress Towards Process Leadership – Ben Thompson and Pat Gelsinger

Well, tell me more about that. Why is advanced packaging the future? I know this has been a big focus for Intel, it’s something you want to talk about, and from everything I know you have good reason to want to talk about it, your technology is leading the way. Why is that so important in addition to the traditional Moore’s Law shrinking transistors, et cetera? Why do we need to start stacking these chiplets?

PG: Well, there’s about ten good reasons here, Ben.

Give me the top ones.

PG: I’ll give you the top few. One is, obviously, one of your last pieces talked about the economics of Moore’s Law on the leading edge node, well now you’re able to take the performance-sensitive transistors and move them to the leading edge node, but leverage some other technologies for other things, power delivery, graphics, IP sensitive, I/O sensitive, so you get to mix-and-match technologies more effectively this way.

Second, we can actually compose the chiplets to more appropriate die sizes to manage defect density as well, and particularly as we get to some of the bigger server chips, if you have a monster server die, well, you’re going to be dictated to be n-2, n-3, just because of the monster server die size. Now I get to carve up that server chip and leverage it more effectively on a 3D construct. So I get to move the advanced nodes for computing more rapidly and not be subject to some of the issues, defect density, early in the life of a new process technology. Additionally, we’re starting to run into some different scaling aspects of Moore’s Law as well.

Right.

PG: SRAMs in particular, SRAM scaling will become a bigger and bigger issue going forward. So I actually don’t get benefit by moving a lot of my cache to the next generation node like I do for logic, power, and performance. I actually want to have a 3D construct where I have lots of cache in a base die, and put the advanced computing on top of it into a 3D sandwich, and now you get the best of a cache architecture and the best of the next generation of Moore’s law so it actually creates a much more effective architectural model in the future. Additionally, generally, you’re struggling with the power performance and speed of light between chips.

Right. So how do you solve that with the chiplet when they’re no longer on the same die?

PG: Well, all of a sudden, in the chiplet construct, we’re going to be able to put tens of thousands of bond connections between different chiplets inside of an advanced package. So you’re going to be able to have very high bandwidth, low latency, low power consumption interfaces between chiplets. Racks become systems, systems become chips in this architecture, so it becomes actually a very natural scaling element as we look forward, and it also becomes very economic for design cycles. Hey, I can design a chiplet with this I/O…

The yield advantages of doing smaller dedicated chiplets instead of these huge chips is super obvious, but are there increased yield challenges from putting in all these tens of thousands of bonds between the chips, or is it just a much simpler manufacturing problem that makes up for whatever challenges there might be otherwise?

PG: Clearly, there are challenges here, and that’s another area that Intel actually has some quite unique advantages. One of these, we can do singulated die testing. Literally, we can carve up and do testing at the individual chiplet level before you actually get to a package, so you’re able to take very high yielding chiplets into the rest of the manufacturing process. If you couldn’t do that, now you’re subject to the order of effects of being able to have defects across individual dies, so you need to be able to have very high yielding individual chiplets, you need to be able to test those at temperature as well, so you really can produce a good, and then you need a high yielding manufacturing process as you bring them into an advanced substrate.
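
As a back-of-the-envelope illustration of the yield argument in this exchange – the model and every number below are ours, not Gelsinger’s – the classic Poisson yield approximation shows why smaller chiplets, combined with singulated die testing, fare better on an immature process:

```python
import math

def yield_rate(defect_density: float, die_area: float) -> float:
    """Poisson yield model: probability that a die of the given area has no defects."""
    return math.exp(-defect_density * die_area)

# Hypothetical numbers: 0.2 defects per cm^2 on a new process node.
D = 0.2
big_die = 6.0   # one monolithic 6 cm^2 server die
chiplet = 1.5   # the same logic split into four 1.5 cm^2 chiplets

monolithic_yield = yield_rate(D, big_die)
# With singulated die testing, only known-good chiplets are packaged, so the
# relevant figure is the per-chiplet yield, not the product of all four.
per_chiplet_yield = yield_rate(D, chiplet)

print(f"monolithic die yield: {monolithic_yield:.1%}")   # ~30%
print(f"per-chiplet yield:    {per_chiplet_yield:.1%}")  # ~74%
```

Because only chiplets that test good get packaged, the package-level yield is driven largely by the bonding and assembly steps rather than by multiplying four raw die yields together, which is why the singulated die testing Gelsinger describes matters so much.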

Why is Intel differentiated in this? What is your advantage that you think is sustainable? You’ve talked about it already being the initial driver of your Foundry Service.

PG: Yeah, it’s a variety of things. We’ve been building big dies and servers for quite a while, so we have a lot of big die expertise. Our advanced packaging with Foveros and EMIB (Embedded Multi-Die Interconnect Bridge) and now hybrid bonding and direct copper-to-copper interface, we see ourself as years ahead of other technology providers in the industry. Also, as an integrated IDM, we were developing many of these testing techniques already. So we have our unique testers that we have that allow us to do many of these things in high yield production today at scale, and we’re now building factories that allow us to do wafer level assembly, multi-die environment.

So it really brings together many of the things that Intel is doing as an IDM, now bringing it together in a heterogeneous environment where we’re taking TSMC dies. We’re going to be using other foundries in the industry, we’re standardizing that with UCIe. So I really see ourself as the front end of this multi-chip chiplet world doing so in the Intel way, standardizing it for the industry’s participation with UCIe, and then just winning a better technology…

And then PowerVia and RibbonFET: number one, explain them to me and to my readers. Number two, why are they so important? And number three, which one is more important? Are they inextricably linked?

PG: Yeah, PowerVia, let’s start with the easier one first. Basically, when you look at the metal stack-up in a modern process, a leading edge technology might have fifteen to twenty metal layers. Metal one, metal two…

And the transistors all the way at the bottom.

PG: Right, and transistors down here. So it’s just an incredible skyscraper design. Well, the top level of metals is almost entirely used for power delivery, so now you have to take signals and weave them up through this lattice. And then you want big fat metals and why do you want them fat? So you get good RC characteristics, you don’t get inductance, you’re able to have low IR drop right across these big dies.

But then you get lots of interference.

PG: Yeah and then they’re screwing up your metal routing that you want for all of your signals. So the idea of taking them from the top and using wafer-level assembly and moving them to the bottom is magic, right? It really is one of those things, where the first time I saw this laid out, as a former chip designer, I was like, “Hallelujah!”, because now you’re not struggling with a lot of the overall topology, die planning considerations, and you’re going to get better metal characteristics because now I can make them really fat, really big and right where I want them at the transistor, so this is really pretty powerful. And as we’ve done a lot of layout designs now we get both layout efficiency because all of my signal routing gets better, I’m able to make my power delivery and clock networks far more effectively this way, and get better IR characteristics. So less voltage drops, less guard banding requirements, so it ends up being performance and area and design efficiency because the EDA tools —

Right. It just becomes much simpler.

PG: Everybody loves this. That’s PowerVia, and this is really an Intel innovation. The industry looked at this and said, “Wow, these guys are years ahead of anything else”, and now everybody else is racing to catch up, and so this is one where I say, “Man, we are ahead by years over the industry for backside power or PowerVia, and everybody’s racing to get their version up and running, and we’re already well underway and in our second and third generation of innovation here”.

On the transistor, that’s the gate-all-around (GAA) or we call it RibbonFET, our particular formulation for that. Samsung and TSMC have their variation of that, so I’ll say on PowerVia, well ahead, while everybody’s working on GAA and you can say, “Why is Intel better?”, well hey, when you’ve done every major transistor innovation for the last twenty-five years…

Just to step back and look back over your two to three years, in our previous interview we talked about the importance of the Foundry business being separate from the product business. This is something that I was very anchored on looking at your announcement, and it’s why I was excited about Meteor Lake for example, because to me that was a forcing function for Intel to separate the manufacturing and design parts. At the same time, you are not actually unveiling a separate P&L for it until early next year. What took so long? Was that the area where maybe you actually were moving too slowly?

PG: Well, when you say something like separate the P&L, it’s sort of like Intel hasn’t done this in almost our 60-year history. The idea that we’re going to run fully separate operations with fully separate financials, with fully separate allocation systems at ERP and financial levels, I joke internally, Ben, that the ERP and finance systems of Intel were old when I left, that was thirteen years ago and we are rebuilding all of the corporate systems that we sedimented into IDM 1.0 over a five-decade period.

Tearing all of that apart into truly separate operational disciplines as a fabless company and as a foundry company, that’s a lot of work. Do I wish it could have gone faster? Of course I do, but I wasn’t naive enough to say, “Wow, I can go make this happen really fast.” It was five decades of operational processes, and by the time we publish the financials in Q1 of next year, we’ll have gone through multiple quarters of trial-running them internally, and that’ll be the first time that we present it to the Street that way.

As we talk to Foundry customers we’re saying, “Come on in, let’s show you what we’re doing, test us.” And MediaTek, one of our early Foundry customers, “Hey, give us the feedback, give me the scorecard. How am I doing? What else do you need to see?”, start giving us the NPS scores for us as a Foundry customer, there’s a lot of work here. Yeah, I wish it would go faster but no, I’m not disappointed that it’s taken this long…

I mean, you’ve pushed vigorously, I would say, for the CHIPS Act and there was actually just a story in the Wall Street Journal, I saw it as I was driving in, that said Intel is the leading candidate for money for a national defense focused foundry, a secure enclave I think they called it, potentially in Arizona. But you mentioned the money aspect of being a foundry, and you have to be the first customer, but you’re the first customer with an also-threatened business that has— you talked about your earnings, you’re not filling your fabs currently as it is, and you don’t have trailing edge fabs spinning off cash to do this, you don’t have a customer base. Is this a situation where, “Look, if the US wants process leadership, we admit we screwed up, but we need help”?

PG: There are two things here. One is, hey, yeah, we realize that our business and our balance sheet, cash flows are not where they need to be. At the same time, there’s a fundamental economic disadvantage to build in US or Europe and the ecosystem that has emerged here (Taiwan), it’s lower cost.

Right. Which TSMC could tell you.

PG: Right. And hey, you look at some of the press that’s come out around their choice of building in the US, there’s grave concerns on their part of some of those cost gaps. The CHIPS Act is designed to close those cost gaps and I’m not asking for handouts by any means, but I’m saying for me to economically build major manufacturing in US and Europe, those cost gaps must be closed, because if I’m going to plunk down $30 billion for a major new manufacturing facility and out of the gate, I’m at a 30%, 40% cost disadvantage —

Even without the customer acquisition challenges or whatever it might be.

PG: At that point, no shareholders should look at me and say, “Please build more in the US or Europe.” They should say, “Well, move to Asia where the ecosystem is more mature and it’s more cost-effective to build.” That’s what the CHIPS Act was about: if we want balanced, resilient supply chains, we must close that economic gap so that we can build in the US and Europe as we have been. And trust me, I am fixing our issues but otherwise, I should go build in Asia as well, and I don’t think that’s the right thing for the world. We need balanced supply chains that are resilient for the Americas and for Europe and in Asia to have this most important resource delivered through supply chains around the world. That’s what the CHIPS Act was about.

I am concerned though. My big concern, just to put my cards on the table, is the trailing edge, where it’s basically Taiwan and China, and obviously China has its own issues, but if Taiwan were taken off the map, suddenly, part of what motivated the CHIPS Act was we couldn’t get chips for cars. Those are not 18A chips, maybe those will go into self-driving cars, I don’t want to muddy the waters, but that’s an issue where there’s no economic case to build a trailing edge fab today. Isn’t that a better use of government resources?

PG: Well, I disagree with that being a better use of resource, but I also don’t think it’s a singular use of resource on leading edge. And let me tease that apart a little bit. The first thing would be how many 28 nanometer fabs should I be building new today?

Economically, zero.

PG: Right, yeah, and I should be building zero economically in Asia as well.

Right. But China is going to because at least they can.

PG: Exactly. The economics are being contorted by export policy, not because it’s a good economic investment as well.

Right. And that’s my big concern about this policy, which is if China actually approaches this problem rationally, they should flood the market like the Japanese did in memory 40 years ago.

PG: For older nodes.

For older nodes, that’s right.

PG: Yeah because that’s what they’re able to go do and that does concern me as well. At the same time, as we go forward, how many people are going to be designing major new designs on 28 nanometers? Well, no. They’re going to be looking at 12 nanometers and then they’re going to be looking at 7 nanometers and eventually they will be moving their designs forward, and since it takes seven years for one of these new facilities to both be built, come online, become fully operational in that scale, let’s not shoot behind the duck.

And so your sense is that you are going to keep all these 12, 14 nanometer fabs online, they’re going to be fully depreciated. Even if there was a time period where it felt like 20 nanometer was a tipping point as far as economics, a fully depreciated 14 nanometer fab—

PG: And I’m going to be capturing more of that because even our fab network, I have a whole lot of 10 nanometer capacity. I’m going to fill that with something, I promise you, and it’s going to be deals like we just did with Tower. We’re going to do other things to fill in those as well because the depreciated assets will be filled. I’m going to run those factories forever from my perspective, and I’ll find good technologies to fill them in.

Let’s talk about AI. I know we’re running short on time, but there’s the question. I feel like AI is a great thing for Intel, despite the fact everyone is thinking about it being GPU-centric. On one hand, Nvidia is supply constrained and so you’re getting wins. I mean, you said Gaudi is supply constrained, which is not necessarily as fast as an Nvidia chip, I think, is safe to say. But I think the bull case, and you articulated this in your earnings call, is AI moving to the edge. Tell me this case and why it’s a good thing for Intel.

PG: Well, first I do think AI moves to the edge and there are two reasons for that. One is how many people build weather models? How many people use weather models? That’s training versus inference, the game will be in inference. How do we use AI models over time? And that’ll be the case in the cloud, that’ll be the case in the data center, but we see the AI uses versus the AI training becoming the dominant workload as we go into next year and beyond. The excitement of building your own model versus, “Okay, now we build it. Now what do we do with it?”

And why does Intel win that as opposed to GPUs?

PG: For that then you say, in the data center, you say, “Hey, we’re going to add AI capabilities.” And now gen four, Sapphire Rapids is a really pretty good inferencing machine, you just saw that announced by Naver in Korea. The economics there, I don’t now have to port my application, you get good AI performance on the portion of the workload where you’re inferencing, but you have all the benefits of the software ecosystem for the whole application.

But importantly, I think edge and client AI is governed by the three laws. The laws of economics: it is cheaper to do it on the client versus in the cloud. The laws of physics: it is faster to do it on the client versus round tripping your data to the cloud. And the third is the laws of the land: do I have data privacy? So for those three reasons, I believe there’s all going to be this push to inferencing to the edge and to the client and that’s where I think the action comes. That’s why Meteor Lake and the AIPC is something— 

4. Slicing and Dicing: How Apollo is Creating a Deconstructed Bank – Marc Rubinstein

Securitisation as a technology changed finance. By allowing loans to be repackaged for resale, it paved the way for the disintegration of the traditional value chain that cleaved loan origination to funding sources.

The basic form of an asset-backed security goes back a long time, but the modern-day version was born 40 years ago when First Boston, Salomon Brothers and Freddie Mac divided up residential mortgage pools into different tranches that allowed bondholders to be paid at different rates. Investors could choose between buying more expensive, higher rated bonds backed by tranches with first claim on payment flows, or purchasing subordinated bonds that were less expensive, lower rated and riskier. This technique helped the mortgage-backed securities market grow from $30 billion in 1982 to $265 billion in 1986.
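
As a simple illustration of the tranching mechanics described above – the numbers are invented for the example, not drawn from the article – here’s a sketch of a two-tranche payment waterfall in which the senior bondholders’ first claim on collections is honoured before the subordinated tranche receives anything:

```python
def waterfall(collections: float, senior_due: float, junior_due: float):
    """Distribute a period's collections through a simple two-tranche waterfall.

    Senior bondholders are paid first (their first claim on payment flows);
    whatever remains goes to the subordinated tranche, which therefore bears
    losses first when collections fall short.
    """
    senior_paid = min(collections, senior_due)
    junior_paid = min(collections - senior_paid, junior_due)
    return senior_paid, junior_paid

# A pool expected to collect 100: if only 85 comes in, the senior tranche
# (owed 70) is still paid in full, while the junior tranche (owed 30)
# absorbs the 15 shortfall.
print(waterfall(85.0, 70.0, 30.0))  # (70.0, 15.0)
```

Repeated period after period, that subordination is what allows the senior bonds to carry a higher rating, and a higher price, than the underlying loan pool.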

The market soon spread, moving beyond mortgages in the 1980s to include student loans, auto loans and credit card receivables. Eventually, issuers securitised more exotic revenue streams, creating, for example, Bowie bonds securitised by revenues from David Bowie’s back catalogue and even Bond bonds securitised by revenues from James Bond movies. Market growth was aided by a friendly regulatory environment, improvements in computing power and new information technologies. With increasing precision, the risks and revenues associated with debts could be identified, catalogued, isolated and sold.

The securitisation process involves a chain of participants. In a stylised version, a borrower sits at one end and takes out a loan from an originator. The originator then sells the loan into a special purpose entity which issues bonds against it, with the help of an underwriter. To provide originators with liquidity, banks offer warehouse facilities which act as a kind of institutional credit card, allowing them to finance pre-agreed eligible assets. The underwriter manages the sale of bonds to investors. To ensure payment flows continue uninterrupted, a servicer sits underneath the process, collecting cash from the borrower and passing it through to the investor…

…In its 13 years of experience investing in asset-backed securities, Apollo has deployed over $200 billion of capital. Its annualised loss rate: just 1.3 basis points. 

But Apollo reckons that it could deploy more if only it had access to more origination. “There is no shortage of capital,” said CEO Marc Rowan at an investor day two years ago. “What there is, is a shortage of assets.”

Hence, the firm has reversed back down the value chain into direct origination. And because it’s not necessarily able to invest in everything its origination platforms throw off, it has built up a capital solutions group as well, to distribute asset-backed loans to other market participants. Apollo also recently lifted a business from Credit Suisse which it has renamed Atlas SP that offers warehouse facilities, securitisation and syndication to other originators. So, like a deconstructed bank, it now operates right across the value chain in a fairly unique way.

Apollo currently operates 16 different origination engines. They operate as stand-alone companies focused on their particular niche, independently capitalised and with their own management and board of directors. In total, the firm has invested around $8 billion of equity capital into these businesses; they collectively manage $130 billion of assets and employ 3,900 staff. The companies are at different stages of maturity: Seven manage less than $2 billion of assets, including two that Apollo launched de-novo; six manage between $2 billion and $10 billion of assets; and three manage in excess of $20 billion…

…The problem with operating a range of origination platforms is that their track record – at least as public businesses – is not very good. Origination businesses need to manage two risks: liquidity risk and credit risk.

Historically, liquidity risk has brought many down. Their reliance on market funding sources entwines their fortunes with market sentiment, and markets can be skittish. Following the Russian debt crisis in 1998, market disruption led to a steep fall in demand among investors for risky assets, including subprime securitizations, even before a recession took hold three years later. Subprime originators saw their own borrowing costs skyrocket. In the two years following the crisis, eight of the top 10 subprime lenders declared bankruptcy, ceased operations or sold out to stronger firms…

…Apollo argues that its long-term insurance liabilities are a better match for asset financing than commercial paper, money markets or even a bank’s deposits. The firm may have a point. Deposit outflows at Silicon Valley Bank, Signature Bank and First Republic highlight that bank funding isn’t what it was and that its realised duration may be lower than anticipated.

The second risk is credit risk. Apollo reckons its diversification helps – across originators and across asset types. Its platforms operate over 30 product lines and each deploys a large funnel. Since being founded in 2008, MidCap has closed only 2,000 deals out of around 29,000 identified opportunities (a conversion rate of roughly 7%), having issued 6,800 term sheets along the way. Overall, the group’s platforms target a conversion of between 5% and 10% of opportunities. Such a large funnel avoids adverse selection.

5. Everything You Can’t Predict – Jack Raines

You would be hard-pressed to find a technological development from the last 20 years that is more important than “the cloud.”…

…Interestingly, the first company to launch an enterprise cloud solution wasn’t Amazon, Microsoft, or Google.

It was IBM.

Yes, IBM, whose stock price appreciated by a whopping 2.39% between August 2000 and April 2023, was the first entrant to the cloud space.

So, what went wrong?

IBM was the face of the computer industry for most of the 20th century, and in July 2002, they unveiled a service called Linux Virtual Services, which offered customers a radical new payment structure.

Historically, IBM had locked customers into long-term, fixed-price contracts in exchange for access to different hardware and software products. In contrast, Linux Virtual Services would allow customers to run their own software applications through IBM mainframes and pay based on their usage.

According to IBM, this usage-based payment model would result in savings of 20% – 55%.

Linux Virtual Services should have kickstarted a proliferation of cloud-based services, but instead, it was shut down a few years later…

…In 2002, IBM, a $130B computing giant with unlimited resources and a multi-decade head start, launched an enterprise cloud offering aimed at commoditizing computing power.

In 2002, Amazon, a $10B e-commerce store, was solving an internal engineering bottleneck.

In 2006, IBM shut down its cloud storage service.

In 2006, Amazon launched its cloud storage service.

In 2023, IBM is still worth $130B.

In 2023, Amazon is worth 10 IBMs, largely due to the success of AWS.

What went wrong at IBM? No one really knows, but Corry Wang, a former equity researcher at Bernstein, speculates that IBM’s sales team may have had misaligned incentives. In 2002, sales teams would have earned larger commissions on higher-priced, fixed contracts than on cheaper, usage-based contracts, and the new offerings would have cannibalized current customers as well. Why, as a salesperson, would you sell a service that made you less money?

Meanwhile, Amazon realized, almost by accident, that their internal solutions to infrastructure bottlenecks could be exported and sold as services. And Amazon didn’t have current SaaS customers to worry about cannibalizing, so their salespeople were free to sell the service to anyone.

16 years later, Amazon is the market leader in cloud, and IBM is stuck in 2006…

…Because it shows that predicting the future is easy, but predicting who wins in that future is much, much more difficult. By 2001, plenty of tech experts could have told you that cloud computing was going to emerge as an important technological development in a decade.

But how many of those experts would have predicted that an online bookstore would dominate the cloud market?

Picking trends is easy. Picking winners is hard.


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet (parent of Google), Amazon, Microsoft, Netflix, and TSMC. Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com