What We’re Reading (Week Ending 01 October 2023)

Reading helps us learn about the world, and it is a really important aspect of investing. The legendary Charlie Munger even goes so far as to say, “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 01 October 2023):

1. How scientists are using artificial intelligence – The Economist

In 2019, scientists at the Massachusetts Institute of Technology (MIT) did something unusual in modern medicine—they found a new antibiotic, halicin. In May this year another team found a second antibiotic, abaucin. What marked these two compounds out was not only their potential for use against two of the most dangerous known antibiotic-resistant bacteria, but also how they were identified.

In both cases, the researchers had used an artificial-intelligence (AI) model to search through millions of candidate compounds to identify those that would work best against each “superbug”. The model had been trained on the chemical structures of a few thousand known antibiotics and how well (or not) they had worked against the bugs in the lab. During this training, the model had worked out links between chemical structures and success at damaging bacteria. Once the AI spat out its shortlist, the scientists tested the shortlisted compounds in the lab and identified their antibiotics. If discovering new drugs is like searching for a needle in a haystack, says Regina Barzilay, a computer scientist at MIT who helped to find abaucin and halicin, AI acts like a metal detector. To get the candidate drugs from lab to clinic will take many years of medical trials. But there is no doubt that AI accelerated the initial trial-and-error part of the process…
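
The article does not describe the model’s internals, but the workflow it sketches (train on a few thousand labelled molecules, then score a huge candidate library and test only the top of the list) can be captured in a few lines. Below is a minimal illustration using scikit-learn, with randomly generated stand-ins for the molecular fingerprints and activity labels; it shows the shape of the screening pipeline, not the MIT team’s actual code.

```python
# Shape of the AI screening pipeline: train on a few thousand known
# antibiotics labelled by activity, then score a large candidate
# library and send only the top hits to the lab. Fingerprints and
# labels here are random stand-ins, not real chemistry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# A few thousand known antibiotics: bit-vector structural
# fingerprints plus whether each worked against the target bug.
known_fps = rng.integers(0, 2, size=(2000, 128), dtype=np.int8)
worked = rng.integers(0, 2, size=2000)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(known_fps, worked)

# Score a large candidate library in bulk; the real searches ran
# over millions of compounds.
candidates = rng.integers(0, 2, size=(50_000, 128), dtype=np.int8)
scores = model.predict_proba(candidates)[:, 1]
shortlist = np.argsort(scores)[::-1][:100]  # top 100 go to the wet lab
print("best candidate indices:", shortlist[:10])
```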

…In materials science, for example, the problem is similar to that in drug discovery—there are an unfathomable number of possible compounds. When researchers at the University of Liverpool were looking for materials that would have the very specific properties required to build better batteries, they used an AI model known as an “autoencoder” to search through all 200,000 of the known, stable crystalline compounds in the Inorganic Crystal Structure Database, the world’s largest such repository. The AI had previously learned the most important physical and chemical properties required for the new battery material to achieve its goals and applied those conditions to the search. It successfully reduced the pool of candidates for scientists to test in the lab from thousands to just five, saving time and money.
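
Again, the piece gives no implementation detail, but here is one plausible reading of how an autoencoder can act as a screening filter: train it on compounds that already show the desired properties, then keep the database entries it reconstructs well, on the logic that low reconstruction error signals chemical similarity. Everything below (the feature vectors, the network sizes, the ranking logic) is an illustrative assumption, not the Liverpool group’s model.

```python
# Autoencoder-as-filter sketch: learn a compressed representation of
# compounds known to have the desired properties, then keep database
# entries the model reconstructs well (low error ~ chemically similar).
# Features, sizes and data are synthetic illustrations.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES = 64

autoencoder = nn.Sequential(
    nn.Linear(N_FEATURES, 16), nn.ReLU(),  # encoder -> 16-dim latent space
    nn.Linear(16, N_FEATURES),             # decoder back to full features
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

# Descriptors of compounds that already show battery-relevant properties.
train_x = torch.rand(500, N_FEATURES)
for _ in range(200):
    loss = nn.functional.mse_loss(autoencoder(train_x), train_x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Screen the 200,000 known stable compounds: rank by reconstruction error.
database = torch.rand(200_000, N_FEATURES)
with torch.no_grad():
    err = ((autoencoder(database) - database) ** 2).mean(dim=1)
shortlist = torch.topk(err, k=5, largest=False).indices  # 5 best candidates
print(shortlist)
```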

The final candidate—a material combining lithium, tin, sulphur and chlorine—was novel, though it is too soon to tell whether or not it will work commercially. The AI method, however, is being used by researchers to discover other sorts of new materials…

…The shapes into which proteins twist themselves after they are made in a cell are vital to making them work. Scientists do not yet know how proteins fold. But in 2021, Google DeepMind developed AlphaFold, a model that had taught itself to predict the structure of a protein from its amino-acid sequence alone. Since it was released, AlphaFold has produced a database of more than 200m predicted protein structures, which has already been used by over 1.2m researchers. For example, Matthew Higgins, a biochemist at the University of Oxford, used AlphaFold to figure out the shape of a protein in mosquitoes that is important for the malaria parasite that the insects often carry. He was then able to use the predictions from AlphaFold to work out which parts of the protein would be easiest to target with a drug. Another team used AlphaFold to find—in just 30 days—the structure of a protein that influences how a type of liver cancer proliferates, thereby opening the door to designing a new targeted treatment.

AlphaFold has also contributed to the understanding of other bits of biology. The nucleus of a cell, for example, has gates to bring in material to produce proteins. A few years ago, scientists knew the gates existed, but knew little about their structure. Using AlphaFold, scientists predicted the structure and contributed to understanding about the internal mechanisms of the cell. “We don’t really completely understand how [the AI] came up with that structure,” says Pushmeet Kohli, one of AlphaFold’s inventors who now heads Google DeepMind’s “AI for Science” team. “But once it has made the structure, it is actually a foundation that now, the whole scientific community can build on top of.”…

…Pangu-Weather, an AI built by Huawei, a Chinese company, can make predictions about weather a week in advance thousands of times faster and cheaper than the current standard, without any meaningful dip in accuracy. FourCastNet, a model built by Nvidia, an American chipmaker, can generate such forecasts in less than two seconds, and is the first AI model to accurately predict rain at a high spatial resolution, which is important information for predicting natural disasters such as flash floods…

…One approach to fusion research involves creating a plasma (a superheated, electrically charged gas) of hydrogen inside a doughnut-shaped vessel called a tokamak. When hot enough, around 100m°C, particles in the plasma start to fuse and release energy. But if the plasma touches the walls of the tokamak, it will cool down and stop working, so physicists contain the gas within a magnetic cage. Finding the right configuration of magnetic fields is fiendishly difficult (“a bit like trying to hold a lump of jelly with knitting wool”, according to one physicist) and controlling it manually requires devising mathematical equations to predict what the plasma will do and then making thousands of small adjustments every second to around ten different magnetic coils. By contrast, an AI control system built by scientists at Google DeepMind and EPFL in Lausanne, Switzerland, allowed scientists to try out different shapes for the plasma in a computer simulation—and the AI then worked out how best to get there…
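
The DeepMind/EPFL controller is a deep reinforcement-learning agent, which is far beyond a snippet. The toy loop below shows only the structure of the control problem the paragraph describes: measure the plasma, compare it with the target shape, and nudge roughly ten coil settings over and over, using simple proportional feedback and a made-up stand-in for the simulator.

```python
# Skeleton of the control problem, not the DeepMind/EPFL RL agent:
# measure the plasma, compare against the target shape, and adjust
# ~10 magnetic coils thousands of times. The "simulator" is a toy.
import numpy as np

rng = np.random.default_rng(0)
N_COILS = 10
target_shape = rng.normal(size=N_COILS)  # desired plasma configuration
coil_currents = np.zeros(N_COILS)

def measure_plasma(currents):
    # Toy stand-in for the simulator: the shape tracks the coils,
    # plus drift the controller must fight continuously.
    return currents + rng.normal(scale=0.01, size=N_COILS)

for tick in range(10_000):
    error = target_shape - measure_plasma(coil_currents)
    coil_currents += 0.5 * error  # proportional correction per tick

print("max residual error:", np.abs(target_shape - coil_currents).max())
```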

…“Super-resolution” AI models can enhance cheap, low-resolution electron-microscope images into high-resolution ones that would otherwise have been too expensive to record. The AI compares a small area of a material or a biological sample in high resolution with the same thing recorded at a lower resolution. The model learns the difference between the two resolutions and can then translate between them…
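
The training recipe described here, pairing a cheap low-resolution scan with a high-resolution scan of the same region and learning the mapping between them, is easy to sketch. The snippet below is a minimal, assumption-laden version: random tensors stand in for electron-microscope images, and the low-resolution inputs are simulated by downsampling the high-resolution ones.

```python
# Paired super-resolution sketch: learn to map cheap low-resolution
# scans to expensive high-resolution ones. Random tensors stand in
# for micrographs; low-res inputs are simulated by downsampling the
# high-res "ground truth".
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

enhancer = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(enhancer.parameters(), lr=1e-3)

for step in range(100):
    hi = torch.rand(8, 1, 64, 64)               # high-res ground truth
    lo = F.avg_pool2d(hi, kernel_size=2)        # simulated cheap scan
    lo_up = F.interpolate(lo, scale_factor=2)   # naive upsampling
    loss = F.mse_loss(enhancer(lo_up), hi)      # learn the missing detail
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference time only the cheap low-resolution image is needed.
```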

…Trained on vast databases of known drugs and their properties, models for “de novo molecular design” can figure out which molecular structures are most likely to do which things, and they build accordingly. Verseon, a pharmaceutical company based in California, has created drug candidates in this way, several of which are now being tested on animals and one of which—a precision anticoagulant—is in the first phase of clinical trials…

…If an LLM could be prompted with real (or fabricated) back stories so as to accurately mirror what human participants might say, it could theoretically replace focus groups, or be used as an agent in economics research. LLMs could be trained with various different personas, and their behaviour could then be used to simulate experiments, whose results, if interesting, could later be confirmed with human subjects…

…Elicit, a free online AI tool created by Ought, an American non-profit research lab, can help by using an LLM to comb through mountains of research papers and summarise the important ones much faster than any human could…

… But Dr Girolami warns that whereas AI might be useful to help scientists fill in gaps in knowledge, the models still struggle to push beyond the edges of what is already known. These systems are good at interpolation—connecting the dots—but less so at extrapolation, imagining where the next dot might go.

And there are some hard problems that even the most successful of today’s AI systems cannot yet handle. AlphaFold, for example, does not get all proteins right all the time. Jane Dyson, a structural biologist at the Scripps Research Institute in La Jolla, California, says that for “disordered” proteins, which are particularly relevant to her research, the AI’s predictions are mostly garbage. “It’s not a revolution that puts all of our scientists out of business.” And AlphaFold does not yet explain why proteins fold in the ways they do. Though perhaps the AI “has a theory we just have not been able to grasp yet,” says Dr Kohli.

2. How Xi Jinping is taking control of China’s stock market – Hudson Lockett and Cheng Leng

When Jilin Joinature Polymer made its debut on the Shanghai Stock Exchange on September 20, it became the 200th company to float on China’s domestic markets this year. Collectively they have raised over $40bn, more than double the amount raised on Wall Street and almost half the global total.

Yet the country’s benchmark CSI 300 index is down 14 per cent since January, having fallen by a fifth in 2022. It has underperformed other major markets such as Japan and the US, as worries mount about China’s slowing economic growth and a liquidity crisis in the real estate sector.

The highly unusual situation of a seemingly stagnant market welcoming hundreds of new companies is a consequence of significant policy shifts in Beijing that have ramped up over the past year. President Xi Jinping is intent on boosting investment into sectors that fit with his priorities for control, national security and technological self-sufficiency, and is using stock markets to direct that capital with the aim of reshaping China’s economy…

…Roughly a year ago, Xi told top leaders assembled in Beijing that China needed to mobilise a “new whole-nation system” to accelerate breakthroughs in strategic areas by “strengthening party and state leadership on major scientific and technological innovations, giving full play to the role of market mechanisms”.

That “new” in “new whole-nation system” and the reference to “market mechanisms” distinguish Xi’s vision from that advanced under Mao Zedong, who ruled China from 1949 to 1976. Mao’s original “whole-nation system” entailed Soviet-style top-down economic planning, delivering technological advances including satellites and nuclear weapons, but not prosperity for the masses…

…Whereas Mao shut down China’s stock exchanges, Xi wants to use domestic equity markets to reduce dependence on property and infrastructure development to drive growth. But his “new whole-nation system” prioritises party policy above profit.

This helps explain why the party’s top cadres have been fast-tracking IPOs but remain reluctant to deploy large-scale property and infrastructure stimulus to reinvigorate economic growth. In their eyes, returning to the old playbook would only postpone an inevitable reckoning for debt-laden real estate developers and delay the planned transition to a new Chinese economy.

Key to that shift, Goldman’s Lau says, is getting companies in sectors such as semiconductor manufacturing, biotech and electric vehicles to go public. With stock market investors backing them, they can scale up and help drive the growth in consumer spending needed to fill the gap left behind by China’s downsized property market.

Xi’s administration was already channelling hundreds of billions of dollars from so-called government guidance funds into pre-IPO companies that served the state’s priorities. Now it is speeding up IPOs in Shanghai and Shenzhen while weeding out listing attempts by companies in low-priority sectors through the launch of two intertwined systems.

The nationwide “registration based” listings system, rolled out in February, made China’s formal process for stock market listings more transparent and ended an often lengthy process of official vetting by the China Securities Regulatory Commission for every IPO application.

Just as important is a behind-the-scenes “traffic light” system, in which regulators instruct Chinese investment banks informally on what kinds of companies should actually list. Companies such as beverage makers and café and restaurant chains get a “red light”, in effect prohibiting them from going public, whereas those in strategically important industries get a “green light”…

…Regulators have guarded against that risk by extending “lock-up” periods, during which Chinese investment banks and other institutional investors who participate in IPOs are not permitted to sell stock…

…Regulators have also restricted the ability of company insiders — be they directors, pre-IPO backers or so-called anchor investors — to sell their shares, especially if a company’s shares fall below their issue price or it fails to pay dividends to its shareholders.

The day after these changes were announced, at least 10 companies listed in Shanghai and Shenzhen cancelled planned share disposals by insiders. An analysis of the new rules’ impact by Tepon Securities showed that almost half of all listed companies in China now have at least some shareholders who cannot divest…

…With the market failing to respond in the way it once did, authorities are encouraging a wide range of domestic institutional investors to buy and hold shares in strategic sectors in order to prop up prices. The latest such move came earlier this month, when China’s insurance industry regulator lowered its designated risk level for domestic equities in an attempt to nudge normally cautious insurers to buy more stocks.

Such measures show that Xi’s stated plan to give “full play” to the role of markets comes with an important rider: those markets will take explicit and frequent direction from the party-state…

…Economists say that the tech sectors being favoured for listings by Beijing — semiconductors, EVs, batteries and other high-end manufacturing — are simply not capable of providing the scale of employment opportunity or driving the levels of consumer spending anticipated by top Chinese leaders.

“There’s two problems with focusing on investing in tech,” says Michael Pettis, a finance professor at Peking University and senior fellow at Carnegie China. “One is that tech is very small relative to what came before [from property and infrastructure], and two is that investing in tech doesn’t necessarily make you richer — it’s got to be economically sustainable.”

3. Higher Interest Rates Not Just for Longer, but Maybe Forever – Greg Ip

In their projections and commentary, some officials hint that rates might be higher not just for longer, but forever. In more technical terms, the so-called neutral rate, which keeps inflation and unemployment stable over time, has risen…

…The neutral rate isn’t literally forever, but that captures the general idea. In the long run, neutral is a function of very slow-moving forces: demographics, the global demand for capital, the level of government debt and investors’ assessments of inflation and growth risks.

The neutral rate can’t be observed, only inferred from how the economy responds to particular levels of interest rates. If current rates aren’t slowing demand or inflation, then neutral must be higher and monetary policy isn’t tight.

Indeed, on Wednesday, Fed Chair Jerome Powell allowed that one reason the economy and labor market remain resilient despite rates between 5.25% and 5.5% is that neutral has risen, though he added: “We don’t know that.”

Before the 2007-09 recession and financial crisis, economists thought the neutral rate was around 4% to 4.5%. After subtracting 2% inflation, the real neutral rate was 2% to 2.5%. In the subsequent decade, the Fed kept interest rates near zero, yet growth remained sluggish and inflation below 2%. Estimates of neutral began to drop. Fed officials’ median estimate of the longer-run fed-funds rate—their proxy for neutral—fell from 4% in 2013 to 2.5% in 2019, or 0.5% in real terms.
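
The arithmetic in that paragraph is simple but worth making explicit: the real neutral rate is the nominal estimate minus expected inflation. A trivial sketch (the function name is ours; the 2% default is the Fed’s inflation target):

```python
# Real neutral rate = nominal neutral estimate - expected inflation.
def real_neutral(nominal_pct: float, inflation_pct: float = 2.0) -> float:
    return nominal_pct - inflation_pct

print(real_neutral(4.0), real_neutral(4.5))  # pre-crisis view: 2.0% to 2.5% real
print(real_neutral(2.5))                     # Fed's 2019 median estimate: 0.5% real
```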

As of Wednesday, the median estimate was still 2.5%. But five of 18 Fed officials put it at 3% or higher, compared with just three officials in June and two last December…

…There are plenty of reasons for a higher neutral. One is the fading of post-crisis deleveraging: after the global financial crisis, businesses, households and banks were paying down debt instead of borrowing, reducing the demand for savings while holding down growth and inflation. As the crisis faded, so did the downward pressure on interest rates.

Another is government red ink: federal debt held by the public now stands at 95% of gross domestic product, up from 80% at the start of 2020, while federal deficits, which were under 5% of GDP before the pandemic, are now 6% and projected to keep rising. To get investors to hold so much more debt probably requires paying them more. The Fed bought bonds after the financial crisis and again during the pandemic to push down long-term interest rates. It is now shedding those bondholdings…

…Inflation should not, by itself, affect the real neutral rate. However, before the pandemic the Fed’s principal concern was that inflation would persist below 2%, a situation that makes it difficult to stimulate spending and can lead to deflation, and that is why it kept rates near zero from 2008 to 2015. In the future it will worry more that inflation persists above 2%, and err on the side of higher rates with little appetite for returning to zero.  

Other factors are still pressing down on neutral, such as an aging world population, which reduces demand for homes and capital goods to equip workers. 

4. Confessions of a Viral AI Writer – Vauhini Vara

I kept playing with GPT-3. I was starting to feel, though, that if I did publish an AI-assisted piece of writing, it would have to be, explicitly or implicitly, about what it means for AI to write. It would have to draw attention to the emotional thread that AI companies might pull on when they start selling us these technologies. This thread, it seemed to me, had to do with what people were and weren’t capable of articulating on their own.

There was one big event in my life for which I could never find words. My older sister had died of cancer when we were both in college. Twenty years had passed since then, and I had been more or less speechless about it since. One night, with anxiety and anticipation, I went to GPT-3 with this sentence: “My sister was diagnosed with Ewing sarcoma when I was in my freshman year of high school and she was in her junior year.”

GPT-3 picked up where my sentence left off, and out tumbled an essay in which my sister ended up cured. Its last line gutted me: “She’s doing great now.” I realized I needed to explain to the AI that my sister had died, and so I tried again, adding the fact of her death, the fact of my grief. This time, GPT-3 acknowledged the loss. Then, it turned me into a runner raising funds for a cancer organization and went off on a tangent about my athletic life.

I tried again and again. Each time, I deleted the AI’s text and added to what I’d written before, asking GPT-3 to pick up the thread later in the story. At first it kept failing. And then, on the fourth or fifth attempt, something shifted. The AI began describing grief in language that felt truer—and with each subsequent attempt, it got closer to describing what I’d gone through myself.
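
Vara does not publish her exact setup, but the loop she describes (offer the model your own words, read its continuation, discard it, add more of your own, try again) maps naturally onto a text-completion API. The sketch below uses OpenAI’s current Python client and an illustrative model name; she was working with 2021-era GPT-3, so treat this as the shape of the workflow rather than her actual tooling.

```python
# Sketch of the human-in-the-loop continuation workflow described
# above. Assumptions: OpenAI's v1 Python client, an illustrative
# completion model, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

manuscript = (
    "My sister was diagnosed with Ewing sarcoma when I was in my "
    "freshman year of high school and she was in her junior year."
)

for attempt in range(5):
    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # stand-in; Vara used 2021-era GPT-3
        prompt=manuscript,
        max_tokens=300,
    )
    print(f"--- attempt {attempt + 1} ---")
    print(resp.choices[0].text)
    # The AI's draft is thrown away each round; only the writer's own
    # additions are folded into the prompt for the next attempt.
    addition = input("Add your own words (Enter to just retry): ")
    if addition:
        manuscript += " " + addition
```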

When the essay, called “Ghosts,” came out in The Believer in the summer of 2021, it quickly went viral. I started hearing from others who had lost loved ones and felt that the piece captured grief better than anything they’d ever read. I waited for the backlash, expecting people to criticize the publication of an AI-assisted piece of writing. It never came. Instead the essay was adapted for This American Life and anthologized in Best American Essays. It was better received, by far, than anything else I’d ever written…

…Some readers told me “Ghosts” had convinced them that computers wouldn’t be replacing human writers anytime soon, since the parts I’d written were inarguably better than the AI-generated parts. This was probably the easiest anti-AI argument to make: AI could not replace human writers because it was no good at writing. Case closed.

The problem, for me, was that I disagreed. In my opinion, GPT-3 had produced the best lines in “Ghosts.” At one point in the essay, I wrote about going with my sister to Clarke Beach near our home in the Seattle suburbs, where she wanted her ashes spread after she died. GPT-3 came up with this:

We were driving home from Clarke Beach, and we were stopped at a red light, and she took my hand and held it. This is the hand she held: the hand I write with, the hand I am writing this with.

My essay was about the impossibility of reconciling the version of myself that had coexisted alongside my sister with the one left behind after she died. In that last line, GPT-3 made physical the fact of that impossibility, by referring to the hand—my hand—that existed both then and now. I’d often heard the argument that AI could never write quite like a human precisely because it was a disembodied machine. And yet, here was as nuanced and profound a reference to embodiment as I’d ever read. Artificial intelligence had succeeded in moving me with a sentence about the most important experience of my life…

…Heti and other writers I talked to brought up a problem they’d encountered: When they asked AI to produce language, the result was often boring and cliché-ridden. (In a New York Times review of an AI-generated novella, Death of an Author, Dwight Garner dismissed the prose as having “the crabwise gait of a Wikipedia entry.”) Some writers wanted to know how I’d gotten an early-generation AI model to create poetic, moving prose in “Ghosts.” The truth was that I’d recently been struggling with clichés, too, in a way I hadn’t before. No matter how many times I ran my queries through the most recent versions of ChatGPT, the output would be full of familiar language and plot developments; when I pointed out the clichés and asked it to try again, it would just spout a different set of clichés.

I didn’t understand what was going on until I talked to Sil Hamilton, an AI researcher at McGill University who studies the language of language models. Hamilton explained that ChatGPT’s bad writing was probably a result of OpenAI fine-tuning it for one purpose, which was to be a good chatbot. “They want the model to sound very corporate, very safe, very AP English,” he explained. When I ran this theory by Joanne Jang, the product manager for model behavior at OpenAI, she told me that a good chatbot’s purpose was to follow instructions. Either way, ChatGPT’s voice is polite, predictable, inoffensive, upbeat. Great characters, on the other hand, aren’t polite; great plots aren’t predictable; great style isn’t inoffensive; and great endings aren’t upbeat…

…Sims acknowledged that existing writing tools, including Sudowrite’s, are limited. But he told me it’s hypothetically possible to create a better model. One way, he said, would be to fine-tune a model to write better prose by having humans label examples of “creative” and “uncreative” prose. But it’d be tricky. The fine-tuning process currently relies on human workers who are reportedly paid far less than the US minimum wage. Hiring fine-tuners who are knowledgeable about literature and who can distinguish good prose from bad could be cost-prohibitive, Sims said, not to mention the problem of measuring taste in the first place.

Another option would be to build a model from scratch—also incredibly difficult, especially if the training material were restricted to literary writing. But this might not be so challenging for much longer: Developers are trying to build models that perform just as well with less text.

If such a technology did—could—exist, I wondered what it might accomplish. I recalled Zadie Smith’s essay “Fail Better,” in which she tries to arrive at a definition of great literature. She writes that an author’s literary style is about conveying “the only possible expression of a particular human consciousness.” Literary success, then, “depends not only on the refinement of words on a page, but in the refinement of a consciousness.”

Smith wrote this 16 years ago, well before AI text generators existed, but the term she repeats again and again in the essay—“consciousness”—reminded me of the debate among scientists and philosophers about whether AI is, or will ever be, conscious. That debate fell well outside my area of expertise, but I did know what consciousness means to me as a writer. For me, as for Smith, writing is an attempt to clarify what the world is like from where I stand in it.

That definition of writing couldn’t be more different from the way AI produces language: by sucking up billions of words from the internet and spitting out an imitation. Nothing about that process reflects an attempt at articulating an individual perspective. And while people sometimes romantically describe AI as containing the entirety of human consciousness because of the quantity of text it inhales, even that isn’t true; the text used to train AI represents only a narrow slice of the internet, one that reflects the perspective of white, male, anglophone authors more than anyone else. The world as seen by AI is fatally incoherent. If writing is my attempt to clarify what the world is like for me, the problem with AI is not just that it can’t come up with an individual perspective on the world. It’s that it can’t even comprehend what the world is…

…I joined a Slack channel for people using Sudowrite and scrolled through the comments. One caught my eye, posted by a mother who didn’t like the bookstore options for stories to read to her little boy. She was using the product to compose her own adventure tale for him. Maybe, I realized, these products that are supposedly built for writers will actually be of more interest to readers.

I can imagine a world in which many of the people employed as authors, people like me, limit their use of AI or decline to use it altogether. I can also imagine a world—and maybe we’re already in it—in which a new generation of readers begins using AI to produce the stories they want. If this type of literature satisfies readers, the question of whether it can match human-produced writing might well be judged irrelevant.

When I told Sims about this mother, he mentioned Roland Barthes’ influential essay “The Death of the Author.” In it, Barthes lays out an argument for favoring readers’ interpretations of a piece of writing over whatever meaning the author might have intended…

…Sims thought AI would let any literature lover generate the narrative they want—specifying the plot, the characters, even the writing style—instead of hoping someone else will.

Sims’ prediction made sense to me on an intellectual level, but I wondered how many people would actually want to cocreate their own literature. Then, a week later, I opened WhatsApp and saw a message from my dad, who grows mangoes in his yard in the coastal Florida town of Merritt Island. It was a picture he’d taken of his computer screen, with these words:

Sweet golden mango,
Merritt Island’s delight,
Juice drips, pure delight.

Next to this was ChatGPT’s logo and, underneath, a note: “My Haiku poem!”

The poem belonged to my dad in two senses: He had brought it into existence and was in possession of it. I stared at it for a while, trying to assess whether it was a good haiku—whether the doubling of the word “delight” was ungainly or subversive. I couldn’t decide. But then, my opinion didn’t matter. The literary relationship was a closed loop between my dad and himself…

…It reminded me of something Sims had told me. “Storytelling is really important,” he’d said. “This is an opportunity for us all to become storytellers.” The words had stuck with me. They suggested a democratization of creative freedom. There was something genuinely exciting about that prospect. But this line of reasoning obscured something fundamental about AI’s creation…

…The fact that AI writing technologies seem more useful for people who buy books than for those who make them isn’t a coincidence: The investors behind these technologies are trying to recoup, and ideally redouble, their investment. Selling writing software to writers, in that context, makes about as much sense as selling cars to horses.

5. ‘Defending the portfolio’: buyout firms borrow to prop up holdings – Antoine Gara and Eric Platt

Buyout firms have turned to so-called net asset value (NAV) loans, which use a fund’s investment assets as collateral. They are deploying the proceeds to help pay down the debts of individual companies held by the fund, according to private equity executives and senior bankers and lenders to the industry.

By securing a loan against a larger pool of assets, private equity firms are able to negotiate lower borrowing costs than would be possible if the portfolio company attempted to obtain a loan on its own.

Last month Vista Equity Partners, a private equity investor focused on the technology industry, used a NAV loan against one of its funds to help raise $1bn that it then pumped into financial technology company Finastra, according to five people familiar with the matter.

The equity infusion was a critical step in convincing lenders to refinance Finastra’s maturing debts, which included $4.1bn of senior loans maturing in 2024 and a $1.25bn junior loan due in 2025.

Private lenders ultimately cobbled together a record-sized $4.8bn senior private loan carrying an interest rate above 12 per cent. The deal underscores how some private equity firms are working with lenders to counteract the surge in interest rates over the past 18 months…

…While it is unclear what rate Vista secured on its NAV loan, it is below the 17 per cent rate on a second-lien loan that some lenders had pitched to Finastra earlier this year.

Executives in the buyout industry said NAV loans often carried interest rates 5 to 7 percentage points over short-term rates, or roughly 10.4 to 12.4 per cent today…
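
Those quoted spreads imply straightforward arithmetic, and spelling it out shows why a NAV loan can undercut a portfolio company’s own options. A back-of-envelope sketch (the 5.4% short-term rate is our assumption for late 2023; the spread range and the 17% comparison come from the article):

```python
# NAV loans price at a spread over short-term rates; the FT's sources
# put the spread at 5 to 7 percentage points.
SHORT_TERM_RATE = 5.4  # %, assumed level of US short-term rates in late 2023

def nav_loan_rate(spread_pct: float) -> float:
    return SHORT_TERM_RATE + spread_pct

print(nav_loan_rate(5.0), nav_loan_rate(7.0))  # ~10.4% to ~12.4%, as quoted
print(17.0 - nav_loan_rate(7.0))  # vs the 17% second-lien pitched to Finastra
```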

…The Financial Times has previously reported that firms including Vista, Carlyle Group, SoftBank and European software investor HG Capital have turned to NAV loans to pay out dividends to the sovereign wealth funds and pensions that invest in their funds, or to finance acquisitions by portfolio companies.

The borrowing was spurred by a slowdown in private equity fundraising, takeovers and initial public offerings that has left many private equity firms owning companies for longer than they had expected. They have remained loath to sell at cut-rate valuations, instead hoping the NAV loans will provide enough time to exit their investments more profitably.

But as rising interest rates now burden balance sheets and as debt maturities in 2024 and 2025 grow closer, firms recently have quietly started using the loans more “defensively”, people involved in recent deals told the FT…

…Relying on NAV loans is not without its risks.

Private equity executives who spoke to the FT noted that the borrowings effectively used good investments as collateral to prop up one or two struggling businesses in a fund. They warned that the loans put the broader portfolio at risk and the borrowing costs could eventually hamper returns for the entire fund…

…Executives in the NAV lending industry said that most new loans were still being used to fund distributions to the investors in funds. One lender estimated that 30 per cent of new inquiries for NAV loans were for “defensive” deals.


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com