What We’re Reading (Week Ending 08 October 2023)

Reading helps us learn about the world and it is a really important aspect of investing. The legendary Charlie Munger even goes so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 08 October 2023):

1. The Road to Self-Renewal – John Gardner

We build our own prisons and serve as our own jail keepers, but I’ve concluded that our parents and the society at large have a hand in building our prisons. They create roles for us – and self-images – that hold us captive for a long time. The individual who is intent on self-renewal will have to deal with ghosts of the past – the memory of earlier failures, the remnants of childhood dramas and rebellions, accumulated grievances and resentments that have long outlived their cause. Sometimes people cling to the ghosts with something almost approaching pleasure, but the hampering effect on growth is inescapable. As Jim Whitaker, who climbed Mount Everest, said, “You never conquer the mountain. You only conquer yourself.”

The more I see of human lives, the more I believe the business of growing up is much longer drawn out than we pretend. If we achieve it in our 30s, even our 40s, we’re doing well…

…The things you learn in maturity aren’t simple things such as acquiring information and skills. You learn not to engage in self-destructive behavior. You learn not to burn up energy in anxiety. You discover how to manage your tensions. You learn that self-pity and resentment are among the most toxic of drugs. You find that the world loves talent but pays off on character.

You come to understand that most people are neither for you nor against you; they are thinking about themselves. You learn that no matter how hard you try to please, some people in this world are not going to love you, a lesson that is at first troubling and then really quite relaxing…

…Of course failures are a part of the story, too. Everyone fails. When Joe Louis was world heavyweight boxing champion, he said, “Everyone has to figure to get beat some time.” The question isn’t did you fail, but did you pick yourself up and move ahead. And there is one other little question: “Did you collaborate in your own defeat?” A lot of people do. Learn not to.

One of the enemies of sound, lifelong motivation is a rather childish conception we have of the kind of concrete, describable goal toward which all of our efforts drive us. We want to believe that there is a point at which we can feel we have arrived. We want a scoring system that tells us when we’ve piled up enough points to count ourselves successful.

So you scramble and sweat and climb to reach what you thought was the goal. When you get to the top you stand up and look around, and chances are you feel a little empty. Maybe more than a little empty. You may wonder whether you climbed the wrong mountain.

But the metaphor is all wrong. Life isn’t a mountain that has a summit. Nor is it, as some suppose, a riddle that has an answer. Nor a game that has a final score.

Life is an endless unfolding and, if we wish it to be, an endless process of self-discovery, an endless and unpredictable dialogue between our own potentialities and the life situations in which we find ourselves. By potentialities I mean not just success as the world measures success, but the full range of one’s capacities for learning, sensing, wondering, understanding, loving and aspiring…

…There’s something I know about you that you may or may not know about yourself. You have within you more resources of energy than have ever been tapped, more talent than has ever been exploited, more strength than has ever been tested, more to give than you have ever given…

…There is no perfection of techniques that will substitute for the lift of spirit and heightened performance that comes from strong motivation. The world is moved by highly motivated people, by enthusiasts, by men and women who want something very much or believe very much…

…If I may offer you a simple maxim, “Be interested.” Everyone wants to be interesting but the vitalizing thing is to be interested. Keep a sense of curiosity. Discover new things. Care. Risk failure. Reach out…

…We cannot dream of a Utopia in which all arrangements are ideal and everyone is flawless. Life is tumultuous – an endless losing and regaining of balance, a continuous struggle, never an assured victory. Nothing is ever finally safe. Every important battle is fought and refought. You may wonder if such a struggle, endless and of uncertain outcome, isn’t more than humans can bear. But all of history suggests that the human spirit is well fitted to cope with just that kind of world…

…Meaning is not something you stumble across, like the answer to a riddle or the prize in a treasure hunt. Meaning is something you build into your life. You build it out of your own past, out of your affections and loyalties, out of the experience of humankind as it is passed on to you, out of your own talent and understanding, out of the things you believe in, out of the things and people you love, out of the values for which you are willing to sacrifice something. The ingredients are there. You are the only one who can put them together into that unique pattern that will be your life. Let it be a life that has dignity and meaning for you. If it does, then the particular balance of success or failure is of less account.

2. AI can help to speed up drug discovery — but only if we give it the right data – Marissa Mock, Suzanne Edavettal, Christopher Langmead & Alan Russell

There is a troubling crunch point in the development of drugs made from proteins. Fewer than 10% of such drug candidates succeed in clinical trials. Failure at this late stage of development costs between US$30 million and $310 million per clinical trial, potentially costing billions of dollars per drug, and wastes years of research while patients wait for a treatment.

More protein drugs are needed. The large size and surface area of proteins mean that medicines made from them have more ways to interact with target molecules, including proteins in the body that are involved in disease, compared with drugs based on smaller molecules. Protein-based drugs therefore have broad potential as therapeutics.

For instance, protein drugs such as nivolumab and pembrolizumab can prevent harmful interactions between tumour proteins and receptor proteins on immune cells that would deactivate the immune system. Small-molecule drugs, by contrast, are not big enough to come between the two proteins and block the interaction…

…Because proteins can have more than one binding domain, therapeutics can be designed that attach to more than one target — for instance, to both a cancer cell and an immune cell. Bringing the two together ensures that the cancer cell is destroyed.

To unblock the drug-development bottleneck, computer models of how protein drugs might act in the body must be improved. Researchers need to be able to judge the dose that drugs will work at, how they will interact with the body’s own proteins, whether they might trigger an unwanted immune response, and more.

Making better predictions about future drug candidates requires gathering large amounts of data about why previous ones succeeded or failed during clinical trials. Data on many hundreds or thousands of proteins are needed to train effective machine-learning models. But even the most productive biopharmaceutical companies started clinical trials for just 3–12 protein therapeutics per year, on average, between 2011 and 2021 (see go.nature.com/3rclacp). Individual pharmaceutical companies, such as ours (Amgen in Thousand Oaks, California), cannot amass enough data alone.

Incorporation of artificial intelligence (AI) into drug-development pipelines can help. It offers an opportunity for competing companies to merge data while protecting their commercial interests. Doing so can improve developers’ predictive abilities, benefiting both the firms and the patients…

… Until about five years ago, developing a candidate required several cycles of protein engineering to turn a natural protein into a working drug. Proteins were selected for a desired property, such as an ability to bind to a particular target molecule. Investigators made thousands of proteins and rigorously tested them in vitro before selecting one lead candidate for clinical trials. Failure at any stage meant starting the process from scratch.

Biopharmaceutical companies are now using AI to speed up drug development. Machine-learning models are trained using information about the amino-acid sequence or 3D structure of previous drug candidates, and about properties of interest. These characteristics can be related to efficacy (which molecules the protein binds to, for instance), safety (does it bind to unwanted molecules or elicit an immune response?) or ease of manufacture (how viscous is the drug at its working concentration?).

Once trained, the AI model recognizes patterns in the data. When given a protein’s amino-acid sequence, the model can predict the properties that the protein will have, or design an ‘improved’ version of the sequence that it estimates will confer a desired property. This saves time and money trying to engineer natural proteins to have properties, such as low viscosity and a long shelf life, that are essential for drugs. As predictions improve, it might one day become possible for such models to design working drugs from scratch…
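To make that workflow concrete, here is a minimal, hypothetical sketch of a "sequence in, predicted property out" model. The amino-acid-composition featurisation, the random-forest model and all of the training data are illustrative assumptions for demonstration, not the models or data the authors describe at Amgen.

```python
# Illustrative sketch only: a toy "sequence in, property out" model.
# The featurisation (amino-acid composition) and the random-forest model
# are assumptions for demonstration, not the approach used by Amgen.
from collections import Counter
import numpy as np
from sklearn.ensemble import RandomForestRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def featurize(sequence: str) -> np.ndarray:
    """Represent a protein as its amino-acid composition (20 frequencies)."""
    counts = Counter(sequence)
    return np.array([counts.get(aa, 0) / len(sequence) for aa in AMINO_ACIDS])

# Hypothetical training data: sequences of past candidates and a measured
# property of interest (e.g. viscosity at the working concentration).
train_sequences = ["MKTAYIAKQR", "GAVLIPFMWY", "STCNQHKRDE", "MMKKTTAAYY"]
train_viscosity = [12.0, 35.5, 8.2, 19.7]  # made-up values, arbitrary units

X = np.stack([featurize(s) for s in train_sequences])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, train_viscosity)

# Once trained, the model predicts the property for a new candidate sequence.
new_candidate = "MKTAYIGAVL"
predicted = model.predict(featurize(new_candidate).reshape(1, -1))[0]
print(f"Predicted viscosity for {new_candidate}: {predicted:.1f}")
```

In practice such models would be trained on the sequences, structures and measured properties of many hundreds or thousands of past candidates, which is the data gap the rest of the article is about.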

…In short, this fusion of cutting-edge life science, high-throughput automation and AI — known as generative biology — has drastically improved drug developers’ ability to predict a protein’s stability and behaviour in solution. Our company now spends 60% less time than it did five years ago on developing a candidate drug up to the clinical-trial stage…

…Here’s how federated learning could work for biopharmaceutical companies. A trusted party — perhaps a technology firm or a specialized consulting company — would maintain a ‘global’ model, which could initially be trained using publicly available data. That party would send the global model to each participating biopharmaceutical company, which would update it using the firm’s own data to create a new ‘local’ model. The local models would be aggregated by the trusted party to produce an updated global model. This process could be repeated until the global model essentially stopped learning new patterns…
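The loop described above is essentially federated averaging: only model parameters move between the trusted party and the participating companies, never the underlying data. Below is a minimal sketch of that loop under toy assumptions (a linear model and synthetic "company" datasets); it illustrates the scheme, not any firm's actual setup.

```python
# Minimal federated-averaging sketch of the scheme described above:
# a trusted party holds a global model; each company refines it on its own
# data and returns only updated weights, which are averaged. The linear
# model and synthetic data are placeholders, not a real biopharma setup.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=50):
    """One company's training pass: gradient descent on its private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only weights leave the company, never X or y

# Three "companies", each with private (X, y) data drawn from the same truth.
true_w = np.array([1.5, -2.0, 0.5])
companies = []
for _ in range(3):
    X = rng.normal(size=(40, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=40)
    companies.append((X, y))

global_w = np.zeros(3)  # global model, e.g. pre-trained on public data
for round_ in range(10):  # repeat until the global model stops improving
    local_ws = [local_update(global_w, X, y) for X, y in companies]
    global_w = np.mean(local_ws, axis=0)  # trusted party aggregates

print("Learned global weights:", np.round(global_w, 2))
```

The design point that matters is what crosses the company boundary: in this scheme it is only a weight vector, which is what lets competitors pool learning without pooling proprietary data.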

…With active learning, an algorithm determines the training data that would be needed to make more-reliable predictions about this type of unusual amino-acid sequence. Rather than developers having to guess what extra data they need to generate to improve their model, they can build and analyse only proteins with the requested amino-acid sequences.
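One common way to implement this is uncertainty sampling: score the candidate sequences the current model is least confident about and synthesise those next. The sketch below uses disagreement across an ensemble's trees as the uncertainty signal; the features and data are toy placeholders, and real systems would use richer protein representations.

```python
# Illustrative active-learning step (uncertainty sampling): among candidate
# sequences the lab could synthesise next, request the ones the current model
# is least certain about. Uncertainty here is the spread across an ensemble's
# per-tree predictions; the data and featurisation are toy assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Pretend features for proteins already measured (X_train) and for
# not-yet-made candidate designs (X_candidates).
X_train = rng.normal(size=(30, 20))
y_train = rng.normal(size=30)
X_candidates = rng.normal(size=(200, 20))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Per-tree predictions; high standard deviation = the ensemble disagrees,
# so measuring that candidate should teach the model the most.
per_tree = np.stack([t.predict(X_candidates) for t in model.estimators_])
uncertainty = per_tree.std(axis=0)

to_make_next = np.argsort(uncertainty)[-5:]  # top-5 most informative designs
print("Candidate indices to synthesise and assay next:", to_make_next)
```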

Active learning is already being used by biopharmaceutical companies. It should now be combined with federated learning to improve predictions — particularly for more-complex properties, such as how a protein’s sequence or structure determines its interactions with the immune system.

3. China Isn’t Shifting Away From the Dollar or Dollar Bonds – Brad W. Setser

There is a widespread perception that China has responded to an era of heightened geostrategic competition and growing economic rivalry with the United States by shifting its foreign exchange reserves out of the dollar…

…It sort of makes sense – China does worry about the weaponization of the dollar and the reach of U.S. financial sanctions. And why would a rising power like China want to fund the Treasury of a country that China views as standing in the way of the realization of the China dream (at least in the Pacific)?

It also seems to be in the official U.S. data – China’s reported holdings of U.S. Treasuries have slid pretty continuously since 2012, with a further down leg in the last 18 months…

…Yet, that is not what I believe is actually happening.

Strange as it may seem, the best evidence available suggests that the dollar share in China’s reserves has been broadly stable since 2015 (if not a bit before). If a simple adjustment is made for Treasuries held by offshore custodians like Belgium’s Euroclear, China’s reported holdings of U.S. assets look to be basically stable at between $1.8 and $1.9 trillion. After netting out China’s substantial holdings of U.S. equities, China’s holdings of U.S. bonds, after adjusting for China’s suspected Euroclear custodial account, have consistently been around 50 percent of China’s reported reserves. Nothing all that surprising.

The bulk of China’s post-2012 efforts to diversify its reserves have come not from shifting reserves out of the dollar, but rather from using what could have been reserves to support the Belt and Road and the outward expansion of Chinese firms (see Box 6 of SAFE’s annual report, or my June blog). Those non-reserve foreign assets, strangely enough, seem to be mostly in dollars even if they aren’t invested in the United States; almost all the documented Belt and Road project loans, for example, have been in dollars.

There are, obviously, two sources of data about China’s reserves – China’s own (limited) disclosure, and the U.S. data on foreign holdings of U.S. securities. Both broadly tell the same story – one at odds with most press coverage of the slide in China’s formal reserves.

China has disclosed that it reduced the dollar share of its reported reserves from 79 percent in 2005 to 58 percent in 2015. It also disclosed that the dollar share in 2017 remained at 58 percent (see SAFE’s 2021 annual report). China’s disclosed dollar share is just below the global dollar share in the IMF’s comprehensive data set…

…Journalists the world over generally know only one part of the U.S. Treasury International Capital (TIC) data – the table showing foreign holdings of U.S. Treasuries in U.S. custodians (FRBNY, State Street, Bank of New York, J.P. Morgan). That table reports the current market value of China’s Treasuries in U.S. custodians, so the recent fall reflects, among other things, the general sell-off in long-term U.S. Treasuries and the resulting slide in the market value of Treasuries purchased in years past.

That table, however, suffers from three other limitations:

One, Treasuries held by non-U.S. custodians wouldn’t register as “China” in the U.S. data. The two biggest custodians are Euroclear, which is based in Belgium (Russia kept its euro reserves there), and Clearstream, which is based in Luxembourg.

And two, the table for Treasuries (obviously) doesn’t include China’s holdings of U.S. assets other than Treasuries – and China actually has a large portfolio of Agency bonds and U.S. equities (they appear in another, more difficult-to-use data table).

The U.S. data would also miss Treasuries and other U.S. assets that have been handed over to third parties to manage – and it is well known that SAFE has accounts at the large global bond funds, several hedge funds (including Bridgewater) and in several private equity funds…

…China historically has been a big buyer of Agencies: few now remember, but China held more Agencies than Treasuries going into the global financial crisis (see the Survey data for end-June 2008).

After the Freddie and Fannie scare (read Paulson’s memoirs) China let its Agency portfolio run off, and China shied away from Agencies during the years when the Fed was a big buyer. But with the Federal Reserve stepping back from the Agency market once it stopped buying U.S. assets, the yield on Agencies soared – and China very clearly moved back into the Agency market.

The Federal Reserve staff turns the reported custodial holdings into an estimate of actual purchases by adjusting for mark-to-market changes in bond valuation. In 2022, China bought $84 billion of Agencies. It added another $18 billion in the first 6 months of 2023 – so purchases of over $100 billion in the last 18 months of data. After adjusting for Belgium, China is estimated to have sold only about $40 billion in Treasuries over the last 18 months (it bought around $40 billion in 2022, and reduced its holdings by around $80 billion in the first 6 months of 2023 – with most of the reduction coming in January 2023)…
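The adjustment Setser refers to is conceptually simple: the change in reported (market-value) holdings mixes actual purchases with swings in bond prices, so estimated net purchases are the change in holdings minus the valuation effect. A stylised illustration with made-up numbers (not China's actual figures):

```python
# Stylised illustration (made-up numbers) of separating valuation changes
# from actual purchases, the adjustment described above. Real estimates use
# detailed data on the maturity mix of the holdings; this is a toy version.
holdings_start = 250.0     # $bn, market value of a bond portfolio at start
holdings_end = 245.0       # $bn, market value reported at the end of period
price_change = -0.06       # -6%: bond prices fell as yields rose

# Valuation effect: what existing holdings would now be worth with no trades.
valuation_effect = holdings_start * price_change          # -15.0 $bn

# Estimated net purchases = change in reported holdings - valuation effect.
estimated_purchases = (holdings_end - holdings_start) - valuation_effect
print(f"Reported holdings fell by {holdings_start - holdings_end:.1f} $bn, "
      f"yet estimated net purchases are {estimated_purchases:+.1f} $bn")
# Reported holdings fall by 5 $bn even though roughly 10 $bn was bought.
```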

…Bottom line: the only interesting evolution in China’s reserves in the past six years has been the shift into Agencies. That has resulted in a small reduction in China’s Treasury holdings – but it also shows that it is a mistake to equate a reduction in China’s Treasury holdings with a reduction in the share of China’s reserves held in U.S. bonds or the U.S. dollar.

4. Mark Zuckerberg on Threads, the future of AI, and Quest 3 – Alex Heath and Nilay Patel

A lot of the conversation around social media is around information and the utility aspect, but I think an equally important part of designing any product is how it makes you feel, right? What’s the kind of emotional charge of it, and how do you come away from that feeling?

I think Instagram is generally kind of on the happier end of the spectrum. I think Facebook is sort of in the middle because it has happier moments, but then it also has sort of harder news and things like that that I think tend to just be more critical and maybe, you know, make people see some of the negative things that are going on in the world. And I think Twitter indexes very strongly on just being quite negative and critical.

I think that that’s sort of the design. It’s not that the designers wanted to make people feel bad. I think they wanted to have a maximum kind of intense debate, right? Which I think that sort of creates a certain emotional feeling and load. I always just thought you could create a discussion experience that wasn’t quite so negative or toxic. I think in doing so, it would actually be more accessible to a lot of people. I think a lot of people just don’t want to use an app where they come away feeling bad all the time, right? I think that there’s a certain set of people who will either tolerate that because it’s their job to get that access to information or they’re just warriors in that way and want to be a part of that kind of intellectual combat. 

But I don’t think that that’s the ubiquitous thing, right? I think the ubiquitous thing is people want to get fresh information. I think there’s a place for text-based, right? Even when the world is moving toward richer and richer forms of sharing and consumption, text isn’t going away. It’s still going to be a big thing, but I think how people feel is really important.

So that’s been a big part of how we’ve tried to emphasize and develop Threads. And, you know, over time, if you want it to be ubiquitous, you obviously want it to be welcoming to everyone. But I think how you seed the networks and the culture that you create there, I think, ends up being pretty important for how they scale over time.

Where with Facebook, we started with this real name culture, and it was grounded to your college email address. You know, it obviously hasn’t been grounded to your college email address for a very long time, but I think the kind of real authentic identity aspect of Facebook has continued and continues to be an important part of it.

So I think how we set the culture for Threads early on in terms of being a more positive, friendly place for discussion will hopefully be one of the defining elements for the next decade as we scale it out. We obviously have a lot of work to do, but I’d say it’s off to quite a good start. Obviously, there’s the huge spike, and then, you know, not everyone who tried it out originally is going to stick around immediately. But I mean, the monthly actives and weeklies, I don’t think we’re sharing stats on it yet…

…This hasn’t happened yet with Threads, but you’re eventually going to hook it into ActivityPub, which is this decentralized social media protocol. It’s kind of complicated in layman’s terms, but essentially, people run their own servers. So, instead of having a centralized company run the whole network, people can run their own fiefdoms. It’s federated. So Threads will eventually hook into this. This is the first time you’ve done anything really meaningful in the decentralized social media space.

Yeah, we’re building it from the ground up. I’ve always believed in this stuff.

Really? Because you run the largest centralized social media platform. 

But I mean, it didn’t exist when we got started, right? I’ve had our team at various times do the thought experiment of like, “Alright, what would it take to move all of Facebook onto some kind of decentralized protocol?” And it’s like, “That’s just not going to happen.” There’s so much functionality that is on Facebook that it’s way too complicated, and you can’t even support all the different things, and it would just take so long, and you’d not be innovating during that time. 

I think that there’s value in being on one of these protocols, but it’s not the only way to deliver value, so the opportunity cost of doing this massive transition is kind of this massive thing. But when you’re starting from scratch, you can just design it so it can work with that. And we want to do that with this because I thought that that was one of the interesting things that’s evolving around this kind of Twitter competitive space, and there’s a real ecosystem around that, and I think it’s interesting.

What does that mean for a company like yours long term if people gravitate more toward these decentralized protocols over time? Where does a big centralized player fit into that picture?

Well, I guess my view is that the more that there’s interoperability between different services and the more content can flow, the better all the services can be. And I guess I’m just confident enough that we can build the best one of the services, that I actually think that we’ll benefit and we’ll be able to build better quality products by making sure that we can have access to all of the different content from wherever anyone is creating it.

And I get that not everyone is going to want to use everything that we build. I mean, that’s obviously the case when it’s like, “Okay, we have 3 billion people using Facebook,” but not everyone wants to use one product, and I think making it so that they can use an alternative but can still interact with people on the network will make it so that that product also is more valuable.

I think that can be pretty powerful, and you can increase the quality of the product by making it so that you can give people access to all the content, even if it wasn’t created on that network itself. So, I don’t know. I mean, it’s a bet.

There’s kind of this funny counterintuitive thing where I just don’t think that people like feeling locked into a system. So, in a way, I actually think people will feel better about using our products if they know that they have the choice to leave.

If we make that super easy to happen… And obviously, there’s a lot of competition, and we do “download your data” on all our products, and people can do that today. But the more that’s designed in from scratch, I think it really just gives creators, for example, the sense that, “Okay, I have…” 

Agency.

Yeah, yeah. So, in a way, that actually makes people feel more confident investing in a system if they know that they have freedom over how they operate. Maybe for phase one of social networking, it was fine to have these systems that people felt a little more locked into, but I think for the mature state of the ecosystem, I don’t think that that’s going to be where it goes.

I’m pretty optimistic about this. And then if we can build Threads on this, then maybe over time, as the standards get more built out, it’s possible that we can spread that to more of the stuff that we’re doing. We’re certainly working on interop with messaging, and I think that’s been an important thing. The first step was kind of getting interop to work between our different messaging systems. 

Right, so they can talk to each other. 

Yeah, and then the first decision there was, “Okay, well, WhatsApp — we have this very strong commitment to encryption. So if we’re going to interop, then we’re either going to make the others encrypted, or we’re going to have to decrypt WhatsApp.” And it’s like, “Alright, we’re not going to decrypt WhatsApp, so we’re going to go down the path of encrypting everything else,” which we’re making good progress on.

But that basically has just meant completely rewriting Messenger and Instagram direct from scratch. So you’re basically going from a model where all the messages are stored in the cloud to completely inverting the architecture where now all the messages are stored locally and just the way…

While the plane’s in the air.

Yeah, that’s been a kind of heroic effort by just like a hundred or more people over a multiyear period. And we’re basically getting to the point where it’s starting to roll out now.

Now that we’re at the point where we can do encryption across those apps, we can also start to support more interop.

With other services that Meta doesn’t own?

Well, I mean, the plan was always to start with interop between our services, but then get to that. We’re starting to experiment with that, too…

I think Llama and the Llama 2 release has been a big thing for startups because it is so free or just easy to use and access. I’m wondering, was there ever debate internally about “should we take the closed route?” You know, you’ve spent so much money on all this AI research. You have one of the best AI labs in the world, I think it’s safe to say. You have huge distribution — why not keep it all to yourself? You could have done that.

You know, the biggest arguments in favor of keeping it closed were generally not proprietary advantage.

Or competitive advantage?

No, it wasn’t competitive advantage. There was a fairly intense debate around this.

Did you have to be dissuaded? Did you know we have to have it open?

My bias was that I thought it should be open, but I thought that there were novel arguments on the risks, and I wanted to make sure we heard them all out, and we did a very rigorous process. We’re training the next version of Llama now, and I think we’ll probably have the same set of debates around that and how we should release it. And again, I sort of, like, lean toward wanting to do it open source, but I think we need to do all the red teaming and understand the risks before making a call.

But the two big arguments that people had against making Llama 2 open were one: it takes a lot of time to prepare something to be open. Our main business is basically building consumer products, right? And that’s what we’re launching at Connect. Llama 2 is not a consumer product. It’s the engine or infrastructure that powers a bunch of that stuff. But there was this argument — especially after we did this partial release of Llama 1 and there was like a lot of stir around that, then people had a bunch of feedback and were wondering when we would incorporate that feedback — which is like, “Okay, well, if we release Llama 2, is that going to distract us from our real job, which is building the best consumer products that we can?” So that was one debate. I think we got comfortable with that relatively quickly. And then the much bigger debate was around the risk and safety.

It’s like, what is the framework for how you measure what harm can be done? How do you compare that to other things? So, for example, someone made this point, and this was actually at the Senate event. Someone made this point that’s like, “Okay, we took Llama 2, and our engineers in just several days were able to take away the safeguards and ask it a question — ‘Can you produce anthrax?’ — and it answered.” On its face, that sounds really bad, right? That’s obviously an issue that you can strip off the safeguards until you think about the fact that you can actually just Google how to make anthrax and it shows up on the first page of the results in five seconds, right?

So there’s a question when you’re thinking through these things about what is the actual incremental risk that is created by having these different technologies. We’ve seen this in protecting social media as well. If you have, like, Russia or some country trying to create a network of bots or, you know, inauthentic behavior, it’s not that you’re ever going to stop them from doing it. It’s an economics problem. You want to make it expensive enough for them to do that that it is no longer their best strategy because it’s cheaper for them to go try to exploit someone else or something else, right? And I think the same is true here. So, for the risk on this, you want to make it so that it’s sufficiently expensive that it takes engineers several days to dismantle whatever safeguards we built in instead of just Googling it.

You feel generally good directionally with the safety work on that?

For Llama 2, I think that we did leading work on that. I think the white paper around Llama 2, where we basically outlined all the different metrics and all the different things that we did, and we did internal red teaming and external red teaming, and we’ve got a bunch of feedback on it. So, because we went into this knowing that nothing is going to be foolproof — some bad actor is going to be able to find some way to exploit it — we really knew that we needed to create a pretty high bar on that. So, yeah, I felt good about that for Llama 2, but it was a very rigorous process…

… But one of the things that I think is interesting is these AI problems, they’re so tightly optimized that having the AI basically live in the environment that you’re trying to get it to get better at is pretty important. So, for example, you have things like ChatGPT — they’re just in an abstract chat interface. But getting an AI to actually live in a group chat, for example, it’s actually a completely different problem because now you have this question of, “Okay, when should the AI jump in?”

In order to get an AI to be good at being in a group chat, you need to have experience with AIs and group chats, which, even though Google or OpenAI or other folks may have a lot of experience with other things, that kind of product dynamic of having the actual experience that you’re trying to deliver the product in, I think that’s super important.

Similarly, one of the things that I’m pretty excited about: I think multimodality is a pretty important interaction, right? A lot of these things today are like, “Okay, you’re an assistant. I can chat with you in a box. You don’t change, right? It’s like you’re the same assistant every day,” and I think that’s not really how people tend to interact, right? In order to make things fresh and entertaining, even the apps that we use, they change, right? They get refreshed. They add new features.

And I think that people will probably want the AIs that they interact with, I think it’ll be more exciting and interesting if they do, too. So part of what I’m interested in is this isn’t just chat, right? Chat will be where most of the interaction happens. But these AIs are going to have profiles on Instagram and Facebook, and they’ll be able to post content, and they’ll be able to interact with people and interact with each other, right?

There’s this whole interesting set of flywheels around how that interaction can happen and how they can sort of evolve over time. I think that’s going to be very compelling and interesting, and obviously, we’re kind of starting slowly on that. So we wanted to build it so that it kind of worked across the whole Meta universe of products, including having them be able to, in the near future, be embodied as avatars in the metaverse, right?

So you go into VR and you have an avatar version of the AI, and you can talk to them there. I think that’s gonna be really compelling, right? It’s, at a minimum, creating much better NPCs and experiences when there isn’t another actual person who you want to play a game with. You can just have AIs that are much more realistic and compelling to interact with.

But I think having this crossover where you have an assistant or you have someone who tells you jokes and cracks you up and entertains you, and then they can show up in some of your metaverse worlds and be able to be there as an avatar, but you can still interact with them in the same way — I think it’s pretty cool.

Do you think the advent of these AI personas that are way more intelligent will accelerate interest in the metaverse and in VR?

I think that all this stuff makes it more compelling. It’s probably an even bigger deal for smart glasses than for VR.

You need something. You need a kind of visual or a voice control?

When I was thinking about what would be the key features for smart glasses, I kind of thought that we were going to get holograms in the world, and that was one. That’s kind of like augmented reality. But then there was always some vague notion that you’d have an assistant that could do something.

I thought that things like Siri or Alexa were very limited. So I was just like, “Okay, well, over the time period of building AR glasses, hopefully the AI will advance.” And now it definitely has. So now I think we’re at this point where it may actually be the case that for smart glasses, the AI is compelling before the holograms and the displays are, which is where we got to with the new version of the Ray-Bans that we’re shipping this year, right? When we started working on the product, all this generative AI stuff hadn’t happened yet.

So we actually started working on the product just as an improvement over the first generation so that the photos are better, the audio is a lot better, the form factor is better. It’s a much more refined version of the initial product. And there’s some new features, like you can livestream now, which is pretty cool because you can livestream what you’re looking at.

But it was only over the course of developing the product that we realized that, “Hey, we could actually put this whole generative AI assistant into it, and you could have these glasses that are kind of stylish Ray-Ban glasses, and you could be talking to AI all throughout the day about different questions you have.”

This isn’t in the first software release, but sometime early next year, we’re also going to have this multimodality. So you’re gonna be able to ask the AI, “Hey, what is it that I’m looking at? What type of plant is that? Where am I? How expensive is this thing?”

Because it has a camera built into the glasses, so you can look at something like, “Alright, you’re filming with some Canon camera. Where do I get one of those?” I think that’s going to be very interesting.

Again, this is all really novel stuff. So I’m not pretending to know exactly what the key use cases are or how people are going to use that. But smart glasses are very powerful for AI because, unlike having it on your phone, glasses, as a form factor, can see what you see and hear what you hear from your perspective.

So if you want to build an AI assistant that really has access to all of the inputs that you have as a person, glasses are probably the way that you want to build that. It’s this whole new angle on smart glasses that I thought might materialize over a five- to 10-year period but, in this odd twist of the tech industry, I think actually is going to show up maybe before even super high-quality holograms do…

It seems like you all, based on my demos, still primarily think of it as a gaming device. Is that fair? That the main use cases for Quest 3 are going to be these kinds of “gaming meets social.” So you’ve got Roblox now.

I think social is actually the first thing, which is interesting because Quest used to be primarily gaming. And now, if you look at what experiences are people spending the most time in, it’s actually just different social metaverse-type experiences, so things like Rec Room, VRChat, Horizon, Roblox. Even with Roblox just kind of starting to grow on the platform, social is already more time spent than gaming use cases. It’s different if you look at the economics because people pay more for games. Whereas social kind of has that whole adoption curve thing that I talked about before, where, first, you have to kind of build out the big community, and then you can enable commerce and kind of monetize it over time.

This is sort of my whole theory for VR. People looked at it initially as a gaming device. I thought, “Hey, I think this is a new computing platform overall. Computing platforms tend to be good for three major things: gaming, social and communication, and productivity. And I’m pretty sure we can nail the social one. If we can find the right partners on productivity and if we can support the gaming ecosystem, then I think that we can help this become a big thing.”

Broadly, that’s on track. I thought it was going to be a long-term project, but I think the fact that social has now overtaken gaming as the thing that people are spending the most time on is an interesting software evolution in how they’re used. But like you’re saying: entertainment, social, gaming — still the primary things. Productivity, I think, still needs some time to develop…

I reported on some comments you made to employees after Apple debuted the Vision Pro, and you didn’t seem super fazed by it. It seemed like it didn’t bother you as much as it maybe could have. I have to imagine if they released a $700 headset, we’d be having a different conversation. But they’re shipping low volume, and they’re probably three to four years out from a general, lower-tier type release that’s at any meaningful scale. So is it because the market’s yours foreseeably then for a while?

Apple is obviously very good at this, so I don’t want to be dismissive. But because we’re relatively newer to building this, the thing that I wasn’t sure about is when Apple released a device, were they just going to have made some completely new insight or breakthrough that just made our effort…

Blew your R&D up?

Yeah, like, “Oh, well, now we need to go start over.” I thought we were doing pretty good work, so I thought that was unlikely, but you don’t know for sure until they show up with their thing. And there was just nothing like that.

There are some things that they did that are clever. When we actually get to use it more, I’m sure that there are going to be other things that we’ll learn that are interesting. But mostly, they just chose a different part of the market to go in.

I think it makes sense for them. I think that they sell… it must be 15 to 20 million MacBooks a year. And from their perspective, if they can replace those MacBooks over time with things like Vision Pro, then that’s a pretty good business for them, right? It’ll be many billions of dollars of revenue, and I think they’re pretty happy selling 20 million or 15 million MacBooks a year.

But we play a different game. We’re not trying to sell devices at a big premium and make a ton of money on the devices. You know, going back to the curve that we were talking about before, we want to build something that’s great, get it to be so that people use it and want to use it like every week and every day, and then, over time, scale it to hundreds of millions or billions of people.

If you want to do that, then you have to innovate, not just on the quality of the device but also in making it affordable and accessible to people. So I do just think we’re playing somewhat different games, and that makes it so that over time, you know, they’ll build a high-quality device in the zone that they’re focusing on, and it may just be that these are in fairly different spaces for a long time, but I’m not sure. We’ll see as it goes.

From the developer perspective, does it help you to have developers building on… you could lean too much into the Android versus iOS analogy here, but yeah, where do you see that going? Does Meta really lean into the Android approach and you start licensing your software and technology to other OEMs?

I’d like to have this be a more open ecosystem over time. My theory on how these computing platforms evolve is there will be a closed integrated stack and a more open stack, and there have been in every generation of computing so far. 

The thing that’s actually not clear is which one will end up being the more successful, right? We’re kind of coming off of the mobile one now, where Apple has truly been the dominant company. Even though there are technically more Android phones, there’s way more economic activity, and the center of gravity for all this stuff is clearly on iPhones.

In a lot of the most important countries for defining this, I think iPhone has a majority and growing share, and I think it’s clearly just the dominant company in the space. But that wasn’t true in computers and PCs, so our approach here is to focus on making it as affordable as possible. We want to be the open ecosystem, and we want the open ecosystem to win.

So I think it is possible that this will be more like PCs than like mobile, where maybe Apple goes for a kind of high-end segment, and maybe we end up being the kind of the primary ecosystem and the one that ends up serving billions of people. That’s the outcome that we’re playing for…

That’s why I asked. Because I think people are wondering, “Where’s all this going?” 

At the end of the day, I’m quite optimistic about both augmented and virtual reality. I think AR glasses are going to be the thing that’s like mobile phones that you walk around the world wearing.

VR is going to be like your workstation or TV, which is when you’re like settling in for a session and you want a kind of higher fidelity, more compute, rich experience, then it’s going to be worth putting that on. But you’re not going to walk down the street wearing a VR headset. At least I hope not — that’s not the future that we’re working toward.

But I do think that there’s somewhat of a bias — maybe this is in the tech industry or maybe overall — where people think that the mobile phone one, the glasses one, is the only one of the two that will end up being valuable.

But there are a ton of TVs out there, right? And there are a ton of people who spend a lot of time in front of computers working. So I actually think the VR one will be quite important, too, but I think that there’s no question that the larger market over time should be smart glasses.

Now, you’re going to have both all the immersive quality of being able to interact with people and feel present no matter where you are in a normal form factor, and you’re also going to have the perfect form factor to deliver all these AI experiences over time because they’ll be able to see what you see and hear what you hear.

So I don’t know. This stuff is challenging. Making things small is also very hard. It’s this fundamentally kind of counterintuitive thing where I think humans get super impressed by building big things, like the pyramids. I think a lot of time, building small things, like cures for diseases at a cellular level or miniaturizing a supercomputer to fit into your glasses, are maybe even bigger feats than building some really physically large things, but it seems less impressive for some reason. It’s super fascinating stuff.

I feel like every time we talk, a lot has happened in a year. You seem really dialed in to managing the company. And I’m curious what motivates you these days. Because you’ve got a lot going on, and you’re getting into fighting, you’ve got three kids, you’ve got the philanthropy stuff — there’s a lot going on. And you seem more active in day-to-day stuff, at least externally, than ever. You’re kind of the last, I think, founder of your era still leading a company this large. Do you think about that? Do you think about what motivates you still? Or is it just still clicking, and it’s more subconscious?

I’m not sure that that much of the stuff that you said is that new. I mean, the kids are seven years old, almost eight now, so that’s been for a while. The fighting thing is relatively new over the last few years, but I’ve always been very physical.

We go through different waves in terms of what the company needs to be doing, and I think that that calls for somewhat different styles of leadership. We went through a period where a lot of what we needed to do was tackle and navigate some important social issues, and I think that that required a somewhat different style.

And then we went through a period where we had some quite big business challenges: handling a recession and revenue not coming in the way that we thought and needing to do layoffs, and that required a somewhat different style. But now I think we’re squarely back in developing really innovative products, especially because of some of the innovations in AI. That, in some ways, plays exactly to my favorite style of running a company. But I don’t know. I think these things evolve over time.

5. Rising Loan Costs Are Hurting Riskier Companies – Eric Wallerstein

Petco took out a $1.7 billion loan two years ago at an interest rate around 3.5%. Now it pays almost 9%.

Interest costs for the pet-products retailer surged to nearly a quarter of free cash flow in this year’s second quarter. Early in 2021, when Petco borrowed the money, those costs were less than 5% of cash flow…
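The squeeze is simple floating-rate arithmetic: the principal is unchanged, but the coupon resets with short-term rates. A toy calculation using the loan size and rates quoted above (everything else is derived, not Petco's reported financials):

```python
# Toy arithmetic on the floating-rate loan described above: same principal,
# only the rate changes. Figures beyond the quoted loan size and rates are
# simple derived numbers, not Petco's reported financials.
principal = 1.7e9      # $1.7 billion leveraged loan
rate_2021 = 0.035      # ~3.5% when the loan was taken out in early 2021
rate_now = 0.09        # ~9% after the Fed's short-term rate increases

interest_2021 = principal * rate_2021   # annual interest at the old rate
interest_now = principal * rate_now     # annual interest at today's rate

print(f"Annual interest at 3.5%: ${interest_2021/1e6:.0f} million")
print(f"Annual interest at 9.0%: ${interest_now/1e6:.0f} million")
print(f"Extra cost per year:     ${(interest_now - interest_2021)/1e6:.0f} million")
```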

… Petco isn’t alone. Many companies borrowed at ultralow rates during the pandemic through so-called leveraged loans. Often used to fund private-equity buyouts—or by companies with low credit ratings—this debt has payments that adjust with the short-term rates recently lifted by the Federal Reserve.

Now, interest costs in the $1.7 trillion market are biting and Fed officials are forecasting that they will stay high for some time.

Nearly $270 billion of leveraged loans carry weak credit profiles and are potentially at risk of default, according to ratings firm Fitch. Conditions have deteriorated as the Fed has raised rates, beginning to show signs of stress not seen since the onset of the Covid-19 pandemic. Excluding a 2020 spike, the default rate for the past 12 months is the highest since 2014…

…“So far, borrowers have done a good job of managing increased interest costs as the economy has held up better than many expected at the start of the year,” said Hussein Adatia, who manages portfolios of stressed and distressed corporate credit for Dallas-based Westwood. “The No. 1 risk to leveraged loans is if we get a big slowdown in the economy.”…

…According to the Fed’s senior-loan-officer survey, banks are becoming more stringent about whom they are willing to lend to, making it more difficult for low-rated companies to refinance. Fitch expects about $61 billion of those loans to default in the next two years, the “overwhelming majority of which” are anticipated by the end of 2023.


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Apple and Meta Platforms (parent of Facebook). Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com