What We’re Reading (Week Ending 13 July 2025)

Reading helps us learn about the world, and it is an important aspect of investing. The late Charlie Munger even went so far as to say, “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 13 July 2025):

1. Jim Chanos on the Nuttiness of ‘Bitcoin Treasury Companies’ | Odd Lots (Transcript Here) – Tracy Alloway, Joe Weisenthal, and Jim Chanos

Joe: All right, first question: Are Bitcoin treasury companies the stupidest thing you’ve ever seen in your entire life?

Jim Chanos: It’s rare that I have to increase my personal security after a podcast, but I had to after our last podcast together, when I said some intemperate things about Bitcoin treasury companies.

Here’s the thing: people get very agitated about this, and they point out what a genius idea it is, and I keep trying to point out that I’m doing the same thing that guys like Michael Saylor are doing. I’m on the same side of the trade, and I keep pointing out to my critics, “You’re on the opposite side of that trade, and you don’t want to be on the opposite side of that trade. The Bitcoin treasury paradox is that you are the one buying the pieces of paper that have infinite supply so that Michael Saylor and I can buy the digital asset with the limited supply, and it makes kind of no sense.” So what will inevitably happen is happening, in that there’s nothing proprietary here – this is simply raising capital to buy a financial asset, and other companies will do this. In fact, just since the podcast we last did, scores more companies have announced this strategy. I think there are over a hundred in the US and over 200 globally now…

…Jim: Because there’s a wonderful sales job being done about the fact that this is an economic engine in and of itself, so terms like “Bitcoin yield” are used – and I’ve called them financial gibberish, because they are. In fact, this will ultimately get arbed away by companies that do this to try to capture that spread. In the case of MicroStrategy, it’s substantial – still something like $50 billion of difference between the enterprise value of the company and the value of its Bitcoin holdings. But the thing that really shot me into orbit on all this was when Saylor and others then said, “You can’t really value us on an NAV basis, a so-called mNAV, multiple of NAV. You actually have to also give us additional value for the amount of profit that we make every quarter from the appreciation in the asset.” I said, “Well, that’s like saying my whole net worth is in a house that was worth $400,000 and is worth $500,000 a year or two later, and my net worth is not $500,000 now – it’s $2.5 million, because it’s the value of the house plus a multiple on the increase in the profitability of the asset.”…
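To make the arithmetic of that analogy explicit, here is a minimal sketch in Python. The 20x multiple is our inference, not a figure Chanos states: it is the number needed to turn the $100,000 gain in his example into the tongue-in-cheek $2.5 million.

```python
# The valuation fallacy Chanos describes, using the house numbers from
# his example. The 20x multiple on the gain is inferred by us.

house_value_before = 400_000
house_value_today = 500_000                  # current market value (the NAV)
unrealized_gain = house_value_today - house_value_before   # $100,000 of "profit"

multiple_on_gain = 20                        # inferred multiple on the gain

nav_valuation = house_value_today
gain_multiple_valuation = house_value_today + multiple_on_gain * unrealized_gain

print(f"NAV valuation:          ${nav_valuation:,}")            # $500,000
print(f"NAV plus 20x the gain:  ${gain_multiple_valuation:,}")  # $2,500,000
```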

…Tracy: I have one more question. Why did Micro – I have to remember to call them Strategy, but I can’t bring myself to do it – why did they switch from issuing convertible debt to preferred shares?

Jim: Because he realized that as he began to issue more and more common, it was putting pressure on the premium. Now the latest iteration is, “We’re going to do this quasi-equity, quasi-debt security, preferred stock, and then we can lever up the balance sheet.” This is a company whose selling point a year ago was, “We’re not going to lever, because we have this wonderful equity that we can issue at a premium.” Now they’re saying, “Maybe if it trades above 2x we’ll issue equity, but if it’s between 1x and 2x we’ll do preferred, and if it’s below 1x we’ll buy back common – and then what is Chanos going to do?” To which I said, “I’ll be out of the trade by then.” If it’s at 1x NAV, it’s not a trade. That’s the latest game plan – but stay tuned, it’ll change, I think. The narrative keeps changing…

…Jim: The legacy data centers – and there are only a couple of companies in the United States that really have legacy data centers. There’s Equinix, there’s Digital Realty, and then there’s old Colony Capital – it’s now called DigitalBridge – and they own these things in fund format.

When we took a look at this with our partner back in ’22, the idea was pretty simple. We did not see the AI explosion coming in mid-’22, but the idea was that it was a pretty crummy business then, serving cloud and SaaS demand. It became a really bad business with the advent of AI, because AI pushed the hyperscalers to invest more in state-of-the-art data centers. These older data centers are what we’re short, the idea being that the new GPU-centric data centers need liquid cooling – they basically need all the infrastructure ripped out and replaced – and the business was not a high-return-on-capital business before this. It’s getting even worse now.

What Equinix said yesterday at their Analyst Day was that revenues were not going to be quite what people thought they would be but, more ominously, that capex was going to keep increasing. That’s what we’ve been saying: these are not like warehouses where you just collect a check. These are actually operating businesses where you have to service the servers and make sure there’s redundancy. It’s a business – a tech business – yet they’re traded as REITs, and that was the opportunity. That was the dichotomy in valuation. People added back the depreciation, as they do with REITs, and valued them on a so-called FFO or AFFO basis, which is a cash flow metric. But in fact, unlike warehouses – and, to a lesser extent, shopping centers and office buildings – the capex was real. Depreciation was a real expense. To give you an example, Equinix said yesterday, “Our capex is now going to bump up to between $4 billion and $5 billion a year.” The problem is their EBITDA this year is expected to be $4.5 billion, so all of that is going to go to capex, meaning they’re going to have to basically borrow or issue equity to pay their interest and dividends. That’s just the definition of a bad business, and it’s a business that’s not growing very fast. Unlike the really true AI companies, which are growing 25%, 30%, 40% a year, these guys are growing 3%, 5%, 6% – with GDP. So there’s no growing their way out of this. They’re just really bad businesses trading at nosebleed valuations.
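The arithmetic behind that point is simple enough to check; a minimal sketch using only the figures quoted above:

```python
# Equinix figures from the quote: expected EBITDA of $4.5B against guided
# capex of $4B-$5B a year.
ebitda = 4.5e9
capex_low, capex_high = 4.0e9, 5.0e9

best_case = ebitda - capex_low    # +$0.5B left over
worst_case = ebitda - capex_high  # -$0.5B shortfall

print(f"EBITDA minus capex: ${worst_case / 1e9:+.1f}B to ${best_case / 1e9:+.1f}B")
# Essentially nothing is left, so interest and dividends must be funded
# with new debt or equity - which is why adding depreciation back (as
# FFO/AFFO does) flatters a business whose capex is real.
```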

Tracy: On the topic of idiosyncratic opportunities, I’ve got to ask about Carvana, because when my husband and I moved back to the States in 2022, we bought a used car through Carvana, and that was a mistake. It took us about six months to actually get the car, and they lost all our paperwork – it was just an absolute nightmare. I thought at the time, this is a company whose entire business model is basically built on handling regulation – that’s what they do – and I thought they’re not going to have a future if they’re this bad at it. Yet the stock is up.

Joe: It’s done insanely well.

Jim: It’s done a double round trip. It crashed 99% and now it’s up 100x, so it’s pretty interesting again. The reason it’s interesting is that if you go through the numbers, they are making more than 100% of their pre-tax profit from gains on sale of subprime loans and gains on sale of equity stakes in other companies. If you ex those two out, they’re losing money – and they’re losing money now, right after the rebound, after the restructuring of 2022-2023. This is a company that is being valued again as a secular growth stock, yet it saw its used-car revenues drop 30% between 2022 and 2023, so it’s not necessarily a secular growth company. The accounting is abysmal. What people are really missing is what’s happening in subprime auto securitizations right now – and you can track it on your Bloomberg terminal – delinquencies are starting to skyrocket.

Tracy: We actually did an episode on this recently with Jim Egan.

Jim: So a huge amount of their profits comes from generating paper from customers and then selling it into the open market or to affiliates. This is a company that was spun out of a company called DriveTime Finance – their affiliated finance company – which was originally called Ugly Duckling in the late ’90s and was run by the current CEO’s father. That company collapsed in the first subprime blowup, which was not the GFC – it was actually in the late ’90s, in subprime auto credit and consumer loans. It didn’t go bankrupt, but it came close. He had to restructure it. He bought it, took it private, restructured it, and renamed it DriveTime Finance. But that’s the genesis of Carvana. That’s its DNA. It’s basically a subprime finance-led company, if you will. Those companies should not trade at 40x and 50x expected earnings – and, by and large, they don’t. They’re consumer finance companies. So it’s an odd bird. It’s still heavily leveraged, and the stock is up a ton.

But what really got us interested again recently was the vast amount of insider selling that started in May and June. If you look at the insider selling in the company, it is now a torrent – everybody selling, pretty much every day. We just don’t think that’s a good sign given what’s happening in the subprime securitization market…

…Jim: Every once in a while. There’s one other thing, though, that I do want to mention. I was talking to someone earlier today, and I think one of the things that’s underappreciated by investors right now – and one of the things that’s been most interesting to me – is how corporate profit margins have held up, which used to be very mean-reverting, as you know. The more work we’ve done on this, the more we’re convinced that the capital spending boom we’re seeing due to tech, and specifically AI, looks very much akin to the global internet and networking buildout of the late ’90s. The problem there, of course, is that if you’re buying my chips (I’m NVIDIA) or my networking equipment (I’m Cisco or Lucent), that’s revenue and profit for me. But for you it’s a capitalized expense, written off over time, and that adds a big, big boost until people pull their orders. That’s what we saw in 2001-2002: GDP dropped about 1% to 2% in that recession. Does anybody know what corporate profits did? That was an investment-driven recession – consumers didn’t feel it at all. Earnings were down about 45%, I think, from peak to trough in the S&P. They were down about the same, a little bit more, in the global financial crisis, but of course GDP collapsed then.

Here’s an interesting little thought experiment. Right now NVIDIA’s revenues are about one-half of 1% of US GDP – about $140 billion against GDP of about $29 trillion. Can anyone tell me what the combined revenues of Cisco and Lucent – the two companies you needed when building out your internet network in ’99 and 2000 – were as a percent of GDP in 2000?

Tracy: No using your phones.

Joe: And ChatGPT.

Jim: It was half a percent. It was roughly $50 billion total on GDP of $10 trillion. Those revenues stopped growing at some point shortly thereafter and actually shrank a little. The investment boom we’re seeing right now, we’ve seen before. And it’s not just chips. It’s Caterpillar, it’s the people building the data centers, it’s the people building new utilities. There is an ecosystem around the AI boom that is considerable, as there was for TMT back in ’99 and 2000. But it is a riskier revenue stream, because if people pull back, they can pull back capex very easily – projects can get put on hold for six or nine months – and that immediately shows up in disappointing revenues and earnings forecasts if it happens. We’re not there yet, but that’s one of the risks out there that I think a lot of people are underestimating.
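A quick check of the arithmetic in that thought experiment, using only the figures given in the conversation:

```python
# The figures quoted in the conversation.
nvidia_revenue, us_gdp_2025 = 140e9, 29e12          # NVIDIA, ~2025
cisco_lucent_revenue, us_gdp_2000 = 50e9, 10e12     # Cisco + Lucent, 2000

print(f"NVIDIA / GDP, 2025:        {nvidia_revenue / us_gdp_2025:.2%}")        # ~0.48%
print(f"Cisco+Lucent / GDP, 2000:  {cisco_lucent_revenue / us_gdp_2000:.2%}")  # 0.50%
# Both buildouts ran at roughly half a percent of GDP - the parallel
# Chanos is drawing.
```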

2. Creating therapeutic abundance – Jacob Kimmel

Jack Scannell infamously predicted in 2012 that the number of drugs per billion dollars would decline two-fold every nine years. Unfortunately, our therapeutics industry has largely followed through…
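Stated as a formula, the claim is a simple exponential decay; a minimal rendering of it in Python (our own formulation of the nine-year halving, not code from the essay):

```python
# Eroom's law as stated above: the number of drugs per $1B of R&D
# spending halves every nine years.
def drugs_per_billion(years_elapsed, baseline=1.0, halving_period=9.0):
    """Drugs per $1B of R&D, `years_elapsed` years after an arbitrary baseline."""
    return baseline * 0.5 ** (years_elapsed / halving_period)

for t in (9, 18, 27):
    print(f"after {t} years: {drugs_per_billion(t):.3f}x the baseline")
# after 9 years: 0.500x; after 18 years: 0.250x; after 27 years: 0.125x
```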

…Drug program success rates are equally complex. Failures can be attributed to safety issues, failure of a drug to hit the desired biological target, or improper selection of the target for a given disease…

…We can bucket the failures into two broad categories – safety and efficacy – and make informed estimates.

1. Safety failures (~20-30% of all candidates). A molecule was developed, but proved unsafe in patients. These are typically detected as failures in Phase 1 trials.

2. Efficacy failures (~70-80% of all candidates). The remainder of drug candidates that fail – 63% of all drugs placed into trials, period – fail due to a lack of efficacy. Even though the drugs are safe, they don’t provide benefit to patients by treating their disease.

From these coarse numbers, it’s clear that the highest leverage point in our drug development process is increasing the efficacy rate of new candidate medicines…
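A back-of-envelope reading of those buckets makes the leverage claim concrete. The 25% safety midpoint and the halving comparison below are our own illustrative arithmetic, not figures from the essay:

```python
# Failure shares of all candidates entering trials, from the text.
safety_failures = 0.25      # midpoint of the ~20-30% range
efficacy_failures = 0.63    # stated share failing on efficacy

baseline_success = 1 - safety_failures - efficacy_failures
print(f"Implied overall success rate:  {baseline_success:.1%}")   # 12.0%

# Halve each failure mode independently to see where the leverage sits:
print(f"Halving safety failures:   {1 - safety_failures / 2 - efficacy_failures:.1%}")   # 24.5%
print(f"Halving efficacy failures: {1 - safety_failures - efficacy_failures / 2:.1%}")   # 43.5%
```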

…Efficacy failures can broadly occur for two reasons:

  1. Engagement failures: We chose the right biology (“target”) to manipulate, but our drug candidate failed to achieve the desired manipulation. This is the closest thing drug development has to an engineering problem.
  2. Target failures: The drug candidate manipulated our chosen biology exactly as expected. Unfortunately, the target failed to have the desired effect on the disease. This is a scientific or epistemic failure, rather than an engineering problem. We simply failed to understand the biology well enough to intervene and benefit patients.

It’s difficult to know the exact frequency of these two failure modes, but we can infer from a few sources that target failures dominate.

  • Success rates for biosimilar drugs hitting known targets are extremely high, >80%
  • Drugs against targets with genetic evidence have a 2-3 fold higher success rate than those against targets lacking this evidence, suggesting that picking good targets is a high source of leverage
  • Among organizations with meaningful internal data, picking the right target is considered the first priority of all programs (e.g. “Right target” is the first tenet of AstraZeneca’s “5Rs” framework).

The predominance of target failures has likewise led most companies working on new modalities to address a small set of targets with well-validated biology. This has led to dozens of potential medicines “crowding” on the same targets, and this trend is increasing over time…

…If searching for targets is the limiting reagent in our medicine production function, the difficulty of finding targets must increase over time in order to explain part of Eroom’s law. How could this be the case given all the improvements in underlying biomedical science?

In an influential paper “Are ideas getting harder to find?”, Nicholas Bloom and colleagues argue that many fields of invention suffer from diminishing returns to investment. Intuitively, the low hanging fruit in a given discipline is picked early and more investment is required merely to reap the same harvest from higher branches on the tree of ideas…

…Targets are getting harder to find not because we are getting worse at selection, but because many of the easy and obvious therapeutic hypotheses have already been exploited….

…While promising, human genetics can only reveal a certain class of targets. The larger the effect size of a genetic variant, the less frequently it appears in the population due to selective pressure. In effect, this means that the largest effects in biology are the least likely to be discovered using human genetics. Many of the best known targets have minimal genetic signal for this reason.

Our current methods are good at discovering individual genes that associate with health, but discovering combinations of genes is nascent at best. Human genetics cannot help us discover the combinatorial medicines or gene circuits to install in a cell therapy…

…Even with the best possible experimental methods, some of the most promising target biologies will never be searched exhaustively. There are a nearly infinite number of combinatorial genetic interventions we might drug, synthetic circuits we might engineer into cells, and changes in tissue composition we might engender.

Artificial intelligence can learn general models from the data generated by functional genomics experiments of many flavors, predicting outcomes for the experiments we haven’t yet run. If we manage to construct a performant model for a given class of target biologies, we may be able to increase the efficiency of target discovery by many orders of magnitude. The cost of discovering a target could conceivably go from >$1B to <$1M.

There’s growing interest in the idea of combining these technologies to build “virtual cells,” models that can predict the outcomes of target discovery experiments in silico before they’re ever executed in the lab. The grand version of this vision spans all possible target biologies, from gene inhibitions to polypharmaceutical small molecule treatments. In the maximal form, it may take many years to realize.

More limited realizations, though, are tractable today. The initial versions of these models are already emerging within early Predictive Biology companies. As a few examples, Recursion is building models of genetic perturbations in cancer cells, Tahoe Tx is building models in oncology with a chemical biology approach, and NewLimit has developed models for reprogramming cell age across human cell types. Focused models like these represent an early demonstration that this general approach can yield therapeutic value…

…We are entering an epoch of abundant intelligence. With these tools, we have the opportunity to discover & design target biologies at a cost too cheap to meter. The therapies that emerge could serve as the counterexample that downgrades Eroom’s law to a historical conjecture.

3. What I learned watching 78 videos from Tesla’s Austin robotaxis – Timothy B. Lee

I’ve watched 78 videos posted by pro-Tesla influencers who got early access to the service. Those videos documented more than 16 hours of driving time across nearly 100 rides.

These videos exceeded my expectations. Tesla’s robotaxi rollout wasn’t perfect, but it went as well as anyone could have expected. A handful of minor glitches got outsized attention online, but a large majority of trips were completed without incident…

…Tesla’s robotaxis drove flawlessly during the vast majority of the 16 hours of driving footage I watched. They stayed in their lane, followed traffic laws, and interacted smoothly with other vehicles…

…Tesla’s most widely discussed error occurred around seven minutes into this video. The robotaxi approached an intersection and got into the left-turn lane. But the robotaxi couldn’t make up its mind whether it wanted to turn left or go straight. The car’s steering wheel jerked back and forth several times. On the car’s display, the blue ribbon showing the car’s intended path jumped back and forth erratically between turning left and continuing straight. Finally, the Tesla decided to proceed straight, but ended up driving the wrong way in the opposite left-turn lane…

…But in a piece last year, I argued that they were misunderstanding the situation.

“Tesla hasn’t started driverless testing because its software isn’t ready,” I wrote. “For now, geographic restrictions and remote assistance aren’t needed because there’s always a human being behind the wheel. But I predict that when Tesla begins its driverless transition, it will realize that safety requires a Waymo-style incremental rollout.”

That’s exactly what’s happened:

  • Just as Waymo launched its fully driverless service in 50 square miles near Phoenix in 2020, so Tesla launched its robotaxi service in about 30 square miles of Austin last month.
  • Across 16 hours of driving, I never saw Tesla’s robotaxi drive on a freeway or go faster than 43 miles per hour. Waymo’s maximum speed is currently 50 miles per hour.
  • Tesla has built a teleoperation capability for its robotaxis. One job posting last year advertised for an engineer to develop this capability. It stated that “our remote operators are transported into the device’s world using a state-of-the-art VR rig that allows them to remotely perform complex and intricate tasks.”

The launch of Tesla’s robotaxi service in Austin is a major step toward full autonomy. But the Austin launch also makes it clear that Tesla hasn’t discovered an alternative path for testing and deploying driverless vehicles. Instead, Tesla is following the same basic deployment strategy Waymo pioneered five to seven years ago.

Of course, this does not necessarily mean that Tesla will scale up its service as slowly as Waymo has. It took almost five years for Waymo to expand from its first commercial service (Phoenix in 2018) to its second (San Francisco in 2023). The best informed Tesla bulls acknowledge that Waymo is currently in the lead but believe Tesla is positioned to expand much faster than Waymo did…

…Last month, Waymo published a study demonstrating that self-driving software benefits from the same kind of “scaling laws” that have driven progress in large language models.

“Model performance improves as a power-law function of the total compute budget,” the Waymo researchers wrote. “As the training compute budget grows, optimal scaling requires increasing the model size 1.5x as fast as the dataset size.”
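To unpack the 1.5x claim: under the standard approximation from the scaling-law literature that training compute C scales as the product of model size N and dataset size D, the compute-optimal exponents follow directly. The sketch below is our own illustration of that arithmetic, not Waymo’s code:

```python
# Write the compute-optimal allocations as N* ~ C**a and D* ~ C**b.
# Then a + b = 1 (so that N * D scales like C), and the quoted result
# says model size grows 1.5x as fast as data: a = 1.5 * b.

ratio = 1.5
b = 1 / (1 + ratio)   # dataset exponent: D* ~ C**0.4
a = ratio * b         # model exponent:   N* ~ C**0.6
print(f"N* ~ C^{a:.1f}, D* ~ C^{b:.1f}")

# Example: a 100x compute budget implies roughly a 16x bigger model
# trained on roughly 6x more data.
print(f"100x compute -> {100**a:.0f}x model size, {100**b:.0f}x data")
```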

When Waymo published this study, Tesla fans immediately seized on it as a vindication of Tesla’s strategy. Waymo trained its experimental models using 500,000 miles of driving data harvested from Waymo safety drivers driving Waymo vehicles. That’s a lot of data by most standards, but it’s far less than the data Tesla could potentially harvest from its fleet of customer-owned vehicles…

…I posed this question to Dragomir Anguelov, the head of Waymo’s AI foundations team and a co-author of Waymo’s new scaling paper. He argued that the paper’s implications are more complicated than Tesla fans think.

“We are not driving a data center on wheels and you don’t have all the time in the world to think,” Anguelov told me in a Monday interview. “Under these fairly important constraints, how much you can scale and what are the optimal ways of scaling is limited.”

Anguelov also pointed to an issue that will be familiar to anyone who read last month’s explainer on reinforcement learning.

Waymo’s scaling paper—like OpenAI’s famous 2020 scaling law paper—focused on models trained with imitation learning…

…Anguelov was a co-author of a 2022 Waymo paper finding that self-driving models trained with a combination of imitation and reinforcement learning tend to perform better than models trained only with imitation learning.

Imitation learning is “not the most sophisticated thing you can do,” Anguelov told me. “Imitation learning has a lot of limitations.”

This is significant because demonstration data from human drivers—the kind of data Tesla has in abundance—isn’t very helpful for reinforcement learning. Reinforcement learning works by having a model try to solve a task and then judging whether it succeeded. For self-driving, this can mean having a model “drive” in simulation and then judging whether it caused a collision or other problems. Or it can mean running the software on real cars and having a safety driver intervene if the model makes a mistake. In either case, it’s not obvious that having vast amounts of human driving data is especially helpful.
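As a concrete illustration of the difference, here is a minimal sketch of the simulation-based reinforcement learning loop described above. Every interface in it (the environment, `policy.act`, the reward terms) is a hypothetical placeholder, not any real Waymo or Tesla API:

```python
# The policy drives in simulation and is judged on outcomes (collisions,
# progress), not on how closely it matches a human demonstration.

def run_episode(env, policy, max_steps=1_000):
    """Roll out one simulated drive and return an outcome-based score."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy.act(state)             # steering / throttle decision
        state, done, info = env.step(action)
        if info.get("collision"):              # judge the outcome...
            total_reward -= 100.0              # ...penalizing failures heavily
            break
        total_reward += info.get("progress", 0.0)  # reward safe forward progress
        if done:
            break
    return total_reward
```

Note that nothing in this loop consumes human driving traces, which is why a vast corpus of demonstrations is less useful here than it is for imitation learning.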

One finding from that 2022 paper is particularly relevant for thinking about the performance of Tesla’s robotaxis. The Waymo researchers noted that models trained only with imitation learning tend to drive well in common situations but make mistakes in “more unusual or dangerous situations that occur only rarely in the data.”

In other words, if you rely too much on imitation learning, you can end up with a model that drives like an expert human most of the time but occasionally makes catastrophic mistakes…

…Since its 2018 launch, Waymo has acknowledged that it has remote operators who sometimes provide real-time assistance to its vehicles. But Waymo has also said that these remote operators never drive the vehicles in real time. Instead, they provide high-level feedback, while the vehicle always remains in control of second-by-second decisions.

In contrast, Tesla’s job posting stated that teleoperators can be “transported into the device’s world” so that they can “remotely perform complex and intricate tasks.” Could those “complex and intricate tasks” include driving the car for seconds or even minutes at a time?

In the videos I watched, a number of Tesla’s early customers commented on how human-like Tesla’s driving was. That might just be a tribute to the quality of Tesla’s AI model. But it’s also possible that sometimes a human driver is literally driving the vehicle from a remote location.

4. No Bad Risks, Only Bad Rates — And Other Lessons From National Indemnity Founder Jack Ringwalt – Kingswell

There are no bad risks in insurance — only bad rates

This maxim was Ringwalt’s north star, the iron-clad principle that allowed him to fearlessly pursue unusual and unwanted risks without driving himself right out of business. Almost anything can be intelligently insured, so long as you charge enough for the coverage.

(It’s also reminiscent of one of my favorite Warren Buffett lines. “I can go into an emergency ward and write life insurance,” he said in 1990, “if you let me charge enough of a premium.”)

When evaluating potential opportunities, Ringwalt’s open mind welcomed the weird and the wild — and he wrote many policies on offbeat ventures that others wouldn’t touch with a ten-foot pole. But, when it came to pricing, that flexibility vanished. If the market would not meet his rate, Ringwalt never blinked. He just waved goodbye to the deal with an indifferent shrug.

“When business is unprofitable to the companies in general,” wrote Ringwalt, “our premium volume has taken a very sharp spurt and when business has been profitable for most companies, we have run into very unintelligent competition and have had to cut down temporarily on our writings.”

The insurance merry-go-round is always the same: profitability lures rivals who slash rates to grab market share, only to crater when losses inevitably pile up. And when the industry bleeds, fly-by-night competitors vanish, prices climb back to normal, and the cycle starts spinning anew. “This pattern will keep repeating,” he wrote. “It makes no sense, but it’s human nature.”

Ringwalt steadfastly refused to play that sucker’s game — a tradition that continued under Berkshire’s aegis. From 1986 to 1999, National Indemnity’s revenue nosedived 85% as profitable premiums evaporated. But, rather than succumb to the pressure to write more business at any price, Buffett and co. urged employees to wait patiently for the right pitch (so to speak). Some things never change.

5. Why I don’t think AGI is right around the corner – Dwarkesh Patel

Sometimes people say that even if all AI progress totally stopped, the systems of today would still be far more economically transformative than the internet. I disagree. I think the LLMs of today are magical. But the reason the Fortune 500 aren’t using them to transform their workflows isn’t that management is too stodgy. Rather, I think it’s genuinely hard to get normal, humanlike labor out of LLMs. And this has to do with some fundamental capabilities these models lack…

…But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge, huge problem. The LLM baseline at many tasks might be higher than an average human’s, but there’s no way to give a model high-level feedback. You’re stuck with the abilities you get out of the box. You can keep messing around with the system prompt, but in practice this just doesn’t produce anything close to the kind of learning and improvement that human employees experience.

The reason humans are so useful is not mainly their raw intelligence. It’s their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task.

How do you teach a kid to play a saxophone? You have her try to blow into one, listen to how it sounds, and adjust. Now imagine teaching saxophone this way instead: A student takes one attempt. The moment they make a mistake, you send them away and write detailed instructions about what went wrong. The next student reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next student.

This just wouldn’t work. No matter how well honed your prompt is, no kid is going to learn to play the saxophone just from reading your instructions. But this is the only modality we, as users, have to ‘teach’ LLMs anything…

…When we do solve continual learning, we’ll see a huge discontinuity in the value of the models. Even if there isn’t a software-only singularity (with models rapidly building smarter and smarter successor systems), we might still see something that looks like a broadly deployed intelligence explosion. AIs will be deployed broadly through the economy, doing different jobs and learning while doing them, the way humans can. But unlike humans, these models can amalgamate their learnings across all their copies. So one AI is basically learning how to do every single job in the world. An AI that is capable of online learning might functionally become a superintelligence quite rapidly, without any further algorithmic progress…

…But here are the timelines where I’d take a 50/50 bet:

  • AI can do taxes end-to-end for my small business as well as a competent general manager could in a week – including chasing down all the receipts on different websites, finding all the missing pieces, emailing back and forth with anyone we need to hassle for invoices, filling out the form, and sending it to the IRS: 2028. I think we’re in the GPT-2 era for computer use. But we have no pretraining corpus, and the models are optimizing for a much sparser reward over a much longer time horizon, using action primitives they’re unfamiliar with. That being said, the base model is decently smart and might have a good prior over computer-use tasks, and there’s a lot more compute and there are more AI researchers in the world, so it might even out. Preparing taxes for a small business feels like what GPT-4 was for language, but for computer use. It took four years to get from GPT-2 to GPT-4. Just to clarify, I am not saying that we won’t have really cool computer-use demos in 2026 and 2027 (GPT-3 was super cool, but not that practically useful). I’m saying that these models won’t be capable of handling, end-to-end, a week-long and quite involved project that involves computer use.
  • AI learns on the job as easily, organically, seamlessly, and quickly as a human, for any white-collar work. For example, if I hire an AI video editor, after six months it has as much actionable, deep understanding of my preferences, our channel, and what works for the audience as a human would: 2032. While I don’t see an obvious way to slot continual online learning into current models, seven years is a long time! GPT-1 had just come out this time seven years ago. It doesn’t seem implausible to me that over the next seven years, we’ll find some way for models to learn on the job.

You might react, “Wait, you made this huge fuss about continual learning being such a handicap, but your timeline says we’re seven years away from what would, at minimum, be a broadly deployed intelligence explosion.” And yeah, you’re right. I’m forecasting a pretty wild world within a relatively short amount of time.

AGI timelines are very lognormal: it’s either this decade or bust. (Not really bust – more like a lower marginal probability per year – but that’s less catchy.) AI progress over the last decade has been driven by scaling the training compute of frontier systems (over 4x a year). This cannot continue beyond this decade, whether you look at chips, power, or even the fraction of raw GDP spent on training. After 2030, AI progress has to come mostly from algorithmic progress. But even there, the low-hanging fruit will be plucked (at least under the deep learning paradigm). So the yearly probability of AGI craters.
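The arithmetic behind “this cannot continue” is stark; a minimal sketch of the compounding, using only the 4x/year figure quoted above:

```python
# Pure arithmetic on the growth rate quoted above: frontier training
# compute scaling at over 4x per year.
growth = 4.0
for years in (1, 3, 5, 10):
    print(f"{years:>2} years at 4x/yr -> {growth**years:>9,.0f}x today's compute")
# 5 years -> ~1,000x and 10 years -> ~1,000,000x, which quickly collides
# with limits on chips, power, and the fraction of GDP spent on training.
```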


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet (the company behind Waymo) and Tesla. Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com