What We’re Reading (Week Ending 15 December 2024)

Reading helps us learn about the world and it is a really important aspect of investing. The late Charlie Munger even went so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 15 December 2024):

1. SpaceX: Rocket Ship – Matt Reustle and Luke Ward

Luke

So if we take the CapEx part of that first, NASA estimated that the cost to develop the Falcon 9 from scratch would be about $4 billion. But SpaceX ended up doing it for about a tenth of that price. So to begin with, that’s an order of magnitude improvement in the level of investment required.

SpaceX gives you the prices for launches on their website. So it’s $70 million per Falcon 9 flight—that’s already 20 times cheaper than the Space Shuttle was per kilogram into orbit. But the real kicker, as you point out, is the operating leverage that comes from having partial reusability…

…Starship is designed to be fully and rapidly reusable. So unlike Falcon 9, which is only partially reusable, Starship should also be able to fly multiple times every day. It’s going to have a payload capacity of about 100 tons to orbit at the beginning, but probably rising to closer to 200 tons to orbit over time.

And Musk has suggested that a variable cost of around $10 million per launch is the ballpark figure they’d be aiming for at scale in a steady state, ambitiously maybe even falling to $2 million—a figure which has been touted. If you believe those kinds of performance levels are feasible, that gets the cost down to around $10 per kilogram. That’s over 100 times cheaper than the Falcon 9 we’re talking about at the moment. And that would have a dramatic effect on what’s economically feasible for humanity to do in space…
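The launch-cost arithmetic above is easy to sanity-check. Here is a minimal sketch of the numbers (ours, not the podcast’s): the launch prices and payloads are the figures quoted in the conversation, while the Falcon 9 payload to low Earth orbit is an assumption we supply for illustration.

```python
# Rough cost-per-kilogram arithmetic using the figures quoted above.
# The Falcon 9 payload figure (~17,500 kg to low Earth orbit with booster recovery)
# is our assumption for illustration; the conversation does not state it here.

def cost_per_kg(launch_cost_usd, payload_kg):
    return launch_cost_usd / payload_kg

falcon9 = cost_per_kg(70e6, 17_500)          # ~$70M list price
starship_early = cost_per_kg(10e6, 100_000)  # $10M launch, 100 tons to orbit
starship_goal = cost_per_kg(2e6, 200_000)    # $2M launch, 200 tons to orbit

print(f"Falcon 9:        ~${falcon9:,.0f}/kg")
print(f"Starship early:  ~${starship_early:,.0f}/kg")
print(f"Starship goal:   ~${starship_goal:,.0f}/kg  (the ~$10/kg figure quoted)")
print(f"Implied improvement over Falcon 9: ~{falcon9 / starship_goal:,.0f}x")
```

On these rough numbers, the “over 100 times cheaper” remark is, if anything, conservative.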

…Matt

Satellites in low Earth orbit—there is quite a bit of history of that being the obvious space use case, of that having an existing economy. I think Starlink is an extension of that. Different, absolutely, but an extension of what was going on.

Are there brand new industries being unlocked, or obvious things with line of sight that open up from a space economy perspective, that you see either today or in the near future? And you could extend that out however far you think is reasonable.

Luke

A lot of these options which SpaceX has to develop, brand new markets that don’t exist already, are a function ultimately of the cost curve. Take semiconductor manufacturing on Earth; at the moment, we spend billions of dollars per fab to recreate the conditions which are readily accessible in space for free, if you can get there.

And so there’s some point where the cost curves intersect: the cost of building a fab versus the cost of launching a fab, or a fab’s equipment, into orbit and operating there instead. The same can be said of pharmaceutical research. The crystallization structures which are able to happen in space are different from the ones which are able to happen under the influence of gravity.

So if you think about pricing on pharmaceuticals, extending patent lives, etc., if you can move the manufacturing or the research lab for cutting-edge pharmaceuticals into space, you could make high-value, low-volume products. Something which would really make sense to do and doesn’t require a huge technological innovation to happen.

The list can go on and on—artificial organs, for example, or being able to manufacture perfectly spherical lenses. There are lots and lots of things which could be made.

Maybe the way to think about that is that space-based manufacturing could be the next large market for this if the costs can continue to come down. Starship having the volume of an A380 or a 747—think of the equivalent size of factory that represents. And if that can be launched every single day and recovered every single day for $10 per kilogram, that could be a really compelling way to do quite a lot of manufacturing.

Incidentally, that’s something that Jeff Bezos really focuses on in his vision for space (as opposed to Mars per se): moving a lot of the heavy, polluting industry off the planet. Why don’t we turn Earth into this perfect nature reserve, with all these polluting aspects of manufacturing going into orbit? Which, again, is very compelling.

Solar power probably needs a lot more innovation to deliver from orbit than communications, but I’d say it’s maybe an inevitability if the cost gets to a low enough point. Think about how much solar energy is available without the atmospheric attenuation, for example—you know, 24/7. There are lots of compelling reasons why, if it’s cheap enough, at some point a lot of these things probably should happen, not just could happen.

Matt

The solar energy point is a great example of something that has an entirely different dynamic in space than on Earth. What would the other things be? Just out of curiosity, when you mentioned semiconductors or pharmaceuticals, is it purely gravity? Are there other things that happen in space, or don’t happen in space, that happen on Earth and would drive that difference?

Luke

There are the vacuum conditions—so there isn’t an atmosphere—so think of the level of impurities which you need to get rid of for a vapor deposition machine, for example. You don’t have the same kind of challenge there of having to create this deep vacuum.

Then, arguably, in space, because you don’t have gravity, you could construct much larger structures there rather than construct them on the ground and then launch them.

So again, that volume constraint which we were talking about earlier, in terms of how big your payload is—if you’re able to get enough stuff up there and assemble it in space, as we did with the International Space Station, things can be much, much larger given the payload bay of Starship than they could with the Space Shuttle.

Matt

When you think about low Earth orbit versus geosynchronous orbit versus something like Mars—which I think was the original vision with Elon and SpaceX—how much does that change the economics when you extend out?

Is it orders of magnitude where it’s an exponential cost curve to go further out? Even just if we focus on the launch and use a satellite for an example, before we get into all the manufacturing dynamics, is there any way to contextualize that from a cost perspective?

Luke

The really good news here is that gravitational force decreases with the square of distance. So the biggest challenge is getting off the surface and into orbit. Once you’re there, from an energy point of view, it’s a lot easier to go anywhere else in the solar system.

So if you were to take Falcon 9 again as the example, for the same price, it can place 20 tons into low Earth orbit, or it can place 4 tons into Martian orbit. That’s despite the latter being over a million times further away. Now, this feeds into what I think is probably the biggest misconception about SpaceX and its Mars ambitions.
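To put those figures side by side, here is a quick back-of-the-envelope comparison (ours, not Luke’s), reusing the launch price quoted earlier in the conversation as an assumption:

```python
# Same launch price, different destinations, using only the figures quoted above.
launch_price = 70e6          # assumed: the Falcon 9 list price quoted earlier
leo_payload_kg = 20_000      # 20 tons to low Earth orbit
mars_payload_kg = 4_000      # 4 tons to Martian orbit

leo_cost_per_kg = launch_price / leo_payload_kg    # ~$3,500/kg
mars_cost_per_kg = launch_price / mars_payload_kg  # ~$17,500/kg

# Distance ratio: low Earth orbit is a few hundred km up; Mars at its farthest
# is roughly 400 million km away, hence the "over a million times further" remark.
distance_ratio = 400e6 / 400

print(f"LEO:  ~${leo_cost_per_kg:,.0f}/kg")
print(f"Mars: ~${mars_cost_per_kg:,.0f}/kg (about 5x the cost for ~{distance_ratio:,.0f}x the distance)")
```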

I’d say for most people, the idea of a commercial entity pursuing exploration is naive at best. But I’d argue that long-term investors should be absolutely ecstatic about SpaceX having this mission as a forcing function. Firstly, it’s the key to getting the best people in the world to come and work for the organization and allow it to innovate in a manner and speed that others simply can’t match. That’s a huge competitive advantage.

Secondly, the way to get more cargo to Mars is actually about figuring out how to get more cargo into orbit around Earth, because that’s where the cost is all concentrated. It’s all in that first initial leap off the surface of our planet. So rather than framing Starship as a system that makes it possible to get to other planets, think about it instead being a system that could make it enormously more profitable to operate a business in Earth orbit and unlock brand new commercial use cases there as well…

…Luke

When we talk to SpaceX, they’re still very much focused on the here and now in the next couple of years. They have ambitions for things which they could do, but the focus is very much on the core business: serving the core customers, serving Starlink, getting Starship to launch status. We’ll deal with the next things next.

They’ve got so many things which they could be doing at the moment. When we come to this, a lot of it is us hypothesizing about how that could evolve beyond the information which they’ve given us. The trend you’ve seen of them being vertical integrators could be quite informative. It might be that they end up being the ones who are commercializing a lot of these other services.

Rather than having a customer paying them for it at substantial scale, it would make more sense for them to do it. Could you start seeing some of these aspects? If they get into space-based manufacturing, for example, could that be priced on a value-added basis rather than a subscription basis or a volume basis? Certainly seems possible. If you start running data centers in space because it’s easier to power or cool them, etc., could you start offering data storage and machine learning alongside Starlink connectivity?

The further you look out, the wackier it can get, but it’s also potentially financially plausible. You maybe have to take a bit of inspiration from science fiction here, but it’s quite a common trope in some of these movies of these large mega-corporations—the Weyland-Yutani Corporation from the Alien movies, or the Resources Development Administration from the Avatar films—where one mega-corporation is able to dominate access to space early on and then ends up controlling the entire extrasolar economy because of the advantages it had at that really early stage…

…Luke

Human spaceflight at the moment has definitely been the preserve of the rich and famous, but at scale it becomes cheaper and cheaper. And if we are talking about launching, Starship could be used as much for sending cargo and people to other points on the planet as to other points in space. And so one option that the government’s looking into is this notion of rocket cargo delivery. Starship would be able to deliver 200,000 kg anywhere on the planet within 40 minutes.

What does that do for a rapid reaction force, and what does that do for next-day delivery? At some stage, it’s going to be feasible to put a lot of astronauts or paying passengers on something like that, and it will be a quicker and potentially more efficient way to do long-distance travel. These things really could get quite wild, but they could be plausible at some stage. Again, that’s not the reason to invest in the company today; that’s not the basis of what they’re doing, and it’s a lot of people getting excited about things.

But come back in 10 years: I’d be disappointed if you or I weren’t able to go into space at some point in our lifetime for the cost of a premium economy ticket or something like that.

2. Japan vs Big Tech – Daye Deng

Put simply, US big tech has grown so dominant that it’s singlehandedly blowing a hole in the trade balance of a nation as large as Japan…

…In 2023, Japan recorded JPY 5.5 trillion in so-called digital trade deficit. The Ministry of Economy, Trade and Industry (METI) projects this to grow to JPY 8 trillion by 2030, at which point it could surpass Japan’s annual bill for crude oil imports.

Japan’s total goods and services trade deficit in 2023 was JPY 6 trillion, with the digital deficit accounting for JPY 5.5 trillion…

…Japan has been in a structural deficit for goods trade over the past two decades. This may come as a surprise to those who have held onto the old idea that Japan is an export powerhouse.

There are several reasons for the shift:

  • Japanese firms have moved production overseas. This isn’t entirely negative since Japanese firms (and their profits) continue to grow, but it has contributed to a widening trade deficit.
  • Japan’s loss of global competitiveness in certain industries, like chips and appliances, to rivals such as South Korea.
  • Rising cost of imports, driven by energy shocks, rising overseas inflation, and a weak yen.

The third point deserves elaboration. Japan’s reliance on imported energy has long been a critical structural weakness. For example, following the 2011 Fukushima nuclear disaster, Japan significantly reduced domestic nuclear energy production and increased its reliance on imported LNG, which became a major contributor to the trade deficit.

A similar pattern emerged post-Covid. Global oil and commodity prices surged. This was compounded by high rates of overseas inflation on general imports. On top of that, a historically weak yen made imports even more expensive…

…Since 2014, the Japanese government has been disclosing the digital deficit, which has grown 2.6-fold from 2014 to JPY 5.5 trillion in 2023. This is a net figure derived from JPY 9.2 trillion paid for digital services and JPY 3.7 trillion received from abroad…
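As a quick check of how that net figure is constructed (our arithmetic, using only the numbers in the excerpt):

```python
# Japan's 2023 digital services balance, in JPY trillions, from the figures quoted above.
paid_abroad = 9.2
received_from_abroad = 3.7
net_digital_deficit = paid_abroad - received_from_abroad      # 5.5

growth_multiple = 2.6                                          # growth from 2014 to 2023
implied_2014_deficit = net_digital_deficit / growth_multiple   # ~2.1 trillion yen

print(f"Net digital deficit (2023): JPY {net_digital_deficit:.1f} trillion")
print(f"Implied 2014 deficit:       JPY {implied_2014_deficit:.1f} trillion")
```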

…The picture is quite clear: on the services side, Japan is taking its hard-earned surplus from tourism and spending it all on paying for digital services.

How will this play out? While I’m personally bullish on the Japanese tourism industry, it still has natural growth constraints. However, there is no ceiling on how much Japan can continue to spend on digital services. In fact, digital services spend could accelerate given:

  • Japan is already playing catch-up in the digital realm, and is behind other major countries in many key digital metrics.
  • AI is poised to make Japan’s digital dependency crisis even worse, in a world where firms like Nvidia and those that are able to scale AI services (e.g. hyperscalers) dominate AI economics.

Without an AI champion of its own, Japan has few options if it wants to avoid being left behind in the new digital paradigm…

…Based on our discussion so far, does it surprise you that the Japanese yen has been weak?

“According to an analysis by Mizuho Research & Technologies, if the digital deficit doubles from the 2023 level by the end of March 2026, it will add another 5 to 6 yen of depreciation in the Japanese currency’s value against the dollar.”

– Nikkei Asian Review

Or let me put it another way — would you feel bullish about the currency of a country that relies on tourism as its primary growing surplus, while ultimately funneling all those earnings (and more) into paying for essential energy imports and ever-increasing digital spend on big tech?…

…In recent years we’ve seen how hard Japan has been trying to reclaim its position in the semiconductor industry. But does it only care about hardware and not its digital sovereignty? Will Japan continue to sit back and let US tech giants profit endlessly, or will it finally confront its position as a digital colony?

3. Guyana and the mystery of the largest ranch in the Americas – Swen Lorenz

Many mistakenly believe that Guyana is located in Africa – when it’s actually nestled right next to Venezuela…

…In 2015, ExxonMobil discovered oil off the coast of Guyana.

The discovery changed the course of the country. Long one of the poorest nations of the Western hemisphere, Guyana has since become the world’s fastest growing economy.

Since 2015, its GDP per capita has more than quintupled. In 2022 and 2023, its economy grew by 67% and 33%, respectively. Another stunner of a year is forecast for 2024, with 34% GDP growth.

The former British colony benefits from a large amount of oil wealth spread around a relatively small population of 800,000 people. Per head, there is twice as much oil as in Saudi Arabia. To put things in perspective, Guyana’s landmass is nearly as big as the UK, but it only has 1.2% of the UK’s population…
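The headline figures hang together under some quick arithmetic (ours; the UK population used as a comparator is an assumption for illustration):

```python
# Sanity-checking the Guyana figures quoted above.
uk_population = 67_000_000      # assumed ballpark UK population
guyana_population = 800_000
print(f"Guyana vs UK population: {guyana_population / uk_population:.1%}")   # ~1.2%

# Compounding the quoted growth rates: 67% (2022), 33% (2023), 34% forecast (2024).
growth = 1.67 * 1.33 * 1.34
print(f"Economy vs its end-2021 size after three such years: ~{growth:.1f}x")
```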

…Just a week ago, ExxonMobil reported that it had reached 500m barrels of oil produced in Guyana since output began in 2019. The goal is to lift production to 1.3m barrels per day by 2027, up from 650,000 barrels per day currently. In comparison, the UK’s North Sea produces just 1m barrels per day…

…Supporters of the country’s energy projects claim that they will bring untold riches to the population. Indeed, Guyana recently started to hand out cheques to its citizens, including the Guyanese diaspora of 400,000 people, who the government encourages to come back as it needs more labour to support the strong economic growth.

4. Capital, Compute & AI Scaling – Patrick O’Shaughnessy, Chetan Puttagunta, and Modest Proposal

Modest

Everyone knows the Mag 7 represent a larger percent of the S&P 500 today. But beyond that, I think thematically AI has permeated far more broadly into industrials and into utilities, and really makes up, I would argue, somewhere between 40 and 45% of the market cap as a direct play on this. And if you even abstract to the rest of the world, you start bringing in ASML, you bring in TSMC, you bring in the entire Japanese chip sector. And so if you look at the cumulative market cap that is a direct play on artificial intelligence right now, it’s enormous…

… I think at the micro level this is a really powerful shift if we move from pre-training to inference time and there are a couple big ramifications.

One, it better aligns revenue generation and expenditures. I think that is a really, really beneficial outcome for the industry at large. In the pre-training world, you were going to spend $20, $30, $40 billion on CapEx, train the model over 9 to 12 months, do post-training, then roll it out, then hope to generate revenue off of that in inference. In a test-time compute scaling world, you are now aligning your expenditures with the underlying usage of the model. So just from a pure efficiency and scalability standpoint on the financial side, this is much, much better for the hyperscalers.
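A stylized way to see the cash-flow timing point being made here (every number below is invented for illustration; none of it comes from the podcast):

```python
# Toy cash-flow timing sketch: in a pre-training-scaling world the spend is front-loaded,
# in a test-time (inference) scaling world it tracks usage. All numbers are invented.

months = range(1, 25)
# Usage, and therefore revenue, only starts once the model ships after month 12.
monthly_queries = [0] * 12 + [q * 1e9 for q in range(1, 13)]
inference_cost_per_query = 0.004   # hypothetical compute cost per served query

# Pre-training world: ~$30B of CapEx spread over months 1-12, before any revenue arrives.
pretrain_spend = [2.5e9 if m <= 12 else monthly_queries[m - 1] * inference_cost_per_query
                  for m in months]
# Test-time world: compute is bought roughly as the queries it serves arrive.
testtime_spend = [monthly_queries[m - 1] * inference_cost_per_query for m in months]

print(f"Spend committed before first revenue, pre-training world: ${sum(pretrain_spend[:12]) / 1e9:.0f}B")
print(f"Spend committed before first revenue, test-time world:    ${sum(testtime_spend[:12]) / 1e9:.0f}B")
```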

I think a second big implication, and again we have to say we don’t know that pre-training scaling is going to stop, is that if you do see this shift towards inference time, you need to start thinking about how you re-architect the network design. Do you need million-chip superclusters in locations with low-cost energy and land, or do you need smaller, lower-latency, more efficient inference-time data centers scattered throughout the country? And as you re-architect the network, what are the implications for power utilization and grid design?

A lot of the narratives that have underpinned huge swaths of the investment world, I would say, have to be rethought. And because this is a relatively new phenomenon, I don’t believe that the public markets have started to grapple with what that potential new architecture looks like and how that may impact some of the underlying spend…

Chetan

But at the moment, at this plateauing time, we’re starting to see these small teams catch up to the frontier. And what I mean by the frontier is where the state-of-the-art models, especially around text, are performing. We’re seeing these small teams of quite literally two to five people jumping to the frontier with spend that is not one order, but multiple orders of magnitude less than what these large labs were spending to get there.

I think part of what’s happened is the incredible proliferation of open-source models. Specifically, what Meta’s been doing with Llama has been an extraordinary force here. Llama 3.1 comes in three flavors: 405 billion, 70 billion, and 8 billion parameters. And then Llama 3.2 comes in 1 billion, 3 billion, 11 billion, and 90 billion.

And you can take these models, download them, put them on a local machine, you can put them in a cloud, you can put them on a server, and you can use these models to distill, fine-tune, train on top of, modify, et cetera, et cetera, and catch up to the frontier with pretty interesting algorithmic techniques.

And because you don’t need massive amounts of compute, or you don’t need massive amounts of data, you could be particularly clever and innovative about a specific vertical space, or a specific technique, or a particular use case to jump to the frontier very, very quickly…
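To make that workflow concrete, here is a minimal sketch (ours, not from the podcast) of taking an open-weight Llama checkpoint and adapting it to a narrow vertical with parameter-efficient fine-tuning rather than a frontier-scale training run. The checkpoint id, libraries, and hyperparameters are assumptions for illustration; it presumes the transformers and peft libraries and access to the gated weights.

```python
# Minimal sketch: adapt an open-weight Llama model with LoRA instead of full training.
# The model id and hyperparameters are illustrative assumptions, not recommendations.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"   # assumed Hugging Face checkpoint id (gated access required)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices while the 8B base weights stay frozen,
# which is one reason tiny teams can iterate at a fraction of frontier-lab spend.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # typically well under 1% of the base model's parameters

# ...from here, fine-tune on a domain-specific dataset with a standard training loop...
```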

…Chetan

The force of Llama today has come down to two things, and I think one is that this has been very beneficial to Meta. The transformer architecture that Llama uses is a sort of standard architecture, but it has its own nuances.

And if the entire developer ecosystem that’s building on top of Llama starts to assume that the Llama 3 transformer architecture is the foundational and sort of standard way of doing things, it standardizes the entire stack towards this Llama way of thinking, all the way from how the hardware vendors will support your training runs to the hyperscalers and on and on and on. And so standardizing on Llama itself is starting to become more and more prevalent.

And so if you were to start a new model company, what ends up happening is that starting with Llama today is not only great because Llama is open source; it’s also extraordinarily efficient because the entire ecosystem is standardizing on that architecture…

…Modest

So I think the interesting part for OpenAI is that they just raised their recent round, and there was some fairly public commentary around what the investment case was. You’re right, a lot of it was oriented around the idea that they had escape velocity on the consumer side, that ChatGPT was now the cognitive reference, and that over time they would be able to aggregate an enormous consumer demand side and charge appropriately for that, and that it was much less a play on the enterprise API and application building.

And that’s super interesting if you actually play out what we’ve talked about. When you look at their financials, if you take out training runs, if you take out the need for this massive upfront expenditure, this actually becomes a wildly profitable company quite quickly in their projections. And so in a sense it could be better.

Now then the question becomes what’s the defensibility of a company that is no longer step function advancing on the frontier?…

…Chetan

These products are truly, as a software investor, absolutely amazing.

They require a total rethinking from first principles of how these things are architected. You need unified data layers, you need new infrastructure, you need new UI, and all this kind of stuff. And it’s clear that the startups are significantly advantaged against incumbent software vendors. And it’s not that the incumbent software vendors are standing still; it’s just that the innovator’s dilemma in enterprise software is playing out much more aggressively in front of our eyes today than it is in consumer.

I think in consumer, the consumer players recognize it, are moving on it, and are doing stuff about it. Whereas in enterprise, even if you recognize it, even if you have the desire to do something, the solutions are just not built in a way that is responsive to dramatic re-architecture. Now could we see this happening? Could a giant SaaS company just pause selling for two years and completely re-architect its application stack?

Sure, but I just don’t see that happening. And so if you look at any analysis of what’s happening in AI software spend, it’s something like 8x year-over-year growth between 2023 and 2024 on just pure spend. It’s gone from a couple of hundred million dollars to well over a billion in just a year’s time…

…Modest

If you listen to AWS, one of the fascinating things they say is they call AWS a logistics business.

I don’t think anyone externally would look at cloud computing and say, oh yeah, that’s a logistics business. But their point is essentially that what they have to do is forecast demand and build supply on a multi-year basis to accommodate it.

And over 20 years they’ve gotten extraordinarily good at that. What has happened in the last two years, and I talked about this last time, is that you have had an enormous surge in demand hitting inelastic supply, because you can’t build data center capacity in three weeks. And so if you get back to a more predictable cadence of demand, they can look at it and say, okay, we know now where the revenue generation is coming from.

It’s coming from test time, it’s coming from Chetan and his companies rolling out. Now we know how to align supply with that. Now it’s back to a logistics business. Now it’s not grab every mothballed nuclear site in the country and try to bring it online.

And so instead of this land grab, I think you maybe get a more reasonable, sensible, methodical rollout. And I would actually guess that if this path is right, inference overtakes training much faster than we thought and gets much bigger than we may have suspected.

But I think the path there in the network design is going to look very different and it’s going to have very big ramifications for the people who were building the network, who were powering the network, who were sending the optical signals through the network. And all of that, I think, has not really started to come up in the probability-weighted distributions of a huge chunk of the public market.

And look, I think most people overly fixate on NVIDIA because they are sort of the poster child of this, but there are a lot of people downstream from NVIDIA that will probably suffer more because they have inferior businesses. NVIDIA is a wonderful business doing wonderful things. They just happen to have seen the largest surge in surplus. I think that there are ramifications far, far beyond who is making the bleeding edge GPU, even though I do think there will be questions about, okay, does this new paradigm of test time compute allow for customization at the chip level much more than it would have if we were only scaling on pre-train…

…Modest

If you think about a training exercise, you’re trying to utilize the chips at the highest possible percentage for a long period of time. So you’re trying to put 50,000 or 100,000 chips in a single location and utilize them at the highest rate possible for nine months. What’s left behind is a hundred-thousand-chip cluster that, if you were to repurpose it for inferencing, is arguably not the most efficient build, because inference is peaky and bursty and not consistent.

And so this is what I’m talking about that I just think from first principles you are going to rethink how you want to build your infrastructure to service a much more inference focused world than a training focused world. And Jensen has talked about the beauty of NVIDIA is that you leave behind this in place infrastructure that can then be utilized.

And in a sunk cost world you say, sure, of course if I’m forced to build a million chip supercluster in order to train a $50 billion model, I might as well sweat the asset when I’m done. But from first principles it seems clear you would never build a 350,000 chip cluster with 2 1/2 gigawatts of power in order to service the type of request that Chetan’s talking about.

And so if you end up with much more edge computing with low latency and high efficiency, what does that mean for optical networking? What does that mean for the grid? What does that mean for the need for on site power versus the ability to draw from the local utility?…

…Chetan

There’s a semiconductor company called Cerebras, and they recently announced that their inference on Llama 3.1 405B can generate 900-plus tokens per second, which is a dramatic order-of-magnitude increase. I think it’s something like 70 or 75 times faster than GPUs for inference, as an example. And so as we move to the inference world, the semiconductor layer, the networking layer, et cetera, there’s tons of opportunities for startups to really differentiate themselves…
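For scale, the quoted throughput claim implies the following rough figures (our arithmetic; the GPU baseline is simply backed out of the speedup quoted above):

```python
# Backing out what the quoted Cerebras figures imply, using only the numbers above.
cerebras_tok_per_s = 900
speedup_low, speedup_high = 70, 75

gpu_baseline_low = cerebras_tok_per_s / speedup_high    # ~12 tokens/s
gpu_baseline_high = cerebras_tok_per_s / speedup_low    # ~13 tokens/s

print(f"Implied GPU baseline for Llama 3.1 405B: ~{gpu_baseline_low:.0f}-{gpu_baseline_high:.0f} tokens/s")
print(f"Time for a 1,000-token answer: ~{1000 / cerebras_tok_per_s:.1f}s on Cerebras "
      f"vs ~{1000 / gpu_baseline_low:.0f}s at the implied GPU baseline")
```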

…Modest

On a less dramatic view, the way I think about this: there’s AlphaGo, which famously made that move no one had ever seen, I think it’s move 37, that everybody was super confused about, and it ended up winning. And another example I love, because I like poker, is Noam Brown, who talked about his poker bot. It was playing high-stakes no-limit, and it continually over-bet with dramatically larger sizes than pros had ever seen before.

And he thought the bot was making a mistake. But ultimately it destabilized the pros so much. Think about that: a computer so destabilized humans in their approach that they have, to some extent, now taken over-betting into their own game.

And so those are two examples where, if we think about pre-training being bounded by the data set that we’ve given it, and if we don’t have synthetic data generation capabilities, you have algorithms doing something outside of the bounds of human knowledge. And that’s what’s always been confusing to me about this idea that LLMs on their own could get to superintelligence: functionally, they’re bounded by the amount of data we give them up front.

5. Will China Take Over the Global Auto Industry? – Brad Setser

China has, according to the New York Times, the capacity to produce over 40 million internal combustion engine (ICE) cars a year.

Goldman Sachs thinks China will also have the capacity to produce around 20 million electric vehicles by the end of 2024…

…China’s internal market is around 25 million cars, and not really growing—so rising domestic EV sales progressively free up internal combustion engine capacity for export. Domestic demand for traditional cars is likely to be well under 10 million cars next year, given the enormous shift toward EVs now underway inside China…
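Putting the capacity and demand figures in the excerpt side by side (our arithmetic; the implied domestic EV demand is backed out of the other numbers):

```python
# Rough capacity-versus-demand arithmetic from the figures quoted above (millions of cars per year).
ice_capacity = 40          # NYT estimate of ICE capacity
ev_capacity = 20           # Goldman Sachs estimate of EV capacity by end-2024
internal_market = 25       # total domestic demand, roughly flat
domestic_ice_demand = 10   # "well under 10 million" next year; 10 used as an upper bound

domestic_ev_demand = internal_market - domestic_ice_demand   # ~15, an implied figure
ice_capacity_freed = ice_capacity - domestic_ice_demand      # at least 30 million cars per year

print(f"ICE capacity not absorbed at home: at least {ice_capacity_freed}M cars/year")
print(f"EV capacity vs implied domestic EV demand: {ev_capacity}M vs ~{domestic_ev_demand}M")
print(f"Total capacity vs total domestic market: {ice_capacity + ev_capacity}M vs {internal_market}M")
```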

…Historically, the autos market has been largely regional (setting aside trade in luxury cars, where volumes are smaller). Most cars sold in China are made in China, most cars sold in Europe are produced in Europe, most cars sold in North America are produced in North America, and so on. The U.S. did import a few million cars, on net, from Asia, and China imported a million or so luxury cars from Europe, but those were the exceptions rather than the rule.

That could change, absent hefty restrictions on Chinese auto imports (like the 100 percent tariff the U.S. now levies on EVs imported from China).

The global market—with massive overcapacity in China’s internal combustion engine (ICE) sector, massive capacity expansion in China’s EV sector, effectively unlimited credit for Chinese manufacturing firms from China’s state banks, and a Chinese yuan that is weaker against the dollar than it was back in 2008—is pushing for global auto manufacturing to become more like global electronics manufacturing, with a concentration of global production in a single region and, for that matter, a single country…

…Overcapacity in China’s automotive sector is not, in fact, all that new.

China’s traditional automotive sector was dominated by the joint ventures (“JVs”) formed by the large foreign firms and their (typically state-owned) Chinese partners. Chinese auto demand took off after the global financial crisis, and global firms responded by massively expanding their Chinese production capacity, as only the German luxury marques were interested in paying the 25 percent tariff and supplying the Chinese market from abroad.

But demand growth eventually slowed, and by 2018, the Wall Street Journal was reporting that the Chinese market was oversupplied…

…China’s EV industry—like the EV industries in the U.S. and Europe—initially received substantial state backing. Chinese EV manufacturers benefited from downstream subsidies that built out China’s battery and battery chemical industry, as well as access to the world’s cheapest steel. EV firms benefited from cheap state financing—both equity injections from a myriad of state-backed funds and loans from state banks that (still) have to meet lending quotas.

Moreover, China was quite explicitly protectionist in the application of its “consumer” EV subsidies.

Only EVs that were on state lists of qualifying vehicles were eligible for the subsidy, and the subsidy was only provided to cars that were made in China…

…And initially, only cars that were made in China with a battery made in China by a Chinese firm qualified for the lists…

…The only exception to the basic rule that qualifying for the list required using a battery made in China by a Chinese firm only confirmed the broad pattern of discrimination: Chinese-owned Volvo was allowed to use a Korean battery in one of its early EVs.

State support has not disappeared in any way as China’s EV industry has taken off. Looking at direct cash subsidies from the central government to the manufacturers misses the myriad of ways China, Inc. helps out firms producing in China…

…Nio received a significant ($1.9 billion) equity investment from the City of Hefei and the Province of Anhui, helping to offset ongoing losses. That equity injection was on top of state support for a factory in Hefei, which The New York Times reports was effectively a gift from the local government.

“‘The local government provided the land and the building’, said Ji Huaqiang, Nio’s vice president for manufacturing. ‘Nio does not own the factory or the land — it is renting, but the factory was custom built for Nio’”

That kind of support explains how Nio managed to build out its EV capacity even when its existing factories weren’t really being used that much:

“Nio’s two factories give it the capacity to assemble 600,000 cars a year, even though its annual rate of sales this autumn [2023] is only about 200,000 cars. Nio is nonetheless already building a third plant.”…

…What’s even more striking is that the investments that built out China’s EV capacity came in a market that was already saturated with modern auto production capacity. That kind of investment wouldn’t have taken place without state guidance and support, support that was intended both to develop an indigenous Chinese industry (see Made in China 2025) and to support a green transition that would reduce Chinese dependence on imported fossil energy. It was the result of policy driven by the central government and backed financially by all levels of government. It also worked: China is now the world leader in EVs and batteries…

…If the world’s global firms can only compete with Chinese firms by using Chinese batteries and Chinese parts, that will hollow out much of the automotive industries of Europe and North America—a European brand on a Chinese-made car with a Chinese battery and drive train won’t sustain the current European auto supply chain or current European employment in the auto industry.


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in ASML, Meta, and TSMC. Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com