What We’re Reading (Week Ending 19 May 2024) - 19 May 2024
Reading helps us learn about the world, and it is a really important aspect of investing. The late Charlie Munger even went so far as to say, “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 19 May 2024):
1. Why Xi Jinping is afraid to unleash China’s consumers – Joe Leahy
Both inside and outside China, there is a strongly held view among many economists that the country could secure a further period of robust growth if it were able to boost consumption by its own citizens. Indeed, faced with a property crisis, President Xi Jinping has taken some one-off measures to stimulate consumption to offset a fall in domestic demand.
But Xi has eschewed more radical medicine, such as cash transfers to consumers or deeper economic reforms. His latest campaign is instead to unleash “new quality productive forces” — more investment in high-end manufacturing, such as EVs, green energy industries and AI.
According to analysts, the reasons for the lack of more radical action on consumption range from a need to generate growth quickly by pumping in state funds — this time into manufacturing — to the more deep-seated difficulties of reforming an economy that has become addicted to state-led investment.
Ideology and geopolitics also play roles. For Xi, China’s most powerful leader since Mao Zedong, the greater the control his country exerts over global supply chains, the more secure he feels, particularly as tensions rise with the US, analysts argue. This leads to an emphasis on investment, particularly in technology, rather than consumption.
Under Xi, security has also increasingly taken precedence over growth. Self-reliance in manufacturing under extreme circumstances, even armed conflict, is an important part of this, academics in Beijing say…
…The pressure on Beijing to find a new growth model is becoming acute, analysts say. China has become too big to rely on its trading partners to absorb its excess production.
“The exit strategy has to be, at the end of the day, consumption — there’s no point producing all this stuff if no one’s going to buy it,” says Michael Pettis, a senior fellow at the Carnegie Endowment in Beijing.
Few projects capture Xi’s vision for 21st-century Chinese development as well as Xiongan, a new city being built on marshlands about 100km from Beijing…
…Xiongan unites many of Xi’s favourite development themes. Through vast investment in mega-infrastructure projects such as a high-speed rail hub, Xiongan aims to bring state-owned enterprises, universities and entrepreneurs together to concentrate on high-technology innovation, from autonomous vehicles and life sciences to biomanufacturing and new materials. As of last year, about 1mn people were living there, $74bn had been invested and 140 companies had set up there, Beijing says.
Conspicuously absent from the city plans are strategies to encourage the thing China’s economy lacks most — domestic consumption. In guidelines released in 2019 for Xiongan by Xi’s cabinet, the State Council, there was no mention of the term “consumption”, except for “water consumption”…
…China’s investment to gross domestic product ratio, at more than 40 per cent last year, is one of the highest in the world, according to the IMF, while private consumption to GDP was about 39 per cent in 2023 compared to about 68 per cent in the US. With the property slowdown, more of this investment is pouring into manufacturing rather than household consumption, stimulating oversupply, western critics say…
…Economists suspect that behind the rhetoric, the investment in manufacturing is partly pragmatic. With the property market still falling three years after the crisis began, and many indebted provinces ordered to suspend large infrastructure projects, Xi needs to find growth somewhere to meet his 5 per cent target for this year.
“The bottom line is they want growth in output and they want the jobs associated with that growth,” says Stephen Roach, a faculty member at Yale and former chair of Morgan Stanley Asia. He says when “they’re clamping down on property, it doesn’t leave them with much choice but to go for a production-oriented growth stimulus”…
…In areas vital to China’s national security, the country needed supply chains that “are self-sufficient at critical moments”, he said. “This will ensure the economy functions normally in extreme circumstances.”
HKU’s Chen says China no longer measures its “national power” in purely economic terms “but more importantly, in terms of military . . . capacity. And this is why manufacturing is very important”.
He says in this vision of the world, consumption is a lower priority…
…The Rhodium Group argues that some of the loans that flowed into the industrial sector last year went to local government finance vehicles, the heavily indebted off-balance sheet investment holding companies of provinces and municipalities.
While large sums still went to manufacturers, they “do not have a strong appetite to expand capacity given falling prices”, Rhodium said in a report.
Economists say that for consumers to feel comfortable to spend more, particularly after the property slump, China needs to step up its development of social welfare programmes and healthcare. While China has made strides in building out its public pension and healthcare systems, they are still lacking.
But such solutions would take a long time to boost consumer confidence and would require massive new funding from government coffers that are running dry.
Greater consumption would also necessarily mean reducing the role of manufacturing or investment in the economy. This could be done by unwinding China’s intricate system of subsidies to producers, which includes government infrastructure investment, access to cheap labour, land and other credit, says Pettis.
But if that were done in big-bang fashion, the share of household consumption in GDP would rise while overall GDP contracted as manufacturers suffered. That is obviously not a politically palatable option for Xi.
2. Strategy Reviews – John H. Cochrane
After an extended and collective deliberation, the Fed adopted a new strategy framework known as Flexible Average Inflation Targeting. This framework was explicitly designed around a worldview that “the federal funds rate is likely to be constrained by its effective lower bound more frequently than in the past,” and a consequent judgment that “downward risks to employment and inflation have increased.” What followed was a shift to “inclusive” employment, a return to the old idea that economic “shortfalls” can be filled, and a promise not to preempt future inflation but rather to let inflation run hot above 2% to make up past shortfalls. The hope was that these promises of future dovishness would stimulate demand in the short run.
In short, the Fed adopted an elaborately-constructed new-Keynesian forward-guidance defense against the perceived danger of deflation and stagnation at the zero bound.
No sooner was the ink dry on this grand effort, however, than inflation shot up to 8%, and the zero bound seemed like a quaint worry. Something clearly went drastically wrong. Naturally, the first question for a strategy review is, how can we avoid having that happen again?
Inflation eased without interest rates substantially higher than inflation or a large recession. I think I have a (and the only) clear and simple explanation for that, but I promised not to digress into a fiscal theory today. Still, inflation is persistently high, raising the obvious worry that it’s 1978 again. Obviously, central banks have a range of worries on which to focus a new strategy, not just a return to a long-lasting zero bound. (Though that could happen too.)…
…React or guide? It seems clear to me that policy will have to be described more in terms of how the Fed will react to events, rather than in standard forward guidance terms, unconditional promises of how the funds rate will evolve. It will involve more “data-dependent” rather than “time-dependent” policy.
In part, that must come, I think, as a result of the stunning failure of all inflation forecasts, including the Fed’s. Forecasts did not see inflation coming, did not see that it would surge up once it started, and basically always saw a swift AR(1) response from whatever it was at any moment back to 2%. Either the strategy review needs to dramatically improve forecasts, or the strategy needs to abandon dependence on forecasts to prescribe a future policy path, and thus just state how policy will react to events and very short-term forecasts. I state that as a question for debate, however…
…Fiscal limitations loom. Debt to GDP was 25% in 1980, and still constrained monetary policy. It’s 100% now, and only not 115% because we inflated away a bunch of it. Each percentage point of real interest rate rise is now quickly (thanks to the Treasury’s decision to issue short, and the Fed’s QE which shortened even that maturity structure) a percentage point extra interest cost on the debt, requiring a percent of GDP more primary surplus (taxes less spending). If that fiscal response is not forthcoming, higher interest rates just raise debt even more, and will have a hard time lowering inflation. In Europe, the problem is more acute, as higher interest costs could cause sovereign defaults. Many central banks have been told to hold down interest rates to make debt more sustainable. Those days can return…
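To make that arithmetic concrete, here is a back-of-the-envelope version (our sketch using the passage’s round numbers, not Cochrane’s own formula):

$$\Delta\left(\frac{\text{Interest cost}}{\text{GDP}}\right) \approx \Delta r \times \frac{\text{Debt}}{\text{GDP}}$$

With debt at roughly 100% of GDP, a one-percentage-point rise in real rates eventually adds about one percentage point of GDP to interest costs, and hence to the primary surplus required. At 1980’s 25% debt-to-GDP, the same rate rise cost only about a quarter of a point.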
…Ignorance. Finally, we should admit that neither we, nor central banks, really understand how the economy works and how monetary policy affects the economy. There is a complex verbal doctrine that bounces around central banks, policy institutions, and private analysts, asserting that interest rates have a relatively mechanical, reliable, and understood effect on “spending” through a “transmission mechanism” that though operating through “long and variable lags” gives the Fed essentially complete control over inflation in a few years. The one thing I know from 40 years of study, and all of you know as well, is that there is no respectable well-tested economic model that produces anything like that verbal doctrine. (More here.) Knowing what you don’t know, and that nobody else does either, is knowledge. Our empirical knowledge is also skimpy, and the historical episodes underlying that experience come with quite different fiscal and financial-structure preconditions. 1980 was a different world in many ways, and also combined fiscal and microeconomic reform with high interest rates.
3. Big Tech Capex and Earnings Quality – John Huber
Capex is not only growing larger; its rate of growth is set to accelerate this year as these companies invest in the AI boom. Combined capex at MSFT, GOOG and META is set to grow around 70% in 2024. As a percentage of sales, capex will grow from 13% of sales in 2023 to around 20% in 2024…
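As a quick consistency check on those figures (our arithmetic, not Huber’s): if combined capex grows 70% while rising from 13% to 20% of sales, the implied sales growth is

$$\frac{\text{Sales}_{2024}}{\text{Sales}_{2023}} = 1.70 \times \frac{13\%}{20\%} \approx 1.105,$$

i.e. roughly 10% revenue growth, so capex is growing about seven times faster than sales.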
…Bottom line: the other Big Techs are getting far more capital intensive than they have in the past. Their FCF is currently lagging net income because of the large capex, and this will eventually flow through to much higher depreciation charges in the coming years.
This is not necessarily worrying: if the returns on these investments are good, then sales growth will be able to absorb these much higher expenses. But this is not a sure thing, so I like to use P/FCF metrics, as I think a large majority of the assets they’re investing in will need to be replaced. This means the capex levels we see currently could be recurring. So, while the P/E ratios range from 25 to 35, the P/FCF ratios range from 40 to 50.
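The gap between those two multiples is just the net-income-to-FCF conversion ratio (an illustration with round numbers, not Huber’s own calculation):

$$\frac{P}{\text{FCF}} = \frac{P}{E} \times \frac{\text{Net income}}{\text{FCF}}$$

A stock at 30 times earnings whose FCF runs at about 70% of net income trades at roughly 30 / 0.7 ≈ 43 times FCF, squarely within the 40-to-50 range quoted above.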
Again, if the investments are able to earn good returns, then profit margins will remain intact. But one thing to notice is that FCF margins (while very strong) have not kept up with GAAP profit margins: at MSFT, for example, FCF margins have declined slightly from 28% to 26% over the last decade while net margins have expanded from 25% to 36%, leaving GAAP profit margins far in excess of FCF margins. Eventually, as growth slows, these margins will tend to converge as depreciation “catches up” to cash capex spend. Whether net margins come down or FCF margins move up simply depends on the returns on capital earned and the growth it produces.
I’m not predicting a poor result, but I’m mindful of how difficult it will be given how different the companies are today. They used to grow with very little capital invested, but now they have a mountain of capital to deploy, which is obviously much harder at 7 times the size:…
…I don’t think anyone (including management) yet knows what the returns will be on the $150 billion of investments that these three companies will make in 2024. They are optimistic, but it’s not clear cut to me.
Think about how much profit needs to be generated annually to earn acceptable returns on this capex: a 10% return would require $15 billion of additional after-tax profits in year 1. As Buffett points out, if you require a 10% return on a $150 billion investment but get nothing in year 1, then you’d need $32 billion in year 2, and just one more year of deferred returns would require a massive $50 billion profit in year 3.
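The compounding in that example is easy to verify (a worked version using the passage’s $150 billion and 10%): if the first profits arrive only in year $n$, the catch-up profit required that year is

$$C\left[(1+r)^{n}-1\right] = \$150\text{bn} \times \left(1.1^{n}-1\right),$$

which gives \$15bn for year 1, \$150bn × 0.21 ≈ \$32bn for year 2, and \$150bn × 0.331 ≈ \$50bn for year 3, matching the figures above.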
What’s staggering is that the above is the return needed to earn 10% on just one year’s worth of capex. Even if we assume that capex growth slows from 70% this year down to 0% in 2025 and stays there, MSFT, GOOG and META will invest an additional $750 billion of capital over the next 5 years!
4. A Few Short Stories – Morgan Housel
Thirty-seven thousand Americans died in car accidents in 1955, six times today’s rate adjusted for miles driven.
Ford began offering seat belts in every model that year. It was a $27 upgrade, equivalent to about $190 today. Research showed they reduced traffic fatalities by nearly 70%.
But only 2% of customers opted for the upgrade. Ninety-eight percent of buyers preferred to remain at the mercy of inertia.
Things eventually changed, but it took decades. Seatbelt usage was still under 15% in the early 1980s. It didn’t exceed 80% until the early 2000s – almost half a century after Ford offered them in all cars.
It’s easy to underestimate how social norms stall change, even when the change is an obvious improvement. One of the strongest forces in the world is the urge to keep doing things as you’ve always done them, because people don’t like to be told they’ve been doing things wrong. Change eventually comes, but agonizingly slower than you might assume…
…When Barack Obama discussed running for president in 2005, his friend George Haywood – an accomplished investor – gave him a warning: the housing market was about to collapse, and would take the economy down with it.
George told Obama how mortgage-backed securities worked, how they were being rated all wrong, how much risk was piling up, and how inevitable its collapse was. And it wasn’t just talk: George was short the mortgage market.
Home prices kept rising for two years. By 2007, when cracks began showing, Obama checked in with George. Surely his bet was now paying off?
Obama wrote in his memoir:
George told me that he had been forced to abandon his short position after taking heavy losses.
“I just don’t have enough cash to stay with the bet,” he said calmly enough, adding, “Apparently I’ve underestimated how willing people are to maintain a charade.”
Irrational trends rarely follow rational timelines. Unsustainable things can last longer than you think…
…John Nash is one of the smartest mathematicians to ever live, winning the Nobel Prize. He was also schizophrenic, and spent most of his life convinced that aliens were sending him coded messages.
In her book A Beautiful Mind, Sylvia Nasar recounts a conversation between Nash and Harvard professor George Mackey:
“How could you, a mathematician, a man devoted to reason and logical proof, how could you believe that extraterrestrials are sending you messages? How could you believe that you are being recruited by aliens from outer space to save the world?” Mackey asked.
“Because,” Nash said slowly in his soft, reasonable southern drawl, “the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.”
This is a good example of a theory I have about very talented people: No one should be shocked when people who think about the world in unique ways you like also think about the world in unique ways you don’t like. Unique minds have to be accepted as a full package.
5. An Interview with Databricks CEO Ali Ghodsi About Building Enterprise AI – Ben Thompson and Ali Ghodsi
So you said you came over to the U.S. in 2009. Did you go straight to UC Berkeley? There’s some great videos of you giving lectures on YouTube. You’re still an adjunct professor there. Do you ever teach anymore or is this a, “Homeboy made good, we’ll give him the title forever”, sort of situation?
AG: No, I teach about a class a year and I still really enjoy doing that. I imagine if I had nothing to do, that’s a job I would actually enjoy doing.
So yeah, I came to the United States just to stay here one year and do research at UC Berkeley, and just ended up staying another year, another year, another year. And the timing was — we didn’t know it at the time, but Dave Patterson, who was a professor at UC Berkeley and is now a Turing Award winner, which is essentially the Nobel Prize in computer science, said at the time, “We’ve had Moore’s Law, but we no longer know how to make computers faster by cramming in more transistors. That era is over, so computers are not going to get any faster”, and we know he was right: they’ve all been between two and four gigahertz since then.
So we need the new computer, and the new computer is the cloud, and it also needs new software, so we built all this software stack — the era of data and AI. So it was the perfect time. I always regretted, “Why was I not born in the ’50s or ’60s when computers happened?” — well, it actually kind of happened again in ’08, ’09, ’10, and Berkeley was at the forefront of that. So we were super lucky to see that kind of revolution and be part of it…
…The general idea is, you mentioned you started out with Mesos, where you needed to compute in parallel instead of serially, so you have to have a cluster of computers, not just one. Spark lets you basically do the same thing with data: spread it out over a huge number of computers. You can end up with massive amounts of data, structured and unstructured, in what people will call a “data lake”. There’s a data lake, there’s a data warehouse, there’s a Data Lakehouse. Walk me through the distinction and where that applies to Databricks and its offering.
AG: At the time, the world was kind of split. Those that have structured data, structured data are things that you can represent in tables with rows and columns, those were in data warehouses and you could connect your BI tools, business intelligence tools, that lets you ask questions about the past from those rows and columns. “What was my revenue last week in different regions, by different products, by different SKUs?”, but you couldn’t ask questions about the future.
Then at the same time, we had these future-looking workloads, which were, “Okay, we have all kinds of text, images, and unstructured data coming into the enterprise,” and that you couldn’t store in these structured tables (they cannot be represented as tables of rows and columns), so those you stored in what’s called data lakes. But then the good news was, if you knew what you were doing, you could ask questions about the future: “What’s my revenue going to be next week? Which customer is going to churn next?”. But these worlds were living completely separately, securing them was very hard, and there were a lot of redundant stacks being built up at the same time.
Our idea was, 1) how do we unify this, and 2) how do we disrupt the existing ecosystem? How do we create the company that’s disruptive? And our idea was: what if we have open source technology, everybody stores all their data, both structured and unstructured, in the lake, which is basically almost free storage from the cloud vendors, but we standardize on an open source format, so it almost becomes like USB: you can plug anything in there. Then we build an engine that can do both the BI stuff, the backwards-looking questions, and the futuristic AI stuff, and that’s what we call the Lakehouse, which is a portmanteau of data lake and data warehouse. The marketing firms we talked to and anyone we’d ask said, “This is a terrible idea”…
…So you’ve been using the word AI a lot. Did you use the word AI a lot five years ago?
AG: I think we used the word ML quite a bit.
Yeah, machine learning. That’s right, there’s a big word branding moment. I mean, there was the ChatGPT moment, so I guess there’s two questions. Number one, did that shift how you were thinking about this space, or was this already super clear to you? But then number two, I have to imagine it fundamentally changed the way your customers were thinking about things and asking about things.
AG: Yeah, from day one we were doing machine learning. We actually built MLlib as part of Spark already before we actually started Databricks. Actually the first use case of Spark in 2009 was to participate in the Netflix competition of recommending the best movie, and we got the second prize actually, we didn’t get the first prize.
The whole point about being able to distribute broadly and do things in a highly parallel manner, I mean we’re basically in that world.
AG: Exactly. Well, a lot of people also use those parallel systems just to do backwards-looking processing, like a data warehouse that tells you, “Tell me everything about the past”, and it’s great to see trend lines about the past, but to use this kind of more advanced statistical approach, that’s when you venture into machine learning. We were doing it already in 2012, ’13. I tried to push the company to use the phrase AI instead of ML; the most hardcore academics in the company were against it. They said that AI was a buzzword but I said, “No, I think that’s actually what resonates with people”. But at the same time we were seeing more and more that deep neural networks, as they get stacked, do better and better.
Around 2018 is when we started seeing especially language processing, natural language processing, getting more and more applications on the platform. We saw insurance companies using it to analyze huge amounts of text to assess risks, we saw translation starting to happen, we saw pharma companies analyzing large amounts of electronic medical records that were written, unstructured text. So it was pretty clear that something was going on with NLP [Natural Language Processing], and that just accelerated during the pandemic. So we saw it; we already had over a thousand customers using these kinds of transformer models. So when ChatGPT came out, we kind of thought it was a nothing burger, but of course we were wrong in that it was an absolute awareness revolution.
Yes, exactly.
AG: What we took for granted was not what the rest of the world was taking for granted. So we feel like the world woke up to AI in November 2022 with ChatGPT, though the truth is it had been going on for 20 years.
That’s what strikes me. The biggest impact is, number one, you had the total rebranding of everything to AI (my washing machine now has AI, what a miracle). But there’s also the fact that you went through this when you started with Spark: you thought it was a great idea, and no one knew what it was. Now suddenly people are knocking on your door asking, “We have data on your thing, can we run ChatGPT on it?” Is that how those conversations went?
AG: Yeah, I mean literally before ChatGPT, I would tell the marketing department to tone down the AI language because customers would say, “Hey, this AI stuff is futuristic, we have concrete problems right now with data that we need to solve”, so I actually shot down a marketing campaign and marketing was really upset about it, which said, “Customer X is a data and AI company, Customer Y is a data and AI company”. They had it ready to go and I shot it down and I said, “We don’t want to push so hard on AI because people don’t really want AI”, and then literally after ChatGPT happened, I told them, “Hey, that campaign from a couple of years ago, maybe we should run it now” — which we did actually and people loved it. So yeah, it’s just the market was just not ready…
…All right, number three: Databricks solves Mosaic ML’s need to build a sales force, and Mosaic ML solves Databricks’ need to build a sustainable differentiated business around an open source project.
AG: Yes, I think you are 99% right. I would modify that last sentence to say —
I didn’t give you enough credit for how much you had differentiated to date?
AG: No, I actually think that you were kind of spot on, but with the open source part, I would say that what really gave us that was Mosaic ML having a research team that was really deep in LLM research and AI; researchers like that were hard to come by at the time and very, very hard to hire. And then the know-how to customize LLMs on your data in a secure way.
How does that work? How do you do that?
AG: So this is what their specialty was. When everybody else was building one giant model or a few giant models that are supposed to be very smart, these guys, their business model was, “We build it again and again and again, custom either from scratch or from an existing checkpoint, you tell us or we can fine tune it, but we can help you build an LLM for you and we will give you the intellectual property of that LLM and its weights”. That way you as a customer can compete with your competitors and in the long run you become a data and AI leader just like our billboards that I had banned a few years earlier say. You’re going to be a data and AI company. It doesn’t matter if you’re a pharma company or a finance company or a retail company, you’re actually going to be a data and AI company, but for that you need intellectual property. Elon Musk is not just going to call OpenAI for his self-driving capabilities, he needs to have his own. Same thing is going to be true for you in finance, retail, media. So that was their specialty, but we had the data.
Is that actually true though? Do they actually need to have their own intellectual property? My perception, and I picked up on this at some sort of conference with a bunch of CEOs, is that they had this view of, “We’ve had this data for years, we were right to hold onto it, this is so valuable!”, and I’m almost wondering, are you now so excited about your own data that you’re going to be overprotective of it? You’re not going to want to do anything; you’re actually going to be sort of paralyzed by, “We have so much value here, we have to do it ourselves”, and miss out on leveraging it sooner rather than later because you’re like, “It has to be just us”.
AG: No, I do think that people have now realized how valuable their data is, there’s no doubt about that, and it is also true, I believe in it. The way I think of it is that you can think of the world as two kinds of parallel universes that coexist these days with LLMs. We’re super focused on one, which is the kind of open Internet, the whole crawl of everything that’s in it and all of the history of mankind that has been stored there. Then you’re building LLMs that are trained on that, and they become intelligent and they can reason and understand language; that’s what we’re focused on.
But we’re ignoring this other parallel universe, which is every company on the planet: when you join, they have you sign an NDA, an employee agreement, and that gives you access to all this proprietary data that they have on their customers and everything else, and they have always been protective of that. The LLMs today that we are training and talking about don’t understand that data; they do not understand the three-letter acronyms in any organization on the planet.
So we do the boring LLMs and the boring AI for those enterprises. We didn’t have quite the muscle to do it without Mosaic, they really understood how to build those LLMs, we had the data already. So we had the data and we had the sales force, Mosaic did not have the data, they did not have the sales force, they did have the know-how of how to build those custom models.
I don’t think that the companies are hamstrung and they’re not going to do anything with it, they want to do things with it. I mean, people are ready to spend money to do this. It’s just that I feel like it’s a little bit of a 2007 iPhone moment. iPhone comes out, every company on the planet says, “We have to build lots of iPhone apps, we have to”. Then later it turns out, “Well, okay, every company building a flashlight app is maybe not the best use of resources, in fact, maybe your iPhone will just have a flashlight in it”. So then it comes back to what special data do you have that no one else has, and how can we actually monetize that?
How does it actually work to help companies leverage that? So you released a state-of-the-art open LLM, DBRX, pretty well regarded. Do you do a core set of training on open data on whatever might exist and then you’d retrain it with a few extra layers of the company’s proprietary data and you have to do that every time? How modular is that? How does that actually work in practice?
AG: Yeah, there’s a whole slew of different techniques, ranging from very lightweight fine-tuning techniques, the most popular of which is called LoRA, low-rank adaptation, to actually training a chunk of the model, where you take an existing model that’s already trained and working and you customize a bunch of its layers, to what’s called CPT, continued pre-training, in which case you train all of the layers of an existing model that’s already baked and ready. It costs more the further you go, all the way to, if you’re doing something really different, that is, if the domain of your data set is significantly different, what’s called pre-training, which is training the model from scratch. If you’re a SaaS application and LLMs are the core of the offering, you probably want a model pre-trained from scratch. We can do all of those.
I would say the industry, and the research, is not a hundred percent clear today on when you should use which technique, and where. We have a loose idea that if you don’t have huge amounts of data and it’s similar in domain to what the LLM can already do, then you can probably use the more lightweight ones, and if your data is very different and significant in size, then the lightweight mechanisms are probably not good for you, and so on. So we have a research team that can do this really, really well for enterprises. But I think a lot of progress is going to happen in the next few years to determine how we can do this automatically, and how we know when to use which. And there might be new techniques that are developed too.
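For a concrete picture of the lightweight end of that spectrum, here is a minimal LoRA fine-tuning sketch using Hugging Face’s transformers and peft libraries. The model name and hyperparameters are illustrative assumptions on our part, not Databricks’ or Mosaic’s actual stack:

```python
# Minimal LoRA sketch: the base model's weights stay frozen while small
# low-rank adapter matrices, injected into the attention projections,
# are trained. Model name and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # hypothetical choice of checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # which layers get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights
# From here, `model` trains like any causal LM, e.g. with transformers' Trainer.
```

This illustrates why LoRA sits at the cheap end of the spectrum Ghodsi describes: only the tiny adapters are updated, whereas continued pre-training updates every layer of an existing model and pre-training from scratch starts from random weights.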
What’s the trade-off? I imagine you talk to a company: “We absolutely want the most accurate model, for sure, we want it totally customized to us.” And then you’re like, “Okay, that’s going to cost XYZ, but then also to serve it is going to cost ABC.” The larger a model is, the more expensive it is to serve, and so your inference costs are just going to overwhelm even the upfront costs. What’s that discussion and trade-off like that you’re having with your customers?
AG: Well, the vast majority have lots of specific tasks that they want to do. So again, a lot of people are thinking of things like ChatGPT, which is sort of completely general-purpose, open-ended, ask me anything. But enterprises typically have, “Okay, I want to extract labels from this corpus of data, and I want to do it like ten times a day, every day”, or, “I want to tag all of these articles with the right tags and I want to do that very accurately”. For those specific tasks, it turns out you can have small models. The smaller size of the model makes it much cheaper, and that matters at scale. They are really, really concerned about the quality and accuracy of that specific task, but the model doesn’t need to nail a super balanced answer to the question of whether there was election fraud or not in 2020.
(laughing) Right.
AG: It just needs to really extract those tags really, really well, and there are techniques you can use to do that. There is a way you can actually have your cake and eat it too, assuming that the task you want to do is somewhat narrow.
But we also have customers that say, “No, I’m building a complete interactive general-purpose application in, say, many of the dialects of India, and I want to do that, and existing models are not very good at that, help me do that”. Then you have to go for a bigger model, but bigger is usually more expensive. Of course, we are using the mixture-of-experts architecture, which we think is where the world is headed and which is what people think GPT-4 was based on, but we’ve also seen with Llama 3 from Meta that dense models, which are not mixture of experts, are also excellent and doing really, really well…
…Is there a difference between domestic and international in terms of the aggressiveness with which they’re approaching AI?
AG: Yeah, I would say that China is moving very, very fast on AI. In some Asian countries, there’s less regulation. Europe, I feel, has always lagged a few years behind the United States; there are also competitive concerns in Europe, with so many American companies, cloud companies and so on. So Europe is a little bit more regulated and usually lags the United States by a few years.
That’s what we’re seeing, but there are regional differences. India is very interesting because it’s moving so fast; there are no signs of anything recession-like over there. There are markets like Brazil and so on that are doing really well. So really, you have to go case-by-case, country-by-country. We now have a significant portion of our business in Europe as well, and also a growing business in Asia and Latin America.
Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms, and Microsoft. Holdings are subject to change at any time.