What We’re Reading (Week Ending 30 June 2024) - 30 Jun 2024
Reading helps us learn about the world and it is a really important aspect of investing. The late Charlie Munger even went so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 30 June 2024):
1. An Interview with Scale AI CEO Alex Wang About the Data Pillar for AI – Ben Thompson and Alex Wang
When you saw that there was going to be a third pillar, yet no one was there, did you have any particular insights on how that would work, or was it just a matter of, “There’s a problem space that needs solving, and we’ll figure out how to solve it in the future”?
AW: Yeah. Probably the most formative, immediate experience was that I was training a neural network at the time on a single GPU in Google Cloud using TensorFlow, and it was a neural network that detected emotion based on a photo of someone’s face. All I did basically was take the tutorial for ImageNet, so literally the tutorial code for a very different image recognition algorithm, and then I just swapped out the data set and pressed “Enter”. Then 12 hours later, I had a neural network that smashed any of the other methods on this problem of recognizing emotion from images.
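(For readers who want to see how small that change is, here is a minimal sketch of the “swap the data set, keep the tutorial code” move, assuming a TensorFlow/Keras image-classification pipeline. The directory path and the emotion labels are our own hypothetical stand-ins, not Wang’s actual code.)

```python
# Minimal sketch of "swap the dataset, keep the tutorial model".
# Assumes TensorFlow/Keras; the directory layout and emotion labels are
# hypothetical stand-ins, not Scale AI's actual code or data.
import tensorflow as tf

# Tutorial-style image pipeline, pointed at an emotion dataset instead of ImageNet.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/emotions/train",        # hypothetical path: one subfolder per emotion
    image_size=(224, 224),
    batch_size=32,
)

num_classes = len(train_ds.class_names)

# Off-the-shelf tutorial architecture; nothing here is problem-specific.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes),
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# The only "insight" lives in the data: press Enter and wait.
model.fit(train_ds, epochs=10)
```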
So the takeaway there is actually, data is what matters most.
AW: Yeah. From problem to problem, data is the only thing that varies, is maybe the better way to put it, and as a programmer, you kind of realize, “Oh, actually data is what’s doing all the actual programming and my insight into the problem doesn’t actually matter, it’s just all embedded in the data set that the model ends up getting trained on”.
So I think, A) I knew that data was very important. I remember this realization: the model ended up at some level of performance, and I was like, “Okay, I’ve got to make this model better,” and so then I was like, “Okay, how am I going to improve on this data set?”, and then there was the second light bulb, which is that this is an incredibly painful process. You open up all the images and then you go through and you just look at, “Okay, are the labels for all the images correct?”, and then you’re like, “Okay, what new images should I get to pull into this?”, and then, “How am I going to get those labeled?”, and so all of the core operations, so to speak, of updating or changing or improving the data set were incredibly painful.
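(Those “core operations” are easy to picture in code. A bare-bones sketch of the manual label audit Wang describes might look like the following; the file layout and label format are invented for illustration.)

```python
# A bare-bones version of the manual audit loop described above: open each
# image, eyeball the label, flag suspect examples for relabeling. Paths and
# the label-file format are hypothetical illustrations.
import json
from pathlib import Path

from PIL import Image  # pip install pillow

# e.g. {"img_001.jpg": "happy", "img_002.jpg": "angry", ...}
labels = json.loads(Path("data/emotions/labels.json").read_text())
to_relabel = []

for filename, label in labels.items():
    Image.open(Path("data/emotions/train") / filename).show()  # one by one, in a viewer
    answer = input(f"{filename}: labeled '{label}', correct? [y/n] ")
    if answer.strip().lower() != "y":
        to_relabel.append(filename)

print(f"{len(to_relabel)} of {len(labels)} images need relabeling")
```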
So I started the company in 2016, and this was an era where there was a broad-based recognition that platforms, particularly developer platforms that made very ugly things very easy, were good businesses. It was already clear that AWS was ridiculously successful as a business, the most successful enterprise business that had ever existed, and it was also clearly recognized that Stripe was very successful, and so as a student of those companies, I realized, “Hey, we should take this incredibly messy and complicated thing that exists today, and then figure out how to turn that into a beautiful developer UX, and if we can accomplish that, then there’s a lot of value to be had here”.
There’s a lot to unpack there. Just as a broader philosophical point, do you think that insight about data still holds? So it’s not just that there are three pillars, compute, algorithm, and data, but actually data is the most important, and just like you saw before. Is it more complicated now, or is it even more the case?
AW: Yeah, I think it’s proving to be more and more the case. I was at an event with a lot of other AI CEOs recently, and one of the dinner conversations was, “Okay, compute, power, data: which do you run out of first?”, and the consensus answer around the room was data. I think the data wall has become a pretty commonly debated topic over the past few months: “Are we hitting a data wall in LLM development, or are we just fundamentally coming up against the limits of data?” Even under the most liberal assumptions (let’s assume you really did train on all human-generated text, which no sensible person does because you filter out all the bullshit), even then we will run out by 2027, 2028.
So just overall, in terms of the sheer amount of data that’s necessary to keep up with scaling, we’re very clearly hitting some meaningful wall. And if you look at a lot of the model performance improvements as of late, or the big gains in models, my personal read is that a lot of that actually boils down to data, and innovations in how to use data, and innovations on basically the data-intensive parts of the AI stack…
…How have the needs of the market shifted then? You mentioned that you were getting at this before and I interrupted. You started out with images for self-driving cars; today it’s all about these text-based models. What is entailed in going from images to text?
AW: We had an interesting mid-step here, which is, broadly speaking, I think the shift as the models have increased in intelligence is towards greater levels of expertise. But basically, we started with autonomous vehicles, and then starting in about 2020 we actually started working with the US government, and this was driven by the fact that I grew up in Los Alamos and realized that AI is likely a very important technology for our security.
We can do a side bit here: you wrote a very interesting piece on Substack in 2022, The AI War and How to Win It. Give me your thesis and why you think it’s a big deal.
AW: Yeah, I think that the basic gist is first, if you look at the long arc of human history, it is punctuated by war. In some sense, human history is all about war, and then if you look at the history of war, the history of war in some sense is all about technology. If you look particularly at the transitions from World War I to World War II to future wars, the Gulf War for example, the most significant bit, so to speak, or the largest factor in how these wars end up playing out, really is access to technology. Obviously this is deeply tied to my upbringing; growing up in Los Alamos, basically every year you have a multi-day history lesson on Los Alamos National Lab and its origins.
So then you think about, “Okay, what are the relevant technologies today that are being built?”, and there’s a host of technologies I think are important: hypersonic missiles, space technology, et cetera. But AI, you could very easily make the case, is the most important. If you could solve problem solving, then all of a sudden you have this incredibly powerful advantage.
If you believe that AI is really important for hard power, for American hard power, which is very important for ensuring that our way of life continues, then the most shocking thing for me was going through and looking at the things that the CCP [Chinese Communist Party] was saying about AI. There are CCP officials who have very literally said, “We believe that AI is our opportunity to become the military superpower of the world”. Roughly speaking, they said, “Hey, the Americans are not going to invest enough into AI, and so we’ll disrupt them by investing more into AI proportionally, and if we do so, even though we spend a lot less on our military, we will leapfrog them in capability”. As a startup person, this is the core Innovator’s Dilemma: the CCP basically had a disruptive thesis on war powered by artificial intelligence.
This is basically the idea that you’re going to have these autonomous vehicles, drones, whatever, of all types controlled by AI, versus the US having these very sophisticated but human-operated systems, and the US will fall into the trap of seeking to augment those systems instead of starting from scratch with the assumption of fully disposable hardware.
AW: Yeah, I think there are, at its core, two main theses. One is perfect surveillance and intelligence, in the CIA sense of intelligence, and this I think is not that hard to believe. Obviously, in China, they implemented cross-country facial recognition software as their first killer AI app, and it doesn’t take that much to think, “Okay, if you have that, then just extend the line and you have more or less full information about what’s happening in the world”, and so that I think is not too hard to imagine.
Then the hot war scenario is, to your point, autonomous drone swarms on land, in the air, or at sea that are able to coordinate perfectly and outperform any human.
I think when people hear AI, they think about generative AI, LLMs, OpenAI, whatever it might be, and assume that OpenAI is a US company, Google is a US company, et cetera, and so the US is ahead. This is obviously thinking about AI more broadly as an autonomous operator. Is the US ahead, or what’s your perception there?
AW: I think that on a pure technology basis, yes, the US is ahead. China’s caught up very quickly. There are two very good open source models from China. One is Yi-Large, which is the model from Kai-Fu Lee’s company, 01.ai. And then the other one is Qwen 2, which is out of Alibaba, and these are two of the best open source models in the world and they’re actually pretty good.
Do they use Scale AI data?
AW: No, we don’t serve any Chinese companies, for basically the same reasons that we’re working with the US military. Yi-Large is basically a GPT-4 level model that they open-sourced and it actually performs pretty well. So I think that on the technology plane, the US is ahead, and by default I think the US will maintain its lead.
There’s an issue which Leopold Aschenbrenner recently called a lot of attention to, which is lab security. So we have a lead, but it doesn’t matter if it can all be lost to espionage, basically, and there’s this recent case of an engineer from Google, Linwei Ding, who stole the secrets of TPU v6 and all these other secrets.
And wasn’t discovered for six months.
AW: Yeah, it wasn’t discovered for six months, and also the way he did it was that he copy-pasted the code into Apple Notes and then exported it to a PDF, and that was able to circumvent all the security controls.
So how does this tie into this middle stage for you of starting to sign government contracts? What were those about?
AW: Yeah, so the punchline of what I was going through was that I basically realized the United States was, by default, going to be bad at integrating AI into national security and into the military. A lot of this is driven by the fact that, for a while (this is less true now), tech companies actively did not want to help the DOD and did not want to help US military capabilities, based on ideology and whatnot, and even now the DOD and the US government are not really that great at being innovative and have a lot of bureaucracy that prevents this. So I decided basically, “Hey, Scale, we’re an AI company, we should help the US government”.
We started helping them and we started working with them on all of the data problems they needed to train specialized image detectors, or specialized image detection algorithms, for their various use cases. This was the first foray into an area that required a lot of expertise to do effectively, because at its core, the US government has a lot of data types and a lot of data that are very, very specialized. These are specialized sensors that they pay for; they’re looking at things that, generally speaking, the general population doesn’t care about but they care a lot about (movement of foreign troops and the kinds of things that you might imagine a military cares about), and so it required data that was reflective of all of the tradecraft and nuance and capabilities that were necessary. So this was one of the first areas.
We actually have a facility in St. Louis, which has people who are by and large trained to understand all this military data to do this labeling.
So this was a clear separation then from your worldwide workforce?
AW: Yeah, exactly. It was a clear break in the sense that we went from doing problems that almost anyone in the world could, with enough effort, do effectively and do well (almost like the Uber driver, a very broad marketplace view) to something that required niche expertise and niche capability to do extremely well.
With this sort of phase transition of data, there was a realization for us that, “Oh, actually, in the limit, almost all of the data labeling, almost all the data annotation, is going to be in this specialized form”, because the arc of the technology is that first we’re going to build up all this generalized capability, and that will be the initial phase, building all this general capability, but then all the economic value is going to come from specializing it into all these individual specific use cases and industries and capabilities, and flowing into all the niches of the economy…
…So where does synthetic data come into this?
AW: Yeah, synthetic is super fascinating. So I think that this has become super popular because we’re hitting a data wall, and in some ways the most seductive answer to the data wall is, “Oh, we’ll just generate data to blow past the data wall”, generating data synthetically using the models themselves. I think the basic result is that, at a very high level, synthetic data is useful, but it has a pretty clear ceiling, because at its core you’re using one model to produce data for another model, so it’s hard to blow past the ceiling of your original model at a very fundamental level.
It’s a compressed version of what went into the original model.
AW: Yeah, exactly. It’s a very good way to compress insight from one model to get to another model, but it’s not a way to push the frontier of AI, so to speak…
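(A schematic way to see that ceiling: if every training target the student sees is sampled from the teacher, the student’s best case is to match the teacher. The callables below are generic stand-ins, not any particular lab’s pipeline.)

```python
# Why synthetic data "compresses" rather than extends a model: every label
# the student trains on is drawn from the teacher's own output distribution.
# These callables are generic stand-ins, not a specific lab's pipeline.
from typing import Callable, List, Tuple

Teacher = Callable[[str], str]  # frozen teacher model: prompt -> completion

def make_synthetic_dataset(teacher: Teacher, prompts: List[str]) -> List[Tuple[str, str]]:
    # The dataset is a (lossy) snapshot of what the teacher already knows;
    # no pair can contain knowledge the teacher lacks.
    return [(prompt, teacher(prompt)) for prompt in prompts]

def distill(student_step: Callable, student, dataset: List[Tuple[str, str]]):
    # Training the student to imitate these pairs moves it *toward* the
    # teacher; in the limit it matches the teacher, it does not surpass it.
    for prompt, target in dataset:
        student = student_step(student, prompt, target)
    return student
```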
…So basically this is a huge problem everyone is running into, it’s incredibly hard to solve, and so someone is going to need to solve it, and you’ve been working on it for eight to ten years or however long it’s been. The thesis seems fairly straightforward, even if the margins are not necessarily going to be Nvidia-style margins, given that you have to use hundreds of thousands of humans to do it.
AW: Yeah, and I think the other key nuance here, the other interesting thing, is that today our revenue is 1% of Nvidia’s because, by and large, the budgets are mostly allocated towards compute. As with any portfolio optimization problem, if data is actually the biggest problem, the percent of budgets allocated to data versus compute will slowly shift over time. We don’t have to be half the budgets; even if we get to 5% or 10% of the budgets versus 1% of the budgets, there’s a pretty incredible growth story for data.
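(The arithmetic behind that budget-shift argument is simple; here is the back-of-the-envelope version, with a made-up total budget since only the ratios matter.)

```python
# Back-of-the-envelope version of the budget-shift argument above.
# The absolute budget is a placeholder; only the ratios matter.
total_ai_budget = 100.0                 # arbitrary units
data_share_today = 0.01                 # "our revenue is 1% of Nvidia's"
for data_share_future in (0.05, 0.10):  # "5% ... or 10% of the budgets"
    growth = data_share_future / data_share_today
    print(f"{data_share_today:.0%} -> {data_share_future:.0%}: {growth:.0f}x growth")
# 1% -> 5%: 5x growth
# 1% -> 10%: 10x growth
```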
2. My Stock Valuation Manifesto – Vishal Khandelwal
1. I must remember that all valuation is biased. I will reach the valuation stage after analyzing a company for a few days or weeks, and by that time I’ll already be in love with my idea. Plus, I wouldn’t want my research effort to go to waste (commitment and consistency). So, I will start justifying valuation numbers.
2. I must remember that no valuation is dependable because all valuation is wrong, especially when it is precise (like a target price of Rs 1001 or Rs 857). In fact, precision is the last thing I must look for in a valuation. It must be an approximate number, though one based on facts and analysis.
3. I must know that any valuation method that goes beyond simple arithmetic can be safely avoided. If I need more than four or five variables or calculations, I must avoid that valuation method (a minimal sketch of such a small valuation appears after this list)…
…10. I must remember that good quality businesses often don’t stay at good value for a long time, especially when I don’t already own them. I must prepare in advance to identify such businesses (by maintaining a watchlist) and buy them when I see them priced at or near fair value, without worrying whether the price will become fairer still (often, it does).
11. I must remember that good quality businesses sometimes stay priced at or near fair value after I’ve already bought them, and sometimes for an extended period of time. In such times, it’s important for me to remain focused on the underlying business value rather than the stock price. If the value keeps rising, I must be patient with the price even if I need to wait for a few years (yes, years!)…
…13. Ultimately, it’s not how sophisticated my valuation model is, but how well I know the business and how well I can assess its competitive advantage. If I wish to be sensible in my investing, I must know that most things cannot be modeled mathematically but have more to do with my own experience in understanding businesses.
14. When it comes to bad businesses, I must know that a bad business is a bad investment however attractive the valuation may seem. I love how Charlie Munger explains it – “a piece of turd in a bowl of raisins is still a piece of turd”…and…“there is no greater fool than yourself, and you are the easiest person to fool.”
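(As promised in point 3, here is a minimal sketch of a valuation that stays within a four-or-five-variable budget: owner earnings, a growth guess, a discount rate, and a horizon. All inputs are illustrative placeholders, not recommendations, and the output is deliberately approximate.)

```python
# A valuation that stays within point 3's four-or-five-variable budget.
# All inputs are illustrative placeholders, not recommendations.
def rough_value(owner_earnings: float, growth: float, discount: float, years: int) -> float:
    """Discount a simple growing earnings stream plus a no-growth terminal value."""
    value = 0.0
    earnings = owner_earnings
    for year in range(1, years + 1):
        earnings *= 1 + growth
        value += earnings / (1 + discount) ** year
    # Terminal value: capitalize final-year earnings at the discount rate.
    value += (earnings / discount) / (1 + discount) ** years
    return value

# Rs 100 of owner earnings, 8% growth for 10 years, 12% discount rate.
print(f"Rough value: Rs {rough_value(100, 0.08, 0.12, 10):.0f}")  # approximate, by design
```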
3. I Will F****** Piledrive You If You Mention AI Again – Nikhil Suresh
I started working as a data scientist in 2019, and by 2021 I had realized that while the field was large, it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused. The number of companies launching AI initiatives far outstripped the number of actual use cases. Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders…
…Unless you are one of a tiny handful of businesses who know exactly what they’re going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your business’s software supply chain. Your managed security provider is probably using some algorithms baked up in a lab to detect anomalous traffic, and here’s a secret: they didn’t do much AI work either, they bought software from the tiny sector of the market that actually does need to employ data scientists. I know you want to be the next Steve Jobs, and this requires you to get on stages and talk about your innovative prowess, but none of this will allow you to pull off a turtleneck, and even if it did, you would need to replace your sweaters with fullplate to survive my onslaught…
…Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your I.T. department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn’t a recipe for disaster, it’s a cookbook for someone looking to prepare a twelve course f****** catastrophe…
…A friend of mine was invited by a FAANG organization to visit the U.S. a few years ago. Many of the talks were technical demos of impressive artificial intelligence products. Being a software engineer, he got to spend a little bit of time backstage with the developers, whereupon they revealed that most of the demos were faked. The products didn’t work. They just hadn’t solved some minor issues, such as actually predicting the thing that they’re supposed to predict. Didn’t stop them spouting absolute gibberish to a breathless audience for an hour though! I blame not the engineers, who probably tried to actually get the damn thing to work, but the lying blowhards who insisted that they must make the presentation or presumably be terminated.
Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India…
…I am not in the equally unserious camp that thinks generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I’m no longer as confident that I know what’s going on.
However, I do have the technical background to understand the core tenets of the technology, and it seems that we are heading in one of three directions.
The first is that we have some sort of intelligence explosion, where AI recursively self-improves itself, and we’re all harvested for our constituent atoms because a market algorithm works out that humans can be converted into gloobnar, a novel epoxy which is in great demand amongst the aliens the next galaxy over for fixing their equivalent of coffee machines. It may surprise some readers that I am open to the possibility of this happening, but I have always found the arguments reasonably sound. However, defending the planet is a whole other thing, and I am not even convinced it is possible. In any case, you will be surprised to note that I am not tremendously concerned with the company’s bottom line in this scenario, so we won’t pay it any more attention.
A second outcome is that it turns out that the current approach does not scale in the way that we would hope, for myriad reasons. There isn’t enough data on the planet, the architecture doesn’t work the way we’d expect, the thing just stops getting smarter, context windows are a limiting factor forever, etc. In this universe, some industries will be heavily disrupted, such as customer support.
In the case that the technology continues to make incremental gains like this, your company does not need generative AI for the sake of it. You will know exactly why you need it if you do, indeed, need it. An example of something that has actually benefited me is that I keep track of my life administration via Todoist, and Todoist has a feature that allows you to convert filters on your tasks from natural language into their in-house filtering language. Tremendous! It saved me learning a system that I’ll use once every five years. I was actually happy about this, and it’s a real edge over other applications. But if you don’t have a use case then having this sort of broad capability is not actually very useful. The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant. Everyone is talking about Retrieval Augmented Generation, but most companies don’t actually have any internal documentation worth retrieving. Fix. Your. Shit.
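(To make the “know exactly why you need it” point concrete, here is a guess at the shape of a natural-language-to-filter feature like the Todoist one described above. Todoist’s real implementation is not public; the filter examples and prompt are invented, and the sketch assumes the official openai Python package with an API key in the environment.)

```python
# A guess at the shape of a "natural language -> in-house filter syntax"
# feature like the Todoist one described above. The real implementation is
# not public; the prompt and filter grammar here are invented illustrations.
# Assumes the official openai package and an API key in the environment.
from openai import OpenAI

client = OpenAI()

def nl_to_filter(request: str) -> str:
    """Translate a plain-English request into a (hypothetical) task-filter DSL."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Translate the user's request into the task-filter DSL. "
                "Examples: 'overdue & #Work', 'due before: Monday & @errands'. "
                "Output only the filter expression."
            )},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content.strip()

# print(nl_to_filter("work tasks I should have finished already"))
```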
The final outcome is that these fundamental issues are addressed, and we end up with something that actually can do things like replace programming as we know it today, or be broadly identifiable as general intelligence.
In the case that generative AI goes on some rocketship trajectory, building random chatbots will not prepare you for the future. Is that clear now? Having your team type in import openai does not mean that you are at the cutting-edge of artificial intelligence no matter how desperately you embarrass yourself on LinkedIn and at pathetic borderline-bribe award ceremonies from the malign Warp entities that sell you enterprise software. Your business will be disrupted exactly as hard as it would have been if you had done nothing, and much worse than it would have been if you just got your fundamentals right. Teaching your staff that they can get ChatGPT to write emails to stakeholders is not going to allow the business to survive this. If we thread the needle between moderate impact and asteroid-wiping-out-the-dinosaurs impact, everything will be changed forever and your tepid preparations will have all the impact of an ant bracing itself very hard in the shadow of a towering tsunami.
4. Palmer Luckey and Anduril want to shake up armsmaking – Schumpeter (The Economist)
The war in Ukraine has been a proving ground for these sorts of weapons—and for Mr Luckey’s company. He visited Kyiv two weeks into the war. “What we’ve been doing was tailored for exactly the type of fight that’s going on and exactly what we predicted was going to happen,” he argues, pointing to three lessons.
One is the importance of drones that can navigate and strike autonomously, even in the face of heavy jamming of their signals and obscurants like metal-filled smoke clouds. Many existing drones have struggled with this, says Mr Luckey, because they lack “multi-modal” sensors, such as optical and infrared cameras, to substitute for GPS, and do not have enough built-in computing power to use the latest object-recognition algorithms.
Second is the observation that software is eating the battlefield. Imagine that Russia begins using a new type of jammer. Mr Luckey says that the data can be sent back immediately to generate countermeasures, which are then remotely installed on weapons at the front line without having to change any hardware. A recent study by the Royal United Services Institute, a think-tank in London, noted that drones in Ukraine needed to have their software, sensors and radios updated every six to 12 weeks to remain viable. Anduril, claims Mr Luckey, is “literally pushing new updates…every single night”.
His third lesson from Ukraine is that weapons must be built in vast quantities—and therefore cheaply. He laments that Russia produces shells and missiles far more cheaply than America does: “The US is now on the wrong side of an issue that we were on the right side of during the Cold War.” Anduril makes much of the fact that its production processes are modelled not on big aerospace firms, but automotive ones.
5. What It Really Takes to Build an AI Datacenter – Joe Weisenthal, Tracy Alloway, and Brian Venturo
Tracy (19:48):
Can I ask a really basic question? And we’ve done episodes on this, but I would be very interested in your opinion. Why does it feel like customers, and AI customers in particular, are so, I don’t know if addicted is the right word, but so devoted to Nvidia chips? What is it about them specifically that is so attractive? How much of it is due to the technology versus, say, the interoperability?
Brian (20:18):
So you have to understand that when you’re an AI lab that has just started, and it is an arms race in the industry to deliver products and models as fast as possible, it’s an existential risk to you that your infrastructure becomes your Achilles heel. Nvidia has proven to be a number of things. One is that they engineer the best products. They are an engineering organization first, in that they identify and solve problems, they push the limits, they’re willing to listen to customers and help you solve problems and design things around new use cases. But it’s not just creating good hardware, it’s creating good hardware that scales and that they can support at scale.
And when you’re building these installations that are hundreds of thousands of components on the accelerator side and the InfiniBand link side, it all has to work together well. And when you go to somebody like Nvidia that has done this for so long at scale, with such engineering expertise, they eliminate so much of that existential risk for these startups. So when I look at it and I see some of these smaller startups saying, we’re going to go a different route, I’m like, what are you doing? You’re taking so much risk for no reason here. This is a proven solution, it’s the best solution, and it has the most community support. Go the easy path, because the venture you’re embarking on is hard enough.
Tracy (21:41):
Is it like that old adage, what was it? No one ever got fired for buying Microsoft. Or IBM, something like that.
Brian (21:50):
The thing here is that it’s not even that nobody’s getting fired for buying the tried and true but slower-moving thing. It’s that nobody’s getting fired for buying the tried and true and best-performing, bleeding-edge thing. So I look at the folks that are buying other products and investing in other products almost as if they have a chip on their shoulder and they’re going against the mold just to do it.
Joe (22:14):
There are competitors to Nvidia that claim cheaper or more application-specific chips. I think Intel came out with something like that. First of all, from the CoreWeave perspective, are you all-in on Nvidia hardware?
Brian (22:31):
We are.
Joe (22:32):
Could that change?
Brian (22:33):
The party line is that we’re always going to be driven by customers, right? And we’re going to be driven by customers to the chip that is most performant, provides the best TCO, and is best supported. Right now, and in what I think is the foreseeable future, I believe that is strongly Nvidia…
…Joe (23:30):
What about Meta with PyTorch and all their chips?
Brian (23:33):
So their in-house chips, I think that they have those for very, very specific production applications, but they’re not really general-purpose chips. And when you’re building something for general purpose, there has to be flexibility in the use case. While you can go build a custom ASIC to solve very specific problems, I don’t think it makes sense to invest in those as a five-year asset if you don’t necessarily know what you’re going to do with them…
…Joe (25:31):
Let’s talk about electricity. This has become this huge talking point that this is the major constraint and now that you’re becoming more vertically integrated and having to stand up more of your operations, we talked to one guy formerly at Microsoft who said one of the issues is that there may be a backlash in some communities who don’t want their scarce electricity to go to data centers when they could go to household air conditioning. What are you running into right now or what are you seeing?
Brian (25:58):
So we’ve been very, very selective about where we put data centers. We don’t have anything in Ashburn, Virginia; the Northern Virginia market, I think, is incredibly saturated. There’s a lot of growing backlash in that market around power usage, and just thinking about how you get enough diesel trucks in there to refill generators if there’s a prolonged outage. So I think there are some markets where it’s just like, okay, stay away from that. And when grids have issues, and that market hasn’t really had an issue yet, it becomes an acute problem immediately.
Just think about the Texas power market crisis back in, I think it was 2021, where the grid wasn’t really set up to handle the frigid temperatures, and they had natural gas valves freezing off at the natural gas generation plants, which didn’t allow them to actually come online and produce electricity no matter how high the price was, right?
So there are going to be these acute issues that people and regulators are going to learn from to make sure they don’t happen again. And we’re siting our data centers in markets where we think the grid infrastructure is capable of handling it. And it’s not just a question of whether there’s enough power; it’s also other things.
AI workloads are pretty volatile in how much power they use, and they’re volatile because every 15 minutes or every 30 minutes, you effectively stop the job to save the progress you’ve made. It’s so expensive to run these clusters that you don’t want to lose hundreds of thousands of dollars of progress. So they take a minute to do what’s called checkpointing, where they write the current state of the job back to storage, and at checkpointing time, your power usage basically goes from a hundred percent to like 10%, and then it goes right back up again when it’s done saving.
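(Here is a sketch of that rhythm in a training loop, assuming a PyTorch-style job; the interval, loss, and paths are illustrative. Power tracks the work: near full draw during optimizer steps, sagging while the job pauses to serialize state.)

```python
# Sketch of the checkpointing rhythm described above, assuming a
# PyTorch-style training job. The interval, loss, and paths are
# illustrative; real jobs checkpoint sharded state across many nodes.
import os
import time

import torch

CHECKPOINT_EVERY_S = 30 * 60  # "every 15 minutes or every 30 minutes"

def train(model, optimizer, data_loader, max_steps):
    os.makedirs("checkpoints", exist_ok=True)
    last_ckpt = time.monotonic()
    step = 0
    while step < max_steps:
        for batch, target in data_loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(batch), target)
            loss.backward()
            optimizer.step()  # GPUs draw near-full power through these steps
            step += 1
            if time.monotonic() - last_ckpt >= CHECKPOINT_EVERY_S:
                # The job pauses to serialize state; while this I/O runs,
                # cluster power sags from ~100% toward ~10%, then ramps back.
                torch.save(
                    {"step": step,
                     "model": model.state_dict(),
                     "optimizer": optimizer.state_dict()},
                    f"checkpoints/step_{step}.pt",
                )
                last_ckpt = time.monotonic()
            if step >= max_steps:
                break
```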
So that load volatility on a local market will create either voltage spikes or voltage sags. A voltage sag is what causes a brownout, which we used to see a lot when people turned their air conditioners on. It’s thinking through, okay, how do I ensure that my AI installation doesn’t cause a brownout during checkpointing, the way air conditioners being turned on used to?
That’s the type of stuff that we’re thoughtful about: how do we make sure we don’t do this, right? And in talking to Nvidia’s engineers, with their engineering expertise, they’re working on this problem as well, and they’ve solved it for the next generation. So it’s everything from: is there enough power there? What’s the source of that power? How clean is it? How do we make sure that we’re investing in solar and other things in the area, so that we’re not just taking power from the grid? And when we’re using that power, how is it going to impact the consumers around us?
Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Apple, Microsoft, and Tencent. Holdings are subject to change at any time.