What We’re Reading (Week Ending 18 September 2022)

Reading helps us learn about the world and it is a really important aspect of investing. The legendary Charlie Munger even goes so far as to say that “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 18 September 2022):

1. Equity Duration & Inflation: Lessons From The Nifty Fifty – Sean Stannard-Stockton, CFA

In the early 1970s, a group of high quality growth stocks such as Coca-Cola, Procter & Gamble, Johnson & Johnson, Walt Disney, American Express, and Pfizer generated gangbuster returns. At the time, these stocks were referred to as the Nifty Fifty. At their peak in late 1972, these stocks traded at a PE ratio of 42x while the overall S&P 500 was trading at a PE of 19x. But in 1973 and 1974, inflation rose rapidly, a recession hit, the overall market declined, and Nifty Fifty stocks underperformed sharply.

Most investors think of the collapse of the Nifty Fifty stocks as a bubble that burst. But in fact, if you had bought these stocks at their all time highs and hung on for the long term, you would have earned a 12%+ annual rate of return, or the same as the overall market.

If these stocks proved they were not in a bubble by generating strong returns even from peak valuations that were more than double the overall market valuation, why did they perform so poorly during the 1970s and then perform so extraordinarily well in subsequent decades? And what does this history tell us about today’s market, where high quality growth stocks have traded down by so much this year after trading at relatively high valuations at the end of 2021?…

…The data shows that while Coca-Cola’s PE ratio at the end of 1972 was 46x, a huge premium to the S&P 500 PE ratio of 19x, Coke still went on to generate a 16.2% annual rate of return, beating the S&P 500’s 12.7% rate of return and demonstrating that Coke was actually cheap at 46x earnings. In fact, investors could have paid as much as 82x earnings, an eye-popping PE ratio, and still have generated returns in line with the overall market.

Of course, not all the members of the Nifty Fifty did so well. Xerox also traded at 46x earnings, but it went on to generate annual returns of just 6.5%, far less than the S&P 500. The chart shows that investors would have had to pay no more than 19x earnings to generate the same returns as the S&P 500.

Why did Coke do so well, while Xerox did not, despite both trading at the same, optically high, PE ratio? The answer is in the far-right column of the chart where you can see that Coke grew earnings at 13.5% a year over the time period measured while Xerox grew earnings at just 5% a year.
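The arithmetic behind this can be sketched with a toy calculation. The 46x starting multiple and the 13.5%/5% growth rates come from the article; the 25-year horizon and the assumption that the multiple halves are purely illustrative. Annualized price return is roughly earnings growth compounded with the annualized drift in the PE multiple:

```python
def annualized_return(growth, pe_start, pe_end, years):
    """Approximate annual price return: earnings growth compounded
    with the annualized change in the PE multiple (dividends ignored)."""
    multiple_drift = (pe_end / pe_start) ** (1 / years)
    return (1 + growth) * multiple_drift - 1

# Illustrative: even if a 46x multiple halves over 25 years,
# 13.5% annual earnings growth still carries the return.
coke_like = annualized_return(0.135, 46, 23, 25)
xerox_like = annualized_return(0.05, 46, 23, 25)
print(f"13.5% grower: {coke_like:.1%}")   # ~10.4% a year
print(f"5% grower:    {xerox_like:.1%}")  # ~2.1% a year
```

Under these assumed numbers, the fast grower roughly matches the market despite the multiple compressing by half, while the slow grower at the same starting PE badly lags it, which is the Coke-versus-Xerox pattern in the chart.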

In retrospect, we know that the Nifty Fifty stocks as a group performed just fine even from peak earnings multiples of more than twice the valuation of the S&P 500. Many members of the Nifty Fifty went on to produce gangbuster, multidecade returns even if you bought them at peak valuations before they temporarily collapsed.

So, if they were not overvalued, why did they underperform so badly during the 1970s, and why did their performance turn around in 1980 to begin a period of massive outperformance that recovered all of their prior underperformance?

The reason was that the stagflation that began in 1973 was not fully brought under control for a decade…

…What the chart shows is that the overall market fell very dramatically from 1972 through 1974, during which time inflation raced up to over 12% and the economy entered a prolonged recession. But once inflation peaked, and began to decline, the market rallied dramatically. However, the victory over inflation was not complete and it was relatively short lived. Inflation at the end of 1976 had only retreated to 5%.

By 1977, inflation was rising again, and this time it was embarking on a six-year period of high inflation that reached 15%. To finally bring inflation back down to more sustainable levels, Fed Chairman Paul Volcker pushed the Fed Funds interest rate to an incredible 20%, triggering two back-to-back recessions that lasted for a total of three years.

Over the 10-year period from 1972 to 1982, the overall market returned 6.7% per year while Nifty Fifty stocks did even worse. The PE ratio of the overall market averaged just 9x, while Nifty Fifty stocks retained premium multiples, but at low absolute levels in the mid to low teens.

But once inflation was beaten, the market began a huge multidecade bull run, with Nifty Fifty stocks leading the way and eventually recouping all their underperformance from the stagflationary decade.

The lesson is clear. When inflation rises to high levels and investors come to believe that inflation will remain high for a long period of time, PE ratios fall sharply with the valuations of long duration growth stocks falling by more than average. But once inflation begins to retreat and investors come to believe that it will remain at lower levels on a sustainable basis, valuations quickly reset to higher levels, with the valuation of long duration growth stocks leading the charge higher.

The recovery of long duration growth stock performance did not wait for the victory over inflation to be final. Rallies in the 1970s occurred once inflation began trending down, even as it remained at high levels. By 1985, inflation was still running at nearly 4%, yet the equity market had doubled over the past three years, with Nifty Fifty stocks doing even better.

2. Twitter thread on the development of file-based software and real-time-collaboration software – Amal Dorai

I know this because I founded a startup, LiveLoop, to make MS Office real-time collaborative. Microsoft acquired us in 2015 and has invested heavily in making Office real-time collaborative, and Office collaboration is now much better than in 2016. But it’s still not Google Apps.

There are three major barriers to making a legacy application into a multiplayer application: the file format and its ecosystem, legacy clients and backwards compatibility, and the pure technical challenge of multiplayer. Let’s start with the file format:

Legacy applications speak “file.” Photoshop users have thousands of .psd files in their archives, they share .psd files with Dropbox, they send .psd files as email attachments, and they have .psd files with names like “Billboard_Final_v26_REALLYFINAL_v6.psd.”

The file-based semantics enable users to work with their data offline, and maintain custody of their own data (you can’t lose access to your PowerPoint files on your desktop, but you can lose access to your Google account). But the file has two major limitations:

First is that a legacy file format like PSD has a multi-thousand page specification, with none of it designed around multi-user applications. You can’t replace the file format outright, because then older versions of Photoshop wouldn’t be able to access the new file format.

So changing the file format requires shoehorning in the functionality required for real-time collaboration, in a way that older clients ignore. This is solvable, but what’s not solvable is that the entire concept of a file is incompatible with multi-user applications:

A file only needs to change when a user opens it, but a collaborative multi-user application has to be editable at any time by any user, so it resides on a server and is accessed over the network. So any file that you store outside of the system becomes immediately obsolete.
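The “shoehorn it in so older clients ignore it” approach mentioned above is commonly implemented as a chunked container, the way PNG does it: each chunk carries a type tag and a length, so a client that doesn’t recognize a tag can skip it by length alone. A minimal sketch, with tags and helper names invented for illustration rather than taken from any real file format:

```python
import struct

def write_chunks(chunks):
    """Serialize (tag, payload) pairs: 4-byte ASCII tag, big-endian
    32-bit length, then the payload bytes."""
    out = b""
    for tag, payload in chunks:
        out += tag.encode("ascii") + struct.pack(">I", len(payload)) + payload
    return out

def read_chunks(data, known_tags):
    """Parse chunks, silently skipping any tag this client doesn't know."""
    pos, found = 0, []
    while pos < len(data):
        tag = data[pos:pos + 4].decode("ascii")
        (length,) = struct.unpack(">I", data[pos + 4:pos + 8])
        if tag in known_tags:
            found.append((tag, data[pos + 8:pos + 8 + length]))
        pos += 8 + length  # the length field lets us skip unknown chunks
    return found

# A "new" file carries image data plus a hypothetical collaboration chunk.
doc = write_chunks([("IMGD", b"pixels"), ("COLB", b"op-log")])
old_client = read_chunks(doc, {"IMGD"})          # pre-collaboration client
new_client = read_chunks(doc, {"IMGD", "COLB"})  # collaboration-aware client
```

The old client reads the image data and never notices the collaboration chunk, which is exactly the backwards-compatible extension the thread describes, and exactly what breaks down once the document has to live on a server and change while no one has the file open.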

Multiplayer applications like Google Apps and Figma avoid this by entirely rejecting the concept of the file. You can’t download a Google Doc – you can only download a snapshot of it as a PDF. You can’t “edit locally” in Google Docs.

Yes, this means that multiplayer applications don’t work offline. (“Merging” a local copy with an online copy is too complex for most users.) Existing applications see this as a dealbreaker because it would be a takeback of functionality that users are accustomed to.

But Google Apps, Figma, etc. have shown that while users may not allow Adobe to take away offline editing, they will use a competitor that doesn’t offer it. As connectivity gets more ubiquitous and collaboration becomes more important, offline support is a relic of the past.

Users of file-based apps also often have workflows that are completely based around the file. Salespeople, for example, build presentations in PowerPoint and then share and present them with third-party tools like ClearSlide and Seismic which understand the .PPTX file format.

Instead of a file format, a multiplayer application like Figma enables third-party workflows with a developer API, and users grant permissions to third parties, which then read or modify the document directly on Figma’s servers.

Building a developer API is technically hard (https://figma.com/blog/how-we-built-the-figma-plugin-system/) and building a developer trust program is even harder. But legacy apps ask developers to migrate to APIs right after they’ve destroyed their file-based business models, when trust is low…

…Finally, technical debt. It’s tremendously hard to add multiplayer to an app that wasn’t designed with it in mind. Instead of trying to add wheels to a boat to make it drive on land, you’d be better off melting it down and building a car.

3. How the Worst Market Timer in History Built a Fortune – Charlie Bilello

Wally didn’t think about investing again until he turned 25, when he inherited a sizable trust worth $130,000. However, the trust came with some strict rules:

  1. He could only invest in a diversified equity portfolio (S&P 500, no individual stocks) and was not allowed to withdraw any of the funds until his 91st birthday.
  2. He could determine when and how much of the $130,000 to invest over time but once he bought into the market, he could not sell a single share until his 91st birthday. All dividends were to be reinvested immediately back into the market.
  3. He was not allowed to see the account balance of his stock portfolio until he turned 91.
  4. The non-invested portion of the $130,000 would be held in a local bank account earning no interest.

Initially, Wally was hesitant to do anything, but just before his 26th birthday he couldn’t wait any longer. The stock market had been booming for years and Wally feared missing out on the riches that everyone seemed to be talking about.

So on August 2, 1956, Wally invested $10,000 into the market. This was, of course, the day the stock market topped. It would proceed to decline 21% before bottoming in October 1957, and while Wally wished he could sell everything at that point, the rules of the trust prevented him from doing anything.

And so he forgot about investing for a while, until another raging bull market took hold, and he couldn’t resist any longer. On December 12, 1961 Wally invested another $10,000 at, you guessed it, another market top.

Over the next 60 years, he would repeat this pattern over and over again, only adding new investments at bull market tops: $10,000 in February 1966, December 1968, January 1973, November 1980, August 1987, July 1990, July 1998, March 2000, October 2007, September 2018, and lastly in February 2020. His $130,000 was now fully invested in the S&P 500…

…In August 2021, Wally turned 91, and he was finally able to view his investment account balance and sell his shares if he so chose. While curious to see what was now in the account, he decided to hold off until New Year’s Eve, hosting a small gathering of friends and family that would include his now-famous twin brother.

At the New Year’s Eve party, all eyes were on Wally as he opened the envelope containing the latest statement for his investments. When he saw the number, his jaw dropped. He thought there must have been a mistake. Wally, the worst market timer in history, had amassed a fortune of $18.6 million. This was a 143x increase from the initial $130,000 and a 10.5% dollar-weighted annualized return.
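Wally’s numbers can be sanity-checked. Below is a sketch of the dollar-weighted (internal rate of) return calculation, using the thirteen $10,000 purchase dates listed in the article; the decimal-year approximations of those dates and the bisection tolerance are my assumptions:

```python
# Decimal-year approximations of each $10,000 purchase date, per the article.
BUY_DATES = [1956.58, 1961.95, 1966.12, 1968.95, 1973.04, 1980.87, 1987.58,
             1990.54, 1998.54, 2000.20, 2007.79, 2018.71, 2020.12]
END, FINAL_VALUE, STAKE = 2022.0, 18_600_000, 10_000

def terminal_value(rate):
    """Value at END of every stake compounded from its purchase date."""
    return sum(STAKE * (1 + rate) ** (END - t) for t in BUY_DATES)

def dollar_weighted_return(lo=0.01, hi=0.20, tol=1e-6):
    """Bisect for the single rate that grows the purchases to FINAL_VALUE;
    terminal_value is monotonically increasing in the rate."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if terminal_value(mid) < FINAL_VALUE:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"{dollar_weighted_return():.1%}")  # ~10.5%, matching the article
```

With these approximate dates the solver lands right around the article’s 10.5% figure, and it also shows why the number is so high: the earliest $10,000 stakes compound for over six decades and account for most of the $18.6 million.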

But there was no mistake; the numbers were real. Wally was just the living embodiment of the old adage that time in the market is vastly more important than timing the market. By diversifying, reinvesting dividends, and never selling, Wally had reaped the enormous rewards of long-term compounding.

4. Why AI is not a Moat – Shomik Ghosh

Here I want to address why AI is not a moat for startups. But also, when it can be.

At its core, AI/ML/any predictive modeling first needs data. And this is where the plot thickens. Any startup begins with 0 users, 0 data, 0 revenue. Your goal is to get all of those to grow quickly by delivering a product that users want and that solves clear pain points. When AI is pitched as the core differentiation of your product, herein lies the problem.

You are pitching the main reason that your product should exist as a byproduct of having lots of data. That data in turn needs to be ingested, cleaned, and put into a model, which then needs to be trained, tuned, and deployed. Meanwhile, while building all of that, you also need to get the fundamental input to your product that drives all of this…data.

That’s the dilemma. The core product differentiation is AI, which rests on having enough data to deliver outcomes to end users; however, there are no end users yet. Even if you train models on publicly available data, that data likely does not mimic the end user pain points, which leads to false positives and a poor end user experience.

A startup must first and always focus on what pain point is being solved. So if you are trying to build a better code generation product, that’s great. But it needs to be fairly tightly scoped so that the amount and cost of the data needed to deliver a great experience are as small as possible. This could mean focusing on one language, one area of software engineering, or even one specific type of engineer. But if you are going to pitch AI as a core product in and of itself, focus on scoping it down as narrowly as possible; otherwise it’s going to be hard to create a sustainable business.

When can AI be a moat?

The answer is in larger companies! They already have mounds of data flowing through the system because they solved a clear pain point for the user, have years of data on how the product is being used, and can then train models to better implement product solutions for end users. Security is a great example of this. Cloudflare can utilize AI/ML for DDoS protection because they see so many attacks across different regions and methods. CrowdStrike and SentinelOne can do the same as they aggregate huge amounts of endpoint data showing different attacks on all sorts of laptops, phones, etc. However, and this is key, CrowdStrike cannot offer better DDoS protection than Cloudflare because they don’t see the same volume of data in that area, and vice versa for Cloudflare with endpoint protection. So AI can indeed be a moat, but it is restricted to more “vertical type” product areas.

5. Human genomics vs Clinical genomics – Eric Topol

We’re now well over 20 years since the first human genome was sequenced, but with few exceptions the massive amount of data that has been generated has not been translated into routine patient care.

Over 30 million people have had their genome sequenced (exome or whole genome). The NIH All of Us research program released 100,000 whole genome sequences in March. (Disclosure: I am an investigator in the All of Us Research Program and an advisor to Illumina). The emphasis here is on research programs, since there has been very little use of whole genome sequencing in clinical practice…

…Like many of you, I’ve had a whole genome sequence and found it to be of limited value. Why? Because the data assessed is not provided in a highly informative, user-friendly way. I want to see my major categories of meaningful data on my phone, including pharmacogenetics, polygenic risk scores, rare pathogenic variants, and carrier states. If I got a new prescription, I’d be able to quickly and simply review the data and share it with a pharmacist, to decide about potential gene-drug interactions. These categories could easily be expanded. For example, we know a great deal about cancer predisposition genes from whole genome sequencing, but such data is not available. It sits in top-tier research publications.

That exemplifies the problem. For decades, the genomics research community has been highly productive, extensively publishing data about medical conditions. But so little has reached clinical practice outside of a few exceptions like rapid neonatal sequencing, adults with serious, undiagnosed conditions, or specific use cases in cancer. Even those are only performed by a limited number of health systems…

…At least at this point, the “genome revolution,” as proclaimed frequently and on the Nature cover (at the top of this post), really seems to be confined to the research domain. The bottleneck in getting this into mainstream medicine used to be considered cost, but that excuse is likely to become untenable now that many routine lab tests and scans cost more and are potentially less informative. There is the problem of paternalism—the research community is afraid of “letting go,” believing the public is incapable of dealing with genomic data, such as variants of uncertain significance and the issues of probabilistic vs deterministic meaning of variants. Beyond these issues, the medical community has little grounding in human genomics. There are very few medical geneticists and genetic counselors relative to the population, about 1,500 and 6,000, respectively, for >330 million Americans. Education for all clinicians is vital, but there has been no commitment or successful initiative at scale. This is unlike the National Health Service in the United Kingdom, the country leading the world in genomics, which has a major division, Health Education England, that is responsible for educating clinicians about genomics.

During the pandemic we saw the power of pathogen genomic sequencing to detect major SARS-CoV-2 variants and track the virus spatiotemporally around the world, not least in specific outbreaks. The relationship between the variants and the efficacy (or lack thereof) of specific treatments, such as potentially lifesaving monoclonal antibodies, was firmly established. Perhaps this will help facilitate the acceptance of medical genomics in the future.

The miscue that our genome sequence is our “operating instructions” must also be addressed, fully acknowledging that DNA sequence represents just one layer of human uniqueness, and does not by itself reveal the depth of information derived from all the other layers: the transcriptome, proteome, epigenome, microbiome, immunome, physiome, anatome, and exposome (environment). Many of these layers are cell- and tissue-specific (transcriptome, epigenome) or site-specific (microbiome), further emphasizing the complexity.

6. Mitch Lasky – The Business of Gaming – Patrick O’Shaughnessy and Mitch Lasky

[00:08:02] Patrick: What does make a game fun? If you had to extract out the dimensions of fun in games as you’ve thought about it, what are they?

[00:08:07] Mitch: I’ll share one of them with you because I don’t want to give away all my trade secrets. One of the things that I concentrate on that I think is overlooked quite a bit is what you do the most frequently in the game. So for example, what do you do in the game Doom, a game that almost everybody of our age has played? They’ll say it’s a shooter. You run around and you shoot things. Yeah, you shoot things, but primarily what you do is you move in a three dimensional environment through a maze, basically. You’re moving through the maze and looking around. Then, you’re occasionally shooting things. You’re occasionally finding keys, opening doors, finishing levels, and then progressing. For me, it’s about focusing on that high frequency activity: what is the thing in the game that you do the most frequently, and is that pleasurable?

Because a lot of times I will have teams come in and pitch me and they’ll say, “I want to make a game about World War I.” And I’m like, “Okay, so what do you do in the game?” And they’re like, “Well, it’s World War I. You’re in the trenches and then you’re fighting,” and really try and drill down and say, “Okay, what’s the first five minute experience? What am I doing constantly in the game that’s creating that feedback loop of pleasure?” That is a really key component of it because I think that most of the products I see that fail have a mismatch between the constancy of the activity and the pleasure of that activity…

[00:19:57] Patrick: Having been such an intimate part of Riot, the business, over time as an early investor, what do you think Riot can most teach non-gaming businesses about business in general?

[00:20:08] Mitch: One of the key insights Riot had early on was super-serving their community. They got into their community early. The founders were present in the online communities. They did a lot of work listening to their consumers. And again, I don’t think there’s anything particularly novel about this. They really listened to what their consumers were telling them about their product, what their users were telling them, and they designed specifically to those requirements. I saw that as well with Discord, a company that I was lucky enough to invest in when it wasn’t even Discord, and to help pivot to the platform that it is today, with all the success that it’s had.

But very early on, we were on Reddit. The founders were in there and they were having a dialogue with their audience. What is it that’s going to make this more interesting to you? What is it that you don’t like about it? And all businesses can benefit from that. Again, I don’t think it’s particularly insightful, but it was a key element and it required a lot of investment by the founding team to really get in there and hear both the good things and the bad things about their product.

[00:21:10] Patrick: You said early on that we’ve moved from the original publisher model to the more platform based model, but also from the shiny disc to the instant download and the much lower cost of goods for delivering games. With that in mind, talk about the way that monetization has changed over that time, both in terms of dollars and in terms of margin. Because one of the things you said to me early on is, “Do you imagine everyone’s buying the same $60 disc? The person that loves that game the absolute most is paying the same as someone that tries it for five minutes and quits.” That’s very different than the modern free to play world where you can have a much bigger demand curve if you will. And you monetize users at very different levels and very different business models. So maybe talk us through that evolution of how much people pay, how they pay and the margins that result from that in the gaming business.

[00:21:57] Mitch: This is, in my opinion, the key insight for understanding the modern games business: the change from an inelastic pricing model to an elastic pricing model. I’m a big soccer nut, and every year I’m going to go to Target or Best Buy or whatever and buy a copy of FIFA from Electronic Arts. And I’m going to put it in my PlayStation 5 or my Xbox, and I’m going to play it for a zillion hours. This has changed a little bit recently because Electronic Arts and some of the other console publishers have gotten hip to the idea of downloadable content and ways to monetize people like me outside of that initial purchase. But historically, that was it. You got my 60 bucks and that was all you were going to get from me. Meanwhile, my friend down the street goes to the same store, buys the same disc for $60, plays it for 10 hours, sticks it in a sock drawer and they’re done. Yet, essentially, Electronic Arts has basically treated us as if we were identical users. I think this started to change in 2005 with some early games out of Nexon, like KartRider out of Korea, a game that people inside the games industry recognize as one of the most influential products of all time, but that people outside the industry haven’t really paid much attention to, or don’t even really know about. It was basically a Mario Kart like cartoony racing game, where the company gave the entire game away for free. It’s not like shareware, where you get three levels and then you have to unlock to continue to play it. It wasn’t crippleware.

It wasn’t any of these earlier paradigms. It was the whole enchilada. You got the entire game for free. And the way they monetized was they would sell you cosmetic items and other things to enhance your experience, to give you status within the world, other things like that. And it was a massive success, and it proved to the industry that you could give the game away for free, which is one of the great marketing advantages of all time, when you can basically say to your consumer, “Hey, you don’t have to pay me and you can have every good part about this product,” and push the monetization downstream to a point where you’ve already hooked these people into the experience. And then, you can monetize them in an elastic manner. So the 10 hour person that we talked about earlier is going to pay you, or potentially not pay you, as the case may be. But me, the thousand hour person, is going to pay you insane amounts of money, and thus the rise of the modern whale. We can talk later about what’s happening in the advertising business and the customer acquisition business right now with the Apple and Facebook wars that are going on, because it’s having a devastating effect on that part of the video game business. But this was a hugely important maneuver in the video game business.

And again, it’s something that, when Riot Games pitched us at Benchmark, initially the concept of doing a game for the core… League of Legends is not a casual product. League of Legends is a hardcore product. The concept of using the KartRider monetization method, which is to give the whole game away for free and then sell virtual goods, from cosmetic items to new characters, et cetera, downstream, was really radical. There was a very strong sense at the time that, okay, yeah, it’s going to work in casual, but it’s never going to work in the core. The core is just going to abhor the concept of not getting everything and not paying up front, et cetera. And of course, they were completely wrong, and League of Legends went on to generate $1 billion a year in revenue for a decade, one of the most successful video games, if not the most successful video game, in history…

[00:37:03] Patrick: If you think about the step beyond mobile, do you think there is one? Everyone’s been talking about AR and VR for a very long time. There have been some examples, but VR doesn’t seem to have a game that’s completely taken off yet. There doesn’t seem to be real evidence that mobile is on its way out. What do you think happens beyond mobile, or is the current lineup of console, PC, mobile, and its relative market share probably a pretty mature and long lasting thing?

[00:37:29] Mitch: I was an early VR curmudgeon. I was famously cautioning the industry back at Casual Connect, right around the time of what is now Meta, then Facebook, purchasing Oculus, that it would take a long time for VR to become a mass market phenomenon. I was an early mobile pioneer. I started a mobile games company in 2000. This is seven, eight years before the iPhone and 10 years before broad access to in-app purchase, which was really the catalyst for the mobile games business in a lot of ways. I’m a guy who’s gotten out there and taken arrows on the frontier before. I was very cautious because the experience of VR, while visceral and primal in a lot of ways, you put on the headset, you’re in the Jurassic World or wherever you are, there’s something incredibly visceral about that. And I don’t discount the power of the experience, but man, there are a lot of drawbacks to it, like you’re shielded from the rest of the world. You’re in your own little environment. You’re wearing these hot and heavy glasses on your head. The computational power of them is still pretty primitive. The graphical quality of them is still pretty primitive.

I hate to say I was right, but boy, was I right. They dumped 10 billion into this. They just announced a couple months ago they sold what, 14, 15 million units of the Oculus. Some of these games are touting the fact that they have a million monthly active users. It’s like, I’ve got games with a million concurrent users. It’s really stillborn in a lot of ways. And again, there have been success stories. Rec Room, very interesting product, right? Beat Saber, I would argue, is probably the Myst of virtual reality in the sense that it’s selling one to one with the active user base. But when they’re talking about, oh, there’s 120 apps that have made a million dollars, well, you know what, at a $20 to $30 price point, that’s like 30,000 to 50,000 units. That’s failure. I’m not long VR in the short term. With that 10 billion investment, you see the results of it, and Horizon, it’s a joke. It’s just not compelling as a user experience. They’re going to grind. They’ve obviously made a commitment to this and over time, maybe they’re going to get it right. And maybe the Oculus 3 or the Oculus 4 is going to take off. But I wouldn’t be, as an investor, lining up at the gates, getting ready to buy tickets on this ride…

...[00:42:26] Patrick: Both those games you mentioned are probably in that category of what you called forever games. Now those games have been around forever. My son, who’s eight, plays Minecraft all the time and I’m sure it was around before he was born. I’m really interested in what you’ve learned about the features of games that allow them to be forever games. Because if I think about the analogs you brought up earlier in the entertainment world, IP is amazing. You can keep milking the Star Wars or the Marvel or the whatever, but in some ways, it does feel like a bit of a depleting oil well. The marginal Marvel property is just not as good in the last several years. The marginal Star Wars one, in my opinion, the same thing. Whereas League of Legends seems pretty constant in terms of what it is as a thing and as popular as ever. So maybe this is the most valuable form of IP, because it doesn’t require constantly rolling out new stuff that’s drastically different. It seems like if you could own any IP, it would be one of these forever games. What is common that allows a game to be a forever game?

[00:43:24] Mitch: I have five attributes that I look for in these things. I’m not going to share them with you because I’m still an investor in this business and I like to not necessarily open the kimono completely. But I will share some things. So first of all, I think we have a somewhat distorted view of the longevity of games based on the games business. If you look back historically at very successful play patterns, they have incredibly long lifespans. You could have sat down with Leonardo da Vinci and played a credible game of chess. It hasn’t changed that much since the 14th century. You’d have to teach him a couple of rules, but basically it’s the same game. Backgammon is thousands of years old and it’s still played pretty much in the same manner that it was played thousands of years ago. Even more modern examples, whist, which we think of today as contract bridge, is 400 years old. Poker is probably 150 years old. You could have sat down with a frontier cowboy in 1850s Kansas and played poker pretty much the same way you play poker today.

Fun gameplay patterns are incredibly durable. The nature of consumption in the video game business has perverted our understanding of that. It’s made us think that there’s got to be massive turnover and innovation constantly. If you strip away the companies and the IP, and you really look at the play patterns, things like first-person shooters, adventure games, massively multiplayer online role-playing games — MMOs go back to the DikuMUDs that were invented in Copenhagen back in the early ’90s. And frankly, they haven’t changed that much. We’re still using that same design pattern to create modern MMOs. So I think it starts with that: an incredibly durable, replayable design pattern. Community is an aspect of it. The games that have survived the longest have the most vibrant and thriving communities built around them. There are definitely things you can look for when you’re looking for a design that could be one of those forever games.

7. What Makes Your Brain Different From a Neanderthal’s? – Carl Zimmer

Scientists have discovered a glitch in our DNA that may have helped set the minds of our ancestors apart from those of Neanderthals and other extinct relatives.

The mutation, which arose in the past few hundred thousand years, spurs the development of more neurons in the part of the brain that we use for our most complex forms of thought, according to a new study published in Science on Thursday.

“What we found is one gene that certainly contributes to making us human,” said Wieland Huttner, a neuroscientist at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, and one of the authors of the study.

The human brain allows us to do things that other living species cannot, such as using full-blown language and making complicated plans for the future. For decades, scientists have been comparing the anatomy of our brain to that of other mammals to understand how those sophisticated faculties evolved.

The most obvious feature of the human brain is its size — four times as large as that of chimpanzees, our closest living relatives.

Our brain also has distinctive anatomical features. The region of the cortex just behind our eyes, known as the frontal lobe, is essential for some of our most complex thoughts. According to a study from 2018, the human frontal lobe has far more neurons than the same region in chimpanzees does…

…In recent years, neuroscientists have begun investigating ancient brains with a new source of information: bits of DNA preserved inside hominin fossils. Geneticists have reconstructed entire genomes of Neanderthals as well as their eastern cousins, the Denisovans.

Scientists have zeroed in on potentially crucial differences between our genome and the genomes of Neanderthals and Denisovans. Human DNA contains about 19,000 genes. The proteins encoded by those genes are mostly identical to those of Neanderthals and Denisovans. But researchers have found 96 human-specific mutations that changed the structure of a protein.

In 2017, Anneline Pinson, a researcher in Dr. Huttner’s lab, was looking over that list of mutations and noticed one that altered a gene called TKTL1. Scientists have known that TKTL1 becomes active in the developing human cortex, especially in the frontal lobe.

“We know that the frontal lobe is important for cognitive functions,” Dr. Pinson said. “So that was a good hint that it could be an interesting candidate.”

Dr. Pinson and her colleagues did initial experiments with TKTL1 in mice and ferrets. After injecting the human version of the gene into the developing brains of the animals, they found that it caused both the mice and ferrets to make more neurons.

Next, the researchers carried out experiments on human cells, using bits of fetal brain tissue obtained with the consent of women who had abortions at a Dresden hospital. Dr. Pinson used molecular scissors to snip out the TKTL1 gene from the cells in the tissue samples. Without it, the human brain tissue produced fewer of the so-called progenitor cells that give rise to neurons.

For their final experiment, the researchers set out to create a miniature Neanderthal-like brain. They started with a human embryonic stem cell, editing its TKTL1 gene so that it no longer had the human mutation. It instead carried the mutation found in our relatives, including Neanderthals, chimpanzees and other mammals.

They then put the stem cell in a bath of chemicals that coaxed it to turn into a clump of developing brain tissue, called a brain organoid. It generated progenitor brain cells, which then produced a miniature cortex made of layers of neurons.

The Neanderthal-like brain organoid made fewer neurons than did organoids with the human version of TKTL1. That suggests that when the TKTL1 gene mutated, our ancestors could produce extra neurons in the frontal lobe. While this change did not increase the overall size of our brain, it might have reorganized its wiring.
Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. Of all the companies mentioned, we currently have a vested interest in Alphabet (parent of Google), Meta Platforms (parent of Facebook), and Microsoft. Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com