What We’re Reading (Week Ending 29 October 2023)

Reading helps us learn about the world, and it is a really important aspect of investing. The legendary Charlie Munger even goes so far as to say, “I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading.” We (the co-founders of Compounder Fund) read widely across a range of topics, including investing, business, technology, and the world in general. We want to regularly share the best articles we’ve come across recently. Here they are (for the week ending 29 October 2023):

1. CEO/CIO’s Final Investment Note: The Best of Times and The Worst of Times – Chuin Ting Weber

While there is sadness in the hearts of all of us at MoneyOwl, we know that the ups and downs of our journey are but a faint reflection of our larger condition – as a human race, as countries, as societies and as individuals. We all face circumstances that we cannot control, try as we may to do so. The shock of the Israel-Hamas conflict and the accompanying humanitarian disaster, the ongoing Russia-Ukraine war, and the gyrations in big economies both East and West, threaten to shake us and tempt us to despair. On an individual level, some of us may face unexpected shocks, tough times, or just unsettling uncertainties as we move from one season of life to another.

Yet, there must always be some beliefs in our lives that anchor us, so that our core will not be shaken. And as it is with our lives, so it is with investing. Whatever is happening around you and in the world, please remember that the human spirit for recovery and progress has never been quenched by wars, pandemics, natural disasters or man-made crises. COVID-19 was the most recent example, but it was neither the first crisis we have overcome, nor will it be the last. As J.S. Mill put it, writing in a period that Charles Dickens described as both the best and the worst of times:

“What has so often excited wonder, is the great rapidity with which countries recover from a state of devastation… An enemy lays waste a country by fire and sword, and destroys or carries away nearly all the moveable wealth existing in it: all the inhabitants are ruined, and yet in a few years after, everything is much as it was before.”

John Stuart Mill, “Principles of Political Economy”, 1848

When you invest in a globally diversified portfolio of stocks and bonds – instruments that companies and countries issue to finance their economic activities – what you are really investing in is the future of human enterprise. It is a vote of confidence in the human race. In the long run, stock prices are driven by earnings, and earnings by the increase in global aggregate demand, which is in turn driven by a combination of global population growth and the quest for rising standards of living. That is why no matter how bad the crisis, the stock market always recovers and goes up in the long run. This is the reason the stock market has a positive expected return. It is backed by logic and evidence.

The principle, however, does not apply to individual companies, sectors or even countries. It is also not easy to read the tea leaves and catch the short-term ups and downs in order to do better than the market’s long-term return. The best of times often follows the worst of times. We just don’t know when it turns. While being in a bad season is temporary, being out of the market because you timed it wrong is the one sure way of missing out on the recovery.

2. Higher For Longer vs. the Stock Market – Ben Carlson

I don’t know what the bond market is thinking but it’s worth considering the potential for rates to remain higher than we’ve been accustomed to since the Great Financial Crisis. So I used various interest rate and inflation levels to see how the stock market has performed in the past.

Are returns better when rates are lower or higher? Is high inflation good or bad for the stock market?…

…Surprisingly, the best future returns have come from both periods of very high and very low starting interest rates while the worst returns have come during average interest rate regimes.

The average 10-year yield since 1926 is 4.8%, meaning we are right at that long-term average now. Twenty years ago the 10-year Treasury was yielding around 4.3%. Yields have moved a lot since then…

…In that 20-year period the S&P 500 is up nearly 540%, or 9.7% per year. Not bad…
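
The 9.7% figure is simply the compound annual growth rate implied by a 540% total gain over 20 years. A quick sanity check of the arithmetic (our illustration, not from the article itself):

```python
# Annualizing a 540% cumulative gain over 20 years
total_gain = 5.40            # +540% means the ending value is 6.4x the starting value
years = 20
cagr = (1 + total_gain) ** (1 / years) - 1
print(f"{cagr:.1%}")         # ~9.7% per year
```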

…The average inflation rate since 1926 was right around 3%.

These results might look surprising as well. The best forward long-term returns came from very high starting inflation levels. At 6% or higher inflation, forward returns were great. At 6% or lower, they were still pretty good but more like average.
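
For readers who want to reproduce this kind of analysis, here is a minimal sketch of the bucketing approach Carlson describes, assuming a monthly DataFrame with hypothetical column names (`ret` for S&P 500 total returns, `y10` for the starting 10-year yield, `cpi` for trailing inflation); the data source, window length and bucket edges are our assumptions, not his.

```python
import pandas as pd

def forward_annualized_return(returns: pd.Series, months: int = 120) -> pd.Series:
    """Annualized total return over the next `months` months from each starting point."""
    growth = (1 + returns).rolling(months).apply(lambda r: r.prod()).shift(-months)
    return growth ** (12 / months) - 1

def bucket_by_starting_level(df: pd.DataFrame, level_col: str, bins, months: int = 120) -> pd.Series:
    """Average forward return, grouped by the starting level of e.g. yields or inflation."""
    fwd = forward_annualized_return(df["ret"], months)
    buckets = pd.cut(df[level_col], bins=bins)
    return fwd.groupby(buckets).mean()

# Example (hypothetical data): 10-year forward returns bucketed by starting 10-year yield.
# print(bucket_by_starting_level(df, "y10", bins=[0, 0.03, 0.05, 0.07, 0.20]))
```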

So what’s going on here? Why are forward returns better from higher interest rates and inflation levels?

The simplest explanation is we’ve only had one regime of high interest rates over the past 100 years or so and two highly inflationary environments. And each of these scenarios was followed by rip-roaring bull markets. The annual inflation rate reached nearly 20% in the late-1940s following World War II. That period was followed by the best decade ever for U.S. stocks in the 1950s (up more than 19% per year). And the 1970s period of high inflation and rising interest rates was followed by the longest bull market we’ve ever experienced in the 1980s and 1990s.

A simple yet often overlooked aspect of investing is a crisis can lead to terrible returns in the short-term but wonderful returns in the long-term. Times of deflation and high inflation are scary while you’re living through them but also tend to produce excellent entry points into the market…

…It’s also important to remember that while volatility in rates and inflation can negatively impact the markets in the short-run, a long enough time horizon can help smooth things out.

Regardless of what’s going on with the economy, you’ll fare better in the stock market if your time horizon is measured in decades rather than days.

3. Drawdowns – Chris Mayer

A drawdown is how much a stock price declines from its peak before it recovers.
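
As a rough illustration of that definition (our sketch, not from Mayer’s article), a drawdown series can be computed from any price history by comparing each price with the running peak:

```python
import pandas as pd

def drawdowns(prices: pd.Series) -> pd.Series:
    """Percentage decline from the highest price seen so far (0 means a new peak)."""
    running_peak = prices.cummax()
    return prices / running_peak - 1

def max_drawdown(prices: pd.Series) -> float:
    """The worst peak-to-trough decline over the whole series."""
    return drawdowns(prices).min()

# Made-up prices: a rise to 100, an 80% collapse to 20, then a recovery to a new high.
prices = pd.Series([10, 50, 100, 60, 20, 80, 150])
print(max_drawdown(prices))  # -0.8, i.e. an 80% drawdown
```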

Drawdowns are part of the life of every investor. Invariably, if you own a stock for a long time, you are going to have to sit through several…

…A few examples from the book, which was published in 2015:

  • Apple from its IPO in 1980 through 2012 was a 225-bagger. But you had to sit through a peak-to-trough loss of 80% — twice! And there were several 40% drops.
  • Netflix, which has been a 60-bagger since 2002, lost 25% of its value in a single day — four times! And there was a four-month stretch where it dropped 80 percent.
  • And Berkshire Hathaway, the best performing stock in the study, was cut in half four times.

What I found affirmed what Peter Lynch once said: “The real key to making money in stocks is not to get scared out of them.” …

…Not only do the best stocks suffer frequent (and lengthy) drawdowns, but the best investors also suffer drawdowns that would surprise most.

The aforementioned Peter Lynch, for example, had four severe drawdowns during his Hall of Fame run at Fidelity. Even though he returned a mind-boggling 29% annually, he had many drawdowns during those years, including three of more than 20% (one of which was a hair-raising 42% drop in 1987).

In summary: There is no defense against drawdowns if you are committed to a long-term, ownership approach to stocks. (Peter Lynch, by the way, was highly diversified and had a high turnover rate in his career – but still). In fact, I would go so far as to say that the ability to sit through drawdowns with equanimity is a source of outperformance. It is a competitive advantage over those who can’t.

4. NVIDIA CEO Jensen Huang – Ben Gilbert, David Rosenthal, Jensen Huang

David: I love this tee-up of learning but not imitating, and learning from a wide array of sources. There’s this unbelievable third element, I think, to what Nvidia has become today. That’s the data center.

It’s certainly not obvious. I can’t reason from AlexNet and your engagement with the research community, and social media feed […]. You deciding and the company deciding we’re going to go on a five-year all-in journey on the data center. How did that happen?

Jensen: Our journey to the data center happened, I would say almost 17 years ago. I’m always being asked, what are the challenges that the company could see someday?

I’ve always felt that the fact that Nvidia’s technology is plugged into a computer and that computer has to sit next to you because it has to be connected to a monitor, that will limit our opportunity someday, because there are only so many desktop PCs that you can plug a GPU into. There are only so many CRTs and (at the time) LCDs that we could possibly drive.

The question is, wouldn’t it be amazing if our computer didn’t have to be connected to the viewing device? That separation would make it possible for us to compute somewhere else.

One of our engineers came and showed it to me one day. It was really capturing the frame buffer, encoding it into video, and streaming it to a receiver device, separating computing from the viewing.

Ben: In many ways, that’s cloud gaming.

Jensen: In fact, that was when we started GFN. We knew that GFN was going to be a journey that would take a long time because you’re fighting all kinds of problems, including the speed of light and—

Ben: Latency everywhere you look.

Jensen: That’s right.

David: To our listeners, GFN is GeForce NOW.

Jensen: Yeah. GeForce NOW.

David: It all makes sense. Your first cloud product.

Jensen: That’s right. Look at GeForce NOW. It was Nvidia’s first data center product.

Our second data center product was remote graphics, putting our GPUs in the world’s enterprise data centers. Which then led us to our third product, which combined CUDA plus our GPU, which became a supercomputer. Which then worked towards more and more and more.

The reason it’s so important is that if you can separate where Nvidia’s computing is done from where it’s enjoyed, your market opportunity explodes.

And it was completely true, so we’re no longer limited by the physical constraints of the desktop PC sitting by your desk. We’re not limited by one GPU per person. It doesn’t matter where it is anymore. That was really the great observation.

Ben: It’s a good reminder. The data center segment of Nvidia’s business (to me) has become synonymous with how AI is going. And that’s a false equivalence. It’s interesting that you were only this ready to explode in AI in the data center because you had three-plus previous products where you learned how to build data center computers. Even though those markets weren’t these gigantic world-changing technology shifts the way that AI is. That’s how you learned.

Jensen: That’s right. You want to pave the way to future opportunities. You can’t wait until the opportunity is sitting in front of you for you to reach out for it, so you have to anticipate.

Our job as CEO is to look around corners and to anticipate where opportunities will be someday. Even if I’m not exactly sure what and when, how do I position the company to be near it, to be standing right under the tree, so we can make a diving catch when the apple falls? You guys know what I’m saying? But you’ve got to be close enough to do the diving catch.

David: Rewind to 2015 and OpenAI. If you hadn’t been laying this groundwork in the data center, you wouldn’t be powering OpenAI right now.

Jensen: Yeah. But the idea that computing will be mostly done away from the viewing device, that the vast majority of computing will be done away from the computer itself, that insight was good.

In fact, cloud computing, everything about today’s computing is about separation of that. By putting it in a data center, we can overcome this latency problem. You’re not going to overcome the speed of light. Speed of light end-to-end is only 120 milliseconds or something like that. It’s not that long.

Ben: From a data center to—

Jensen: Anywhere on the planet.

Ben: Oh, I see. Literally across the planet.

Jensen: Right. If you could solve that problem, approximately something like—I forget the number—70 milliseconds, 100 milliseconds, but it’s not that long.

My point is, if you could remove the obstacles everywhere else, then the speed of light should be perfectly fine. You could build data centers as large as you like, and you could do amazing things. This little, tiny device that we use as a computer, or your TV as a computer, whatever computer, they can all instantly become amazing. That insight 15 years ago was a good one.
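
The latency figures Jensen cites line up with a rough back-of-the-envelope estimate (our illustration, not his): light in optical fiber travels at roughly two-thirds of its vacuum speed, so a one-way trip halfway around the planet is on the order of 100 milliseconds.

```python
# Rough sanity check of the "speed of light" latency numbers mentioned above
SPEED_IN_FIBER_KM_PER_S = 200_000     # ~2/3 of the speed of light in a vacuum
HALF_EARTH_CIRCUMFERENCE_KM = 20_000  # roughly the farthest two points on the surface

one_way_ms = HALF_EARTH_CIRCUMFERENCE_KM / SPEED_IN_FIBER_KM_PER_S * 1000
print(one_way_ms)  # 100.0 ms, in the same ballpark as the 70-120 ms figures above
```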

Ben: Speaking of the speed of light—David’s begging me to go here—you totally saw that InfiniBand would be way more useful way sooner than anyone else realized. Acquiring Mellanox, I think you uniquely saw that this was required to train large language models, and you were super aggressive in acquiring that company. Why did you see that when no one else saw that?

Jensen: There were several reasons for that. First, if you want to be a data center company, building the processing chip isn’t the way to do it. What distinguishes a data center from a desktop computer or a cell phone is not the processor in it.

A desktop computer in a data center uses the same CPUs, uses the same GPUs, apparently. Very close. It’s not the processing chip that describes it, but it’s the networking of it, it’s the infrastructure of it. It’s how the computing is distributed, how security is provided, how networking is done, and so on and so forth. Those characteristics are associated with Mellanox, not Nvidia.

The day I concluded that Nvidia really wants to build the computers of the future, and that the computers of the future are going to be embodied in data centers, it followed that if we want to be a data center–oriented company, then we really need to get into networking. That was one.

The second thing is the observation that, whereas cloud computing started in hyperscale, which is about taking commodity components, a lot of users, and virtualizing many users on top of one computer, AI is really about distributed computing, where one training job is orchestrated across millions of processors.

It’s the inverse of hyperscale, almost. The way that you design a hyperscale computer with off-the-shelf commodity ethernet, which is just fine for Hadoop, it’s just fine for search queries, it’s just fine for all of those things—

Ben: But not when you’re sharding a model across.

Jensen: Not when you’re sharding a model across, right. That observation says that the type of networking you want to do is not exactly ethernet. The way that we do networking for supercomputing is really quite ideal.

The combination of those two ideas convinced me that Mellanox is absolutely the right company, because they’re the world’s leading high-performance networking company. We worked with them in so many different areas in high performance computing already. Plus, I really like the people. The Israel team is world class. We have some 3200 people there now, and it was one of the best strategic decisions I’ve ever made….

…Ben: Let’s say you do get this great 10-year lead. But then other people figure it out, and you’ve got people nipping at your heels. What are some structural things that someone who’s building a business can do to stay ahead? You can just keep your pedal to the metal and say, we’re going to outwork them and we’re going to be smarter. That works to some extent, but those are tactics. What strategically can you do to make sure that you can maintain that lead?

Jensen: Oftentimes, if you created the market, you ended up having what people describe as moats, because if you build your product right and it’s enabled an entire ecosystem around you to help serve that end market, you’ve essentially created a platform.

Sometimes it’s a product-based platform. Sometimes it’s a service-based platform. Sometimes it’s a technology-based platform. But if you were early there and you were mindful about helping the ecosystem succeed with you, you ended up having this network of networks, and all these developers and customers who are built around you. That network is essentially your moat.

I don’t love thinking about it in the context of a moat. The reason for that is because you’re now focused on building stuff around your castle. I tend to like thinking about things in the context of building a network. That network is about enabling other people to enjoy the success of the final market. That you’re not the only company that enjoys it, but you’re enjoying it with a whole bunch of other people.

David: I’m so glad you brought this up because I wanted to ask you. In my mind, at least, and it sounds like in yours, too, Nvidia is absolutely a platform company of which there are very few meaningful platform companies in the world.

I think it’s also fair to say that when you started, for the first few years you were a technology company and not a platform company. Every example I can think of, of a company that tried to start as a platform company, failed. You’ve got to start as a technology company first.

When did you think about making that transition to being a platform? Your first graphics cards were technology. There was no CUDA, there was no platform.

Jensen: What you observed is not wrong. However, inside our company, we were always a platform company. The reason for that is because from the very first day of our company, we had this architecture called UDA. It’s the UDA of CUDA.

David: CUDA is Compute Unified Device Architecture?

Jensen: That’s right. The reason for that is because what we’ve done, what we essentially did in the beginning, even though RIVA 128 only had computer graphics, the architecture described accelerators of all kinds. We would take that architecture and developers would program to it.

In fact, Nvidia’s first business strategy was we were going to be a game console inside the PC. A game console needs developers, which is the reason why Nvidia, a long time ago, one of our first employees was a developer relations person. It’s the reason why we knew all the game developers and all the 3D developers.

David: Wow. Wait, so was the original business plan to…

Ben: Sort of like to build DirectX.

David: Yeah, compete with Nintendo and Sega, but with PCs?

Jensen: In fact, the original Nvidia architecture was called Direct NV (Direct Nvidia). DirectX was an API that made it possible for the operating system to directly connect with the hardware.

David: But DirectX didn’t exist when you started Nvidia, and that’s what made your strategy wrong for the first couple of years.

Jensen: In 1993, we had Direct Nvidia, which in 1995 became DirectX.

Ben: This is an important lesson. You—

Jensen: We were always a developer-oriented company.

Ben: Right. The initial attempt was we will get the developers to build on Direct NV, then they’ll build for our chips, and then we’ll have a platform. What played out is Microsoft already had all these developer relationships, so you learned the lesson the hard way of—

David: […] did back in the day. They’re like, oh, that could be a developer platform. We’ll take that. Thank you.

Jensen: They did it very differently and did a lot of things right. We did a lot of things wrong.

David: You were competing against Microsoft in the nineties.

Ben: It’s like […] Nvidia today.

Jensen: It’s a lot different, but I appreciate that. We were nowhere near competing with them. If you look now, when CUDA came along and there was OpenGL, there was DirectX, but there’s still another extension, if you will. That extension is CUDA. That CUDA extension allows a chip that got paid for running DirectX and OpenGL to create an install base for CUDA.

David: That’s why you were so militant. I think from our research, it really was you being militant that every Nvidia chip will run CUDA.

Jensen: Yeah. If you’re a computing platform, everything’s got to be compatible. We are the only accelerator on the planet where every single accelerator is architecturally compatible with the others. None has ever existed.

There are literally a couple of hundred million—250 million, 300 million—installed base of active CUDA GPUs being used in the world today, and they’re all architecturally compatible. How would you have a computing platform if NV30 and NV35 and NV39 and NV40 are all different? After 30 years, it’s all completely compatible. That’s the only non-negotiable rule in our company. Everything else is negotiable.

David: I guess CUDA was a rebirth of UDA, but understanding this now, UDA going all the way back, it really is all the way back to all the chips you’ve ever made…

…Ben: Well, as we start to drift toward the end here, we spent a lot of time on the past. I want to think about the future a little bit. I’m sure you spend a lot of time on this being on the cutting edge of AI.

We’re moving into an era where the productivity that software can accomplish when a person is using software can massively amplify the impact and the value that they’re creating, which has to be amazing for humanity in the long run. In the short term, it’s going to be inevitably bumpy as we figure out what that means.

What do you think some of the solutions are as AI gets more and more powerful and better at accelerating productivity for all the displaced jobs that are going to come from it?

Jensen: First of all, we have to keep AI safe. There are a couple of different areas of AI safety that are really important. Obviously, in robotics and self-driving cars, there’s a whole field of AI safety. We’ve dedicated ourselves to functional and active safety, and all kinds of different areas of safety. When do you apply a human in the loop? When is it okay for a human not to be in the loop? How do you get to a point where, increasingly, a human doesn’t have to be in the loop, but a human is largely in the loop?

In the case of information safety, obviously bias, false information, and appreciating the rights of artists and creators, that whole area deserves a lot of attention.

You’ve seen some of the work that we’ve done: instead of scraping the Internet, we partnered with Getty and Shutterstock to create a commercially fair way of applying artificial intelligence, generative AI.

In the area of large language models and the future of AI with increasingly greater agency, clearly the answer, for as long as it’s sensible—and I think it’s going to be sensible for a long time—is human in the loop. The ability for an AI to self-learn, improve, and change out in the wild in a digital form should be avoided. We should collect data. We should carry the data. We should train the model. We should test the model, validate the model before we release it in the wild again. So human is in the loop.

There are a lot of different industries that have already demonstrated how to build systems that are safe and good for humanity. Obviously, the way autopilot works for a plane, two-pilot system, then air traffic control, redundancy and diversity, and all of the basic philosophies of designing safe systems apply as well in self-driving cars, and so on and so forth. I think there are a lot of models of creating safe AI, and I think we need to apply them.

With respect to automation, my feeling is that—and we’ll see—it is more likely that AI is going to create more jobs in the near term. The question is what’s the definition of near term? And the reason for that is the first thing that happens with productivity is prosperity. When the companies get more successful, they hire more people because they want to expand into more areas.

So the question is, if you think about a company and say, okay, if we improve the productivity, then we need fewer people. Well, that’s because the company has no more ideas. But that’s not true for most companies. If you become more productive and the company becomes more profitable, usually they hire more people to expand into new areas.

So long as we believe that there are more areas to expand into, more ideas in drugs and drug discovery, more ideas in transportation, more ideas in retail, more ideas in entertainment, more ideas in technology, so long as we believe that there are more ideas, the prosperity of the industry, which comes from improved productivity, results in hiring more people to pursue more ideas.

Now you go back in history. We can fairly say that today’s industry is larger than the world’s industry a thousand years ago. The reason for that is because obviously, humans have a lot of ideas. I think that there are plenty of ideas yet for prosperity and plenty of ideas that can be begotten from productivity improvements, but my sense is that it’s likely to generate jobs.

Now obviously, net generation of jobs doesn’t guarantee that any one human doesn’t get fired. That’s obviously true. It’s more likely that someone will lose a job to someone else, some other human that uses an AI. Not likely to an AI, but to some other human that uses an AI.

I think the first thing that everybody should do is learn how to use AI, so that they can augment their own productivity. Every company should augment their own productivity to be more productive, so that they can have more prosperity, hire more people.

I think jobs will change. My guess is that we’ll actually have higher employment, we’ll create more jobs. I think industries will be more productive. Many of the industries that are currently suffering from a lack of labor or workforce are likely to use AI to get themselves back on their feet and return to growth and prosperity. I see it a little bit differently, but I do think that jobs will be affected, and I’d encourage everybody just to learn AI…

…David: Well, and that being our final question for you. It’s 2023, the 30th anniversary of the founding of Nvidia. If you were magically 30 years old again today in 2023, and you were going to Denny’s with your two best friends who are the two smartest people you know, and you’re talking about starting a company, what are you talking about starting?

Jensen: I wouldn’t do it. I know. The reason for that is really quite simple. Ignoring the company that we would start, first of all, I’m not exactly sure. The reason why I wouldn’t do it, and it goes back to why it’s so hard, is building a company and building Nvidia turned out to have been a million times harder than I expected it to be, any of us expected it to be.

At that time, if we realized the pain and suffering, just how vulnerable you’re going to feel, and the challenges that you’re going to endure, the embarrassment and the shame, and the list of all the things that go wrong, I don’t think anybody would start a company. Nobody in their right mind would do it.

I think that that’s the superpower of an entrepreneur. They don’t know how hard it is, and they only ask themselves how hard can it be? To this day, I trick my brain into thinking, how hard can it be? Because you have to.

Ben: Still, when you wake up in the morning.

Jensen: Yup. How hard can it be? Everything that we’re doing, how hard can it be? Omniverse, how hard can it be?

David: I don’t get the sense that you’re planning to retire anytime soon, though. You could choose to say like, whoa, this is too hard.

Ben: The trick is still working.

David: Yeah, the trick is still working.

… Jensen: Yeah. The thing to keep in mind is, at all times what is the market opportunity that you’re engaging in? That informs your size. I was told a long time ago that Nvidia can never be larger than a billion dollars. Obviously, it’s an underestimation, under imagination of the size of the opportunity. It is the case that no chip company can ever be so big. But if you’re not a chip company, then why does that apply to you?

This is the extraordinary thing about technology right now. Technology is a tool and it’s only so large. What’s unique about our current circumstance today is that we’re in the business of manufacturing intelligence, of manufacturing work. That’s AI. The world of tasks doing work—productive, generative AI work, generative intelligent work—that market size is enormous. It’s measured in trillions.

One way to think about that is if you built a chip for a car, how many cars are there and how many chips would they consume? That’s one way to think about that. However, if you build a system that, whenever needed, assisted in the driving of the car, what’s the value of an autonomous chauffeur every now and then?

Obviously, the problem becomes much larger, the opportunity becomes larger. What would it be like if we were to magically conjure up a chauffeur for everybody who has a car, and how big is that market? Obviously, that’s a much, much larger market.

What the technology industry has discovered, what Nvidia has discovered, and what some others have discovered, is that by separating ourselves from being a chip company and building on top of the chip, so that you’re now an AI company, the market opportunity has grown by probably a thousand times.

Don’t be surprised if technology companies become much larger in the future, because what you produce is something very different. That’s the way to think about how large your opportunity can be, and how large you can be. It has everything to do with the size of the opportunity.

5. The 4 Billion Pieces of Paper Keeping Global Trade Afloat – Archie Hunter

They are relatively easy to fake. Frequently get lost. And can add huge amounts of time to any journey. Yet paper documents still rule in the $25 trillion global cargo trade with four billion of them in circulation at any one time.

It is a system that has barely changed since the nineteenth century. But that dependence on bits of paper being flown from one party to another has become a vulnerability for companies which move and finance the world’s resources around the globe.

In one high profile case, banks including ING Groep NV discovered in 2020 that they had been given falsified bills of lading — shipping documents that designate a cargo’s details and assign ownership — in return for issuing credit to Singapore’s Agritrade Resources. In another dispute, HSBC Holdings Plc and other banks have spent three years in legal wrangling to recover around $3.5 billion from collapsed fuel trader Hin Leong, which is accused by prosecutors of using “forged or fabricated documentation,” when applying for credit.

The International Chamber of Commerce estimates that at least 1% of transactions in the global trade financing market, or around $50 billion per year, are fraudulent. Banks, traders and other parties have lost at least $9 billion through falsified documents in the commodities industry alone over the past decade, according to data compiled by Bloomberg…

…Less than 2% of global trade is transacted via digital means, but that is set to change. Of the world’s top 10 container shipping lines, nine — which account for over 70% of global container freight — have committed to digitizing 50% of their bills of lading within five years, and 100% by 2030. Some of the world’s biggest mining companies including BHP Group Ltd., Rio Tinto Group, Vale SA and Anglo American Plc have voiced their support for a similar campaign in the bulk shipping industry.

The greatest barrier to that expansion has been legal. Banks, traders, insurers and shipping companies have had the means to go digital, but up to now a paper bill of lading has been the only document recognized by English law that gives the holder title ownership to a cargo. A bank or insurer won’t cover a deal that isn’t legally secure, and without financing, deals are unlikely to happen.

To address that, the UK passed the Electronic Trade Documents Act in July, which gives digital documents the same legal force as paper ones. English law on trade documents goes back centuries. It underpins around 90% of global commodities and other trade contracts. So the UK law change represents a big step. Singapore, another center for maritime law, created a similar legal framework in 2021, conducting its first electronic bill of lading transactions in 2022. Similar legislation is expected in France later this year.

The next challenge will be getting companies to change processes that have been in place for hundreds of years. For all its faults, paper is something that everyone understands and while businesses are happy to join a critical mass of digital trade, few are keen to be the first to take steps in that direction…

…For now, when a cargo of coffee is shipped from Brazil to a roaster like Illycaffe SpA in Europe it sets off a flurry of printing. Three identical bills of lading need to be produced and gradually make their way between sellers, banks and buyers, stopping off at law firms and consultants in order to guarantee the rights to the cargo across its 20 day journey. There are also paper invoices, certificates of analysis, and additional documents to measure weight, origin, packing, and moisture content if it is ores that are being shipped.

It is impossible to accurately calculate how many documents are printed for a given trade route but Brazil exports over 900,000 tons of coffee to the European Union every year. And that represents a lot of paper — McKinsey estimated that at least 28,000 trees a year could be saved by reduced friction in the container trade.

As well as providing details of the cargo, its destination and origin, the documents give the holder ownership rights over whatever is being shipped, crucial for holding transport companies accountable for any damages or loss that might occur, or indeed to give banks and insurers some security when providing hundreds of millions of dollars to finance a single shipment…

…The multi-step process begins when an agent prepares the bill of lading and receives sign-off that all the details are correct from the seller, ship owner, trader and end-buyer. The ship is then loaded and original bills of lading are issued by the vessel’s owner and signed by its captain. The three original bills of lading are then released to the seller — in the Brazilian example, the coffee producer — which passes them on to its financing bank along with additional documents to receive payment. The coffee company’s bank will endorse the bill of lading by writing on the back of it.

In many cases a carrier will need to set sail before this process is complete, so the vessel’s captain will provide a shipping agent with a letter of authority to complete the documents on their behalf.

The next leg of the journey for the bill of lading is from the coffee company’s bank to the trading group’s equivalent via DHL Worldwide Express or FedEx Corp. The trader’s bank then makes the payment to the producer’s bank against the receipt of those documents. Assuming everything is okay at this stage the bank working for the trader endorses the bill of lading, signs, stamps, dates and delivers it to its counterpart representing the buyer of the cargo, which in turn pays the trader’s bank for the goods and hands the bill of lading to the master at the destination port to obtain the release of goods.
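
The endorsement chain described above is essentially a title-transfer workflow. Purely as an illustration (a hypothetical model, not any real system or library), the handoffs can be sketched as a document object whose holder changes with each endorsement:

```python
from dataclasses import dataclass, field

@dataclass
class BillOfLading:
    cargo: str
    issuer: str                        # the vessel's owner, signed off by its captain
    holder: str                        # whoever currently controls title to the cargo
    endorsements: list = field(default_factory=list)

    def endorse(self, from_party: str, to_party: str) -> None:
        """Record an endorsement on the back of the document and pass title on."""
        assert self.holder == from_party, "only the current holder can endorse"
        self.endorsements.append((from_party, to_party))
        self.holder = to_party

# The chain sketched in the article: producer -> producer's bank -> trader's bank -> buyer's bank.
bol = BillOfLading(cargo="coffee, Brazil to EU", issuer="vessel owner", holder="coffee producer")
bol.endorse("coffee producer", "producer's bank")
bol.endorse("producer's bank", "trader's bank")
bol.endorse("trader's bank", "buyer's bank")
print(bol.holder, len(bol.endorsements))  # buyer's bank 3
```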

This pass-the-parcel style approach to the bureaucracy is happening in parallel with the physical goods being loaded, shipped around the world and delivered. Sometimes the documents may only need to move between a small cluster of offices in Geneva or Singapore — where companies across the supply and finance chain have set up offices to be close to one another. But often they are far more tortuous…

…Digital startups like Vakt and ICE Digital Trade offer the opportunity to transfer trade and other financial documents electronically. Bolero has been doing it since the 1990s. Oil majors, such as BP Plc and Shell Plc, traders like Gunvor Group and banks including Société Générale SA have stakes in Vakt, while Intercontinental Exchange bought essDOCS last year for an undisclosed sum, betting that the move online will accelerate.

But the lack of public examples highlights the uphill battle to full adoption of electronic bills of lading. Trafigura used essDOCS for an Australian iron ore shipment back in 2014. Taiwanese shipping line Wan Hai used Bolero’s electronic bill of lading for a polyester filament trade to China in 2018.

This low take-up is largely due to the continued lack of legal recognition in many jurisdictions; banks, which finance the cargoes in transit, will in most cases not accept a digital bill of lading as collateral. Advocates say the UK law reform should change that.


Disclaimer: None of the information or analysis presented is intended to form the basis for any offer or recommendation. We currently have a vested interest in Apple, Microsoft, and Netflix. Holdings are subject to change at any time.

Ser Jing & Jeremy
thegoodinvestors@gmail.com