The ROI of AI (It's a Dollar Auction)
The hardest question in the world is starting to impact share prices. What exactly is the ROI of AI?
The single most complicated question to answer right now is what exactly the ROI of AI is. You can easily argue that there is some value, and that simply running inference on a model for money can be done quite profitably. The problem is that when you peer a little closer at the training costs, particularly the cost of scaling models at the frontier, the question gets much harder to answer. What exactly is the ROI?
What’s more, in recent weeks, we’ve seen some of the first signs of people questioning the spending that’s being pursued by the hyperscalers and AI labs to make frontier models. So what gives? Today’s note will recap the hyperscale capex spend (so far) and my stab at addressing the question. I don’t exactly have an answer.
Recent Questions about Investment ROI
First came Zuck's comments in the Bloomberg interview, where he openly addressed the possibility that AI could be a bubble: valuable, but with a real chance of overspending.
I think bubbles are interesting because a lot of the bubbles ended up being things that were very valuable over time and it's just more of a question of timing, like you're asking, right? Even the dot com bubble, you know, it's like there's all this fiber laid and it ended up being super valuable, but it just wasn't as valuable as quickly as people thought.
So, is that gonna happen here? I don't know. I mean, it's hard to predict what's gonna happen in the next few years. I think AI is gonna be very fundamental. I think that there's a meaningful chance that a lot of the companies are overbuilding now and that you look back and you're like, Oh, we maybe all spent some number of billions of dollars more than we had to. But on the flip side, I actually think all the companies that are investing are making a rational decision because the downside of being behind is that you're out of position for, like, the most important technology for the next 10 to 15 years.
This all but confirms that the spending could have a horrible near-term ROI, but that if you believe in the long run, you should do it anyway.
What’s more, Google's Sundar Pichai said on the earnings call that the risk is to invest too little, not too much.
I think the one way I think about it is when we go through a curve like this, the risk of under-investing is dramatically greater than the risk of over-investing for us here, even in scenarios where if it turns out that we are over-investing, we clearly -- these are infrastructure which are widely useful for us. They have long useful lives, and we can apply it across, and we can work through that. But I think not investing to be at the front here, I think, definitely has much more significant downside.
And blind belief in scaling laws delivering better models is essential. The models must keep improving, or all this capex will probably not work out.
To your second question on whether -- how do the scaling laws hold? Are we hitting on some kind of wall or something? Look, I think we are all pushing very hard, and there's going to be a few efforts which will scale up on the compute side and push the boundaries of these models. What I would tell is regardless of how that plays out, I still think there is enough optimizations we are all doing, which is driving constant progress in terms of the capabilities of the models. And more importantly, taking them and translating into real use cases across the consumer and enterprise side, I think on that frontier, I think there's still a lot of progress to be had. And so we are pretty focused on that as well.
Meanwhile, Microsoft keeps highlighting that demand signals drive the investments and that it is still supply-constrained. So, while there are many questions, at the very least, Microsoft is trying to say it is reacting to the market.
So as we begin FY '25, we will continue to invest in the Cloud and AI opportunity ahead, aligned and, if needed, adjusted to the demand signals we see. We are committed to growing our leadership across our Commercial Cloud and within that, the AI platform, and we feel well positioned as we start FY '25.
This includes both AI demand impacted by capacity constraints and non-AI growth trends similar to June. Growth in our per user business will continue to moderate.
With all this spending and the race to not be left out of the next platform, I think there is a much simpler and more human framing of the AI capital cycle (or bubble): viewing it as a dollar auction.
Framing Hyperscaler Capex as a Dollar Auction
I think some of the spending is driven by the fear of being locked out of the next generation of technology. As Sundar said, the risk of underinvesting is much higher than that of overinvesting. If you are locked out of the next platform because your competitor owns it, your business is relegated to a back seat in the AI wave.
I think Mark at SIGGRAPH put it best. He talked extensively about the pain of selling your product on someone else’s platform, and at the thought of having to use someone else’s models for his own business, he said, “fuck that.” That is how vehemently he felt about being trapped on someone else’s platform.
The problem is that everyone is racing to that next big thing. If a competitor gets there first with a model that creates a killer app good enough to make information processing (Microsoft), information search (Google), or sharing things online (I guess Meta) obsolete, the incumbent loses its franchise. That is why the AI capex problem is best explained as a dollar auction: the risk of not investing enough is much worse than the risk of investing a bit too much.
In a dollar auction, the dollar goes to the highest bidder, but both the winner and the runner-up pay their bids. Whoever wins gets the entire dollar, and it can make sense to pay more than a dollar to minimize your loss. If you're at 95 cents and think you might lose, paying, say, a dollar and 15 cents to secure the dollar and eat a 15-cent loss beats conceding and losing the full 95 cents you already bid. Refer to this helpful dinosaur graphic about a dollar auction's bidding logic, and the small sketch of the escalation below.
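To make the escalation mechanic concrete, here is a minimal sketch (my illustration, not from the post or any game-theory library) of two myopic bidders in a dollar auction. The prize value, the 5-cent raise, and the stopping rule are all illustrative assumptions; the point is only that comparing "raise and maybe win" against "concede and eat my sunk bid" pushes bids past the value of the prize.

```python
# A minimal sketch (illustrative, not from the original post) of why a dollar auction escalates.
# Assumptions: two bidders, a $1.00 prize, fixed 5-cent raises, and the dollar-auction
# rule that BOTH the winner and the runner-up pay their final bids.

PRIZE = 1.00
INCREMENT = 0.05

def should_raise(my_bid: float, rival_bid: float) -> bool:
    """Myopic rule: raise if winning at the new bid beats conceding and eating my sunk bid."""
    payoff_if_concede = -my_bid                                # walk away, lose what I've bid
    payoff_if_raise_and_win = PRIZE - (rival_bid + INCREMENT)  # win the prize at the higher bid
    return payoff_if_raise_and_win > payoff_if_concede

def run_auction(max_rounds: int = 40) -> None:
    bids = [0.0, 0.0]  # standing bids for bidder 0 and bidder 1
    for turn in range(max_rounds):
        me, rival = turn % 2, (turn + 1) % 2
        if not should_raise(bids[me], bids[rival]):
            print(f"Bidder {me} concedes at ${bids[me]:.2f}; bidder {rival} wins at ${bids[rival]:.2f}")
            return
        bids[me] = bids[rival] + INCREMENT
        print(f"Bidder {me} bids ${bids[me]:.2f}")
    print(f"After {max_rounds} rounds the top bid is ${max(bids):.2f}, well past the $1.00 prize.")

if __name__ == "__main__":
    run_auction()
```

Because each raise is weighed only against the sunk cost of conceding, the bidding never stops at a "sensible" number. Swap the bidders for hyperscalers and the 5-cent raises for frontier-model capex and you get the loss-minimization logic in the quotes above.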
Competitive dynamics look even worse when viewed from a loss-mitigation perspective. The five largest tech companies in the world have everything to lose and comparatively little to gain, which distorts the auction dynamics: the "risk" of underinvesting is framed as losing the dollar auction, and a guaranteed win you overpaid for looks much better than walking away with nothing. The comments from Mark Zuckerberg and Sundar confirm this.
But besides the twisted dollar-auction logic, there is something I keep asking myself: is the massive consensus right that AI is going to be as game-changing as many people think? I struggle with this one because, frankly, are we even bidding for a dollar? I keep returning to history, and today I will drop a little history-lesson analogy that I think is appropriate. Think of this as a more practical update to my telecom bubble post.
When you look at history, one of the best ways to orient yourself is to study the clumsy analogies used at the time and what the skeptics said. We are finally getting skepticism (thank God), which helps me better understand "where we are" in this capital cycle.
But the funny little period I keep coming back to is the Information Superhighway phase of the internet, when the world was still grasping at what the impact of the internet could be.
The Information Superhighway (Early Internet)
Sometimes, when I listen to the conversation about AI, I struggle to understand its killer use. We know that it’s a transformative technology, and I use it for editing, drafting, idea generation, and, frankly, a more pointed answer machine in the case of search. It doesn’t feel ready for whatever hopes are pinned on it, but then again, neither did the internet in the early days.
I am comforted by the clumsy term "information superhighway" that emerged in the early 1990s. Most of the vision of the internet was put succinctly and correctly in this paper from 1994, but I don't think the killer app was precisely described. The number of possible applications is endless, and many have yet to be thought of, yet this, in broad strokes, got the internet "right." That is similar to how we are grasping at AI today.
Today, there seems to be a consensus that AI is the next big thing. One of my favorite (locked) Twitter (or X) accounts keeps pointing out that they are unsure if they’ve ever seen such a strong consensus. To not believe in AI feels like blasphemy at the corporate level. Unlike the internet, where adoption happened from the bottom up at corporates, it seems to be happening from the top down, according to CIO surveys.
Corporations feel like they are grasping for the next generation of technology but can't articulate it well. It reminds me of CEOs reaching for the "Information Superhighway" before the broad colloquial term we use today, "the Internet," took hold. The way that term was used then feels a lot like how we grasp at the AI opportunity today, which is to say rather poorly. There were articles about laying fiber-optic cable for the Information Superhighway as early as 1983, and the term would eventually morph into "the Internet."
But if you read books from the time, the Information Superhighway was often described by porting over the dominant platform of the day: more TV channels, with the television itself as the device you would use to purchase, interact, and get online.
Now, since we live in the outcome, we know that isn't how it turned out. And that is how I suspect we are pursuing AI today: we compare it to the internet, but in reality the eventual usage will be different than we think. Thinking about AI in terms of the internet might be like thinking about the internet in terms of TV; the actual outcome will diverge from the analogy, much like how the Information Superhighway delivered through your television never materialized.
The internet ended up being the only winner of the three network worlds shown below; the biggest and newest network eats the previous ones.
Who knows who will "win"? Like in the Information Superhighway era, it's clear that we are grasping at straws, even though we can see the potential of AI. And we are likely still a long way from real ROI.
Now, let's come back to today. The recent AI survey by the Census Bureau pretty much nailed my estimate that about 4% of people use AI today. It is still early days, and more often than not, AI feels like a solution in search of a problem. We are just starting to understand how to use it.
And frankly, you could say the same for the internet in the 1990s! You could order things online, but more often than not, just calling your local pizza place was much faster than ordering through something like PizzaNet in 1994. Eventually, the internet did catch up, and now Domino's generates more than 85% of its retail sales via digital channels. It just took 20 years to get there.
I think AI is in the same era of adoption. It's very early; no one knows the exact impact, but everyone can intuitively feel it should be massive, and it's very clear that the big players don't want to miss the boat. They will likely overspend to secure the win and minimize losses rather than underspend. So, while we are in the era of believing AI has value, I don't think we can quite see that value yet, nor do we have the widespread usage that eventually arrived with the internet.
Besides the "where are we" dialogue, the interesting development is that we are starting to see an important counterpoint to the prevailing AI theme from critics. And it's no surprise; there has never been a significant change without some skepticism.
Criticism of AI Spending and Why Betting Against Technology Sucks
We are starting to see something that we've notably been lacking: critics. Now, I will leave the environmental concerns aside (which are valid as hell; go nuclear!) and focus primarily on the ROI side of this. I will start with Sequoia, which recently published this piece on the $600 billion question, asking where the payback will come from.
Now, the first problem is that I think there are some bad assumptions, namely that AI requires a 50% gross margin. I also have to highlight that the same author posted a $200 billion version of the question less than a year earlier. I don't think the ROI will materialize within a few years; it will play out over decades, with a giant capital cycle in between. A rough sketch of that style of payback math follows below.
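To show why the margin assumption matters, here is a rough, purely illustrative sketch of the style of back-of-the-envelope math behind headline numbers like the $600 billion question. The dollar figures and the 2x data-center multiplier are placeholder assumptions of mine, not Sequoia's exact inputs.

```python
# Illustrative capex-payback arithmetic in the spirit of the "$X billion question" pieces.
# Every number here is a placeholder assumption, not a figure from the Sequoia post.

gpu_capex = 150e9             # assumed annual spend on AI accelerators
datacenter_multiplier = 2.0   # assume chips are roughly half of total data center cost
assumed_gross_margin = 0.50   # the gross-margin assumption questioned in the text

total_infra_cost = gpu_capex * datacenter_multiplier
# Revenue the sellers of AI would need so that this cost is covered at that margin:
required_revenue = total_infra_cost / (1 - assumed_gross_margin)

print(f"Total infrastructure cost: ${total_infra_cost / 1e9:.0f}B")
print(f"Required AI revenue at {assumed_gross_margin:.0%} gross margin: ${required_revenue / 1e9:.0f}B")
# At a 50% margin the required revenue is 2x the infrastructure cost; relax the margin
# assumption and the headline number shrinks, which is why that assumption does so much work.
```

The point is not the specific total but that the headline scales directly with the margin assumption being criticized.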
I can predict that another one will come, likely with a much larger number: $1 trillion. All of that criticism makes sense from a raw ROI perspective. But if you think about the dollar auction, and the fact that Mark Zuckerberg is more than willing to say bubbles create valuable things over time, the irrationality of humans will likely outweigh the logic. While I think there will be a huge capital-formation bubble in between, simply pointing at big numbers and saying they don't make sense will not fly.
To put it succinctly, I agree there is an ROI issue, but poor near-term ROI has never stopped humans from building big things anyway. The internet and the fiber underneath it were a huge boon to society, just a poor investment for the people who funded the buildout. And when it comes to big societal technology shifts, you can argue the appetite to spend has always been strong.
I mean, look at defense spending and NASA during the Cold War. Can you argue it was a positive ROI for anyone except the defense companies? The United States spent roughly 4% of the federal budget on NASA alone at the peak of the space race. While that led to great scientific advances, I don't think you could argue much ROI was realized. The nation spent that much out of fear for its security and of not keeping up, and you can see the same fear already taking hold in the AI race with China.
Betting against the human desire to do something is foolish, especially when some of those humans want to raise a machine god. Many AI labs do not care about ROI; they are zealots about what they are doing. In their view, AGI will forgive all faults, and the fear of being left behind the zealots keeps many stragglers in the game, because the perceived risk of not playing is astronomical.
What's more, the history of predictions is full of examples of new technologies attracting haters once they pass a certain adoption point. The fact that AI has haters is pretty great; the people asking in 1982 why we should have PCs at all turned out to be wrong.
During the automotive revolution, people thought that horseless carriages were fads.
And, of course, the best one yet:
A winner of the Nobel Prize in Economics, Paul Krugman wrote in 1998, “The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”
New technologies attract haters, and in the short term, the haters are often right; societal change takes a long time. In the long term, though, betting against progress is a bad game, as technology and human (and machine) ingenuity always seem to find a way.
That said, I don't think this ends without tears. I am sure you've seen me mention the problem with applying logic to a bubble, and that's precisely what I think this is or could become. But here's the thing: if this is a bubble, it will probably end up bigger than the last telecom bubble, which saw approximately $900 billion in spending.
That's right: AI spending will be much more significant. And now I think I see the path forward, narrow as it is, along with a meaningful catalyst that could and should change the nature of the AI race.
Given some history and internet analogies, my thoughts are behind the paywall. (Sorry!)