Nvidia: Better than Dreamed
Think about how good Nvidia's earnings are. They're even better than that. Think about it again. Better still. They're that good. Jokes aside, here's my take.
Nvidia, in characteristic fashion, murdered EPS estimates. Revenue came in more than $2 billion ahead of the guide, and the new Q3 guide is another ~$3.4 billion ahead of next-quarter consensus.
NVIDIA reports Q2 EPS $2.70 ex-items vs. FactSet $2.08
Reports Q2:
Revenue $13.51B vs. FactSet $11.19B
Q3 Guidance:
Revenue $16.00B +/- 2% vs. FactSet $12.59B
Non-GAAP gross margin 72.5% +/- 50bp vs. consensus 70.6%
Non-GAAP operating expenses $2.00B vs. consensus $2.16B
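To put the print in context, here's a quick back-of-the-envelope pass over the figures quoted above (a sketch, not a model; it uses only the FactSet numbers as listed):

```python
# Back-of-the-envelope math on the print, using only the figures quoted above.
q2_revenue = 13.51     # $B, Q2 actual
q2_consensus = 11.19   # $B, FactSet
q3_guide = 16.00       # $B, midpoint of the new Q3 guide
q3_consensus = 12.59   # $B, FactSet

q2_beat = q2_revenue - q2_consensus        # ~$2.3B above consensus
q3_gap = q3_guide - q3_consensus           # ~$3.4B above consensus
implied_qoq = q3_guide / q2_revenue - 1    # ~18% QoQ at the guide's midpoint

print(f"Q2 beat vs. consensus:  ${q2_beat:.2f}B")
print(f"Q3 guide vs. consensus: ${q3_gap:.2f}B")
print(f"Implied QoQ growth:     {implied_qoq:.1%}")
```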
Going into the print, many people were concerned that positioning was stretched, i.e., everyone was already bullish and expectations were too high. Well, Nvidia did not disappoint. The numbers came in above even the high end of buy-side expectations for the next-quarter guide, as captured in a Bank of America poll.
I am not shocked; I have been writing about this. Last quarter, I was sure the next quarter would be a beat. I didn't make it that explicit, but Jensen never misses until he guides you down 40%, and we are far from that part of the cycle. More beats are on the horizon.
The biggest “debate” in this calculation is likely my assumption of continued sequential revenue growth over the next few quarters, but given the step up in demand, I think it's unlikely they fail to grow sequentially. And remember, this is Nvidia, a company that has missed revenue estimates only three times since 2011.
What's more, on the capex front, I expected at the very least a 30% QoQ increase in revenue next quarter, given the acceleration of hyperscaler capex. That's probably more than enough for a revenue beat at Nvidia, and it implied a ~30% QoQ guide for Q3, meaningfully above the street's ~10% QoQ estimate. Next quarter should be a blowout given the sheer number of incremental capex dollars being added. Google might not contribute as much given their TPUs, but some of their spending will go to GPUs as well.
As it turned out, the actual guide is more like 20% QoQ, but I think, once again, Jensen and the team are sandbagging. This is the same company that told you it would make $11.5 billion this quarter and then overshot by a measly $2 billion. Apply a similar beat to the $16 billion guide and you get to ~$18 billion. That's what I expect.
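A minimal sketch of that extrapolation (treating this quarter's ~$2 billion beat as a repeatable run rate, which is an assumption, not a given):

```python
# Sandbagging math: apply this quarter's beat to next quarter's guide.
q3_guide = 16.0        # $B, midpoint of the official Q3 guide
assumed_beat = 2.0     # $B, assumption: roughly this quarter's beat repeats
q2_revenue = 13.51     # $B, Q2 actual

q3_estimate = q3_guide + assumed_beat        # ~$18B
implied_qoq = q3_estimate / q2_revenue - 1   # ~33% QoQ if the beat repeats

print(f"Q3 estimate: ${q3_estimate:.1f}B ({implied_qoq:.0%} QoQ)")
```

That ~33% lands right on the 30%+ sequential growth I expected going in.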
Now, let's unpack the commentary from the call.
It's Hard to Know if It's Sustainable
I think the biggest question on investors' minds is, of course, the unanswerable one. When asked whether CSPs and customers could be over-ordering, with a demand cliff next year once demand is met, we got the answer everyone fears: it's almost impossible to estimate true end-state demand. Jensen, in true Jensen fashion, says there's a platform shift underway and we are in the early days of replacing the $1 trillion server market. Where we are in that shift is unknowable, even to Jensen.
And in combination with HP, Dell, and Lenovo's new server offerings based on L40S, any enterprise could have a state-of-the-art AI data center and be able to engage generative AI. And so I think the answer to that question is hard to predict exactly what's going to happen quarter-to-quarter. But I think the trend is very, very clear now that we're seeing a platform shift.
Speaking of the L40S, it is not CoWoS-constrained and is likely part of the incremental beat this quarter. It also seems TSMC's CoWoS capacity is ramping much faster than investors expected, with two new greenfield CoWoS facilities coming online. Colette Kress (CFO of Nvidia) said they are meaningfully growing capacity but did not quantify by how much.
This quarter, they broke out revenue by customer type: a bit over 50% of data center revenue comes from CSPs, meaning Google, Microsoft, Amazon, CoreWeave, and Oracle Cloud. The next largest category, consumer Internet companies, implies Meta.
So thank you, Toshiya, on the question regarding our types of customers that we have in our Data Center business. And we look at it in terms of combining our compute as well as our networking together. Our CSPs, our large CSPs, are contributing a little bit more than 50% of our revenue within Q2. And the next largest category will be our consumer Internet companies. And then the last piece of that will be our enterprise and high-performance computing.
There's been a bit of confusion about how much revenue is H100 and HGX versus DGX (sold directly by Nvidia). DGX carries a higher margin and bundles software revenue that is extremely hard to model, which could account for some of the incremental beat.
Our DGXs are always a portion of additional systems that we will sell. Those are great opportunities for enterprise customers and many other different types of customers that we're seeing even in our consumer Internet companies.
The importance there is also coming together with software that we sell with our DGXs, but that's a portion of our sales that we're doing. The rest of the GPUs, we have new GPUs coming to market that we talked about, the L40S, and they will add continued growth going forward. But again, the largest driver of our revenue within this last quarter was definitely the HGX systems.
Similar to Apple's services, I believe DGX will at some point become a meaningful driver. As enterprises go directly to Nvidia and adopt parts of its platform and software, that shows up as incremental revenue. H100 and other hyperscaler-tuned products are less off-the-shelf; they are components built into larger systems. DGX is off-the-shelf, includes some platform-as-a-service revenue, and is likely the better fit for enterprises, which today account for a small minority of revenue.
And part of our opening remarks that we made as well, remember, software is a part of almost all of our products, whether they're our Data Center products, GPU systems or any of our products within Gaming and our future Automotive products. You're correct, we're also selling it in a standalone business. And that standalone software continues to grow where we are providing both the software services, upgrades across there as well.
Now we're seeing, at this point, probably hundreds of millions of dollars annually for our software business, and we are looking at NVIDIA AI Enterprise to be included with many of the products that we're selling, such as our DGX, such as our PCIe versions of our H100. And I think we're going to see more availability even with our CSP marketplaces. So we're off to a great start, and I do believe we'll see this continue to grow going forward.
Hundreds of millions of dollars in annual software revenue won't move the needle; it's a rounding error compared to the ~$10 billion per quarter they are currently making in the data center segment.
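To make the scale mismatch concrete (a rough illustration; the $300 million annual run rate is an assumed midpoint of "hundreds of millions"):

```python
# Scale check: annual software run rate vs. quarterly data center revenue.
software_annual = 0.3   # $B/year, assumed midpoint of "hundreds of millions"
dc_quarterly = 10.3     # $B/quarter, roughly the ~$10B data center figure above

software_quarterly = software_annual / 4     # ~$0.075B per quarter
share = software_quarterly / dc_quarterly    # well under 1%

print(f"Software is ~{share:.1%} of quarterly data center revenue")
```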
I will leave you with an extremely long-winded response on why Nvidia has a competitive advantage. Jensen can be wordy, but this is the distillation of why Nvidia wins.
So from multiple instances per GPU to multiple GPUs, multiple nodes to entire data center scale. So this run time called NVIDIA AI Enterprise has something like 4,500 software packages, software libraries and has something like 10,000 dependencies among each other. And that run time is, as I mentioned, continuously updated and optimized for our installed base, for our stack.
I would say, number 1 is architecture. The flexibility, the versatility and the performance of our architecture makes it possible for us to do all the things that I just said, from data processing to training to inference, for preprocessing of the data before you do the inference, to the post processing of the data, tokenizing of languages so that you could then train with it. The amount of -- the workflow is much more intense than just training or inference.
The second characteristic of our company is the installed base. You have to ask yourself, why is it that all the software developers come to our platform? And the reason for that is because software developers seek a large installed base so that they can reach the largest number of end users, so that they could build a business or get a return on the investments that they make.
And then the third characteristic is reach. We're in the cloud today, both for public cloud, public-facing cloud because we have so many customers that use it -- so many developers and customers that use our platform. CSPs are delighted to put it up in the cloud. They use it for internal consumption to develop and train and to operate recommender systems or search or data processing engines and whatnot all the way to training and inference.
And so reach is another reason. And because of reach, all of the world's system makers are anxious to put NVIDIA's platform in their systems. And so we have a very broad distribution from all of the world's OEMs and ODMs and so on and so forth because of our reach.
And then lastly, because of our scale and velocity, we were able to sustain this really complex stack of software and hardware, networking and compute and across all of these different usage models and different computing environments. And we're able to do all this while accelerating the velocity of our engineering.
It seems like we're introducing a new architecture every 2 years. Now we're introducing a new architecture, a new product just about every 6 months. And so these properties make it possible for the ecosystem to build their company and their business on top of us. And so those, in combination, makes us special.
Jensen is describing an ecosystem in acceleration. Yes, companies are trying to replace Nvidia's CUDA stack with Triton and other open-source solutions, but the reality is that Nvidia's momentum and dominance are unmatched. They have the best hardware and software, the largest installed base, and scale at both CSPs and ODMs, and they continue to update those 4,500 software packages. This is the result of a decade of work to be in the right place with the right infrastructure, and now it's Nvidia's world to harvest.
As I wrote in the piece where I called Nvidia a three-headed hydra, you must beat each part of the ecosystem to create a better solution. Meanwhile, Nvidia’s ecosystem is closed, and most of the value that Nvidia creates is accruing to them.
Lastly, I want to dwell on that final statement about a new product every six months, because there is an important product on the way. The DGX GH200, a giant memory-pooled GPU-CPU hybrid, will be released by the end of the year. Google, Meta, and Microsoft are first in line, and in my opinion, adopting this product could drive another acceleration in revenue if budgets permit. Nvidia will be selling more raw silicon content per system, including a CPU in this configuration.
We announced the DGX GH200, a new class of large-memory AI supercomputer for giant AI language models, recommender systems, and data analytics. This is the first use of the new NVIDIA NVLink Switch System, enabling all of its 256 Grace Hopper Superchips to work together as one, a huge jump compared to our prior generation connecting just 8 GPUs over NVLink. DGX GH200 systems are expected to be available by the end of the year, with Google Cloud, Meta, and Microsoft among the first to gain access.
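The scale jump is worth quantifying. A rough sketch using the 256-vs-8 figure from the quote; the per-superchip memory is an outside assumption based on Nvidia's published GH200 specs, not something said on the call:

```python
# Scale of the DGX GH200 NVLink domain vs. the prior 8-GPU generation.
superchips = 256            # Grace Hopper Superchips per system (from the call)
prior_gpus = 8              # GPUs per NVLink domain previously (from the call)
mem_per_superchip_gb = 576  # assumption: ~96GB HBM3 + ~480GB LPDDR5X per GH200

domain_scaleup = superchips / prior_gpus                      # 32x larger domain
pooled_memory_tb = superchips * mem_per_superchip_gb / 1024   # ~144TB addressable

print(f"NVLink domain: {domain_scaleup:.0f}x larger")
print(f"Pooled memory: ~{pooled_memory_tb:.0f}TB")
```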
And we should expect a new product announcement in the first half of 2024. Hopper-Next is allegedly much more tailored to generative AI than the generalized H100. That product would likely open another gap over competitors, and the question is not whether there is demand for Nvidia's products, but whether customers can justify the spend. It's still pretty much a single company in control: Nvidia. Impressive results. This was better than dreamed.
Read behind the paywall for premium (and thus very valuable) thoughts. I take a stab at EPS and have thoughts on peer AMD.