Nvidia's China Problems, Applied Materials, and Microsoft's Accelerators
Nvidia surprised everyone with a boring result! Applied Materials is an overreaction.
Aiera sponsors this week’s post. Aiera is a real-time transcription and event service, and their claim to fame is the “Red Zone” of earnings. I get offered sponsored content often but rarely take it up unless I believe in the product. Aiera is one of those products. Ever want to listen to one earnings call and skim what’s going on in another at the same time? Aiera was made for that. Check them out!
Nvidia’s China Quarter
Nvidia’s quarter was surprising to me because it was boring. There were a few incremental pieces, but the big news was everything to do with China. As you know, there was another round of export restrictions with a myopic focus on AI accelerators.
This impacted results and the outlook.
Toward the end of the quarter, the U.S. government announced a new set of export control regulations for China and other markets, including Vietnam and certain countries in the Middle East. These regulations require licenses for the export of a number of our products including our Hopper and Ampere 100 and 800 series and several others. Our sales to China and other affected destinations derived from products that are now subject to licensing requirements have consistently contributed approximately 20% to 25% of data center revenue over the past few quarters. We expect that our sales to these destinations will decline significantly in the fourth quarter, though we believe they'll be more than offset by strong growth in other regions.
But here’s the thing: they still handily beat and raised estimates. Shares reacted poorly because this was below buy-side bogeys.
NVIDIA reported Q3 EPS of $4.02 ex-items vs. $3.37 consensus
Q3 revenue of $18.12B vs. $16.19B consensus
Q4 revenue guidance of $20.00B +/- 2% vs. $17.96B consensus
What’s so staggering about this result is that 20% to 25% of data center revenue is going to roughly zero, and they will still ship ~$20 billion in revenue next quarter. Stacy Rasgon asked a question along these lines, and the answer was that they would have smashed results even further if it weren’t for the export restrictions.
But with the absence of China, for our outlook for Q4, sure, there could have been some things that we are not supply constrained that we could have sold to China but we no longer can. So could our guidance have been a little higher in our Q4? Yes. We are still working on improving our supply and plan on continuing and growing all throughout next year as well towards that.
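To make the scale concrete, here is a back-of-envelope sketch of the implied demand shift. The revenue and guidance figures come from the results above; the data-center share of total revenue is my own assumption for illustration, not a company-disclosed number.

```python
# Back-of-envelope: how much non-China demand does the Q4 guide imply
# once China (~20-25% of data center revenue) goes to ~zero?
# q3_revenue and q4_guide are from the post; dc_share is an assumption.

q3_revenue = 18.12   # $B, reported Q3 revenue
q4_guide = 20.00     # $B, midpoint of Q4 guidance
dc_share = 0.80      # ASSUMED data-center share of total revenue

dc_revenue = q3_revenue * dc_share
for china_share in (0.20, 0.25):
    china_revenue = dc_revenue * china_share
    # Revenue that must be replaced AND grown on top of, ex-China
    implied_new_demand = q4_guide - (q3_revenue - china_revenue)
    print(f"China at {china_share:.0%} of DC: ~${china_revenue:.1f}B lost, "
          f"~${implied_new_demand:.1f}B incremental non-China demand")
```

Under these assumptions, roughly $3-4B of quarterly revenue disappears, yet guidance still steps up, implying something like $5B of fresh demand filled in from other regions in a single quarter.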
There is hope, as they expect to ramp new rule-compliant replacement products sometime next year.
The export controls will have a negative effect on our China business, and we do not have good visibility into the magnitude of that impact even over the long term. We are, though, working to expand our data center product portfolio to possibly offer new regulation-compliant solutions that do not require a license. These products, they may become available in the next coming months. However, we don't expect their contribution to be material or meaningful as a percentage of the revenue in Q4.
Now I have to ask: are all the loopholes closed? Another curious thing about this quarter is how large Singapore has become as a percentage of sales; in the most recent quarter it comprised ~14% of sales, up from 9% a year ago. One possible loophole: cloud computing services in Singapore leased by Chinese entities.
And if there’s been one constant in my experience following Nvidia, it’s that they have serious channel issues; knowing what, and who, is actually buying their GPUs has never been Nvidia’s forte.
On that note, I want to mention a troubling sign: nation-states are stepping into the game. Going into the AI craze, I thought this would be one of the signals that we are in the late innings. The last and final distribution channel would be nation-states piling in with FOMO. I would not be surprised if the India and France GPU clouds are almost worthless in a few years. These are GPUs looking for problems to solve, and there is little, if any, LLM expertise in those countries. This is overbuilding 101.
Many countries are awakening to the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation. With investments in domestic compute capacity, nations can use their own data to train LLMs and support their local generative AI ecosystems. For example, we are working with India's government and largest tech companies, including Infosys, Reliance and Tata to boost their sovereign AI infrastructure. And French private cloud provider, Scaleway is building a regional AI cloud based on NVIDIA H100, InfiniBand and NVIDIA AI Enterprise software to fuel advancement across France and Europe. National investment in compute capacity is a new economic imperative, and serving the sovereign AI infrastructure market represents a multibillion-dollar opportunity over the next few years.
Last but not least, I wanted to talk about something they spent a lot of time talking about on the earnings call: software improvements. TensorRT-LLM doubles inference performance, and that is without changing the hardware stack at all. That’s part of the power of the Nvidia moat, a large install base, and multiple vectors of improvement, all part of the bundle you pay to Nvidia. For the end customer, you continue to get improvements even after you purchase your GPU. Few other silicon companies have that scale.
The math and the implied deflation in the cost of GPU compute are insane. H200 plus TensorRT-LLM increases performance by 4x in one year without major hardware changes. These are intra-generation improvements; we should expect the same kind of gains within the B100 generation next year.
We also announced the latest member of the Hopper family, the H200, which will be the first GPU to offer HBM3e, faster, larger memory to further accelerate generative AI and LLMs. It moves inference speed up to another 2x compared to H100 GPUs for running LLMs like Llama 2. Combined, TensorRT LLM and H200 increased performance or reduced cost by 4x in just 1 year without customers changing their stack. This is a benefit of CUDA and our architecture compatibility. Compared to the [ A100 ], H200 delivers an 18x performance increase for inference of models like GPT-3, allowing customers to move to larger models and with no increase in latency. Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud will be among the first CSPs to offer H200-based instances starting next year.
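The compounding in the quote above is worth writing out. The two 2x figures are the ones Nvidia cites (TensorRT-LLM software gains and H200's HBM3e memory); the cost-per-token framing is my own illustration of why this acts as supply deflation.

```python
# Illustrative: how intra-generation software + memory upgrades compound.
# 2x from TensorRT-LLM (software) times 2x from H200 (HBM3e) = 4x throughput,
# i.e. ~4x lower cost per token at the same hardware spend.

sw_speedup = 2.0   # TensorRT-LLM inference gain, per the call
hw_speedup = 2.0   # H200 vs. H100 on LLM inference, per the call

total_speedup = sw_speedup * hw_speedup
cost_per_token_vs_year_ago = 1 / total_speedup
print(total_speedup, cost_per_token_vs_year_ago)  # 4.0 0.25
```

In other words, each deployed GPU quietly becomes four H100-equivalents of a year ago; that is effective supply growth that never shows up in unit shipments.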
I brought this up in my recent telecom history piece: while demand was strong during that build-out, so were supply additions. These software improvements are a similar dynamic today.
This is an example of sneaky supply additions, and whenever supply eventually outpaces demand, the market will cross from shortage to glut. The LLM craze is going to be a long-winded investment cycle, but it would take only a small stumble in end demand, on top of these supply catch-ups, to tip the market into a meaningful glut. Investors should understand this dynamic.
That’s it for my free piece. I post my earnings overview for Nvidia or TSMC each quarter for free. If you enjoy this analysis, please consider subscribing. I have takes on Applied Materials behind the paywall, and soon a few ideas on the entire space.