Vibe Check: MSFT & META Earnings: LRCX, MXL, WOLF, ASML, CLS, STM, ALGM, INTC
Vibes are still good at the hyperscalers. Less so in automotive and industrial.
Let’s start with the vibe check. In my last piece on Jevons, I was pretty explicit: I’m primarily worried about how price affects sentiment. So let’s get a vibe check from two of the largest purchasers of GPUs in the world: Microsoft and Meta.
Microsoft
Many considered Satya the weakest link in the AI capex spending camp. He has been walking back Microsoft’s commitments to OpenAI and was notably absent from the huge Stargate announcement. But does that mean Satya is not spending? Nope.
Let’s review comments, capex, and Microsoft's position in the AI race. When asked about DeepSeek, they commented that it fits into the current cost reduction curve they see.
It's always about bending the curve and then putting more points up the curve. So there's Moore's Law that's working in hyperdrive. Then on top of that, there is the AI scaling laws, both the pretraining and the inference time compute that compound, and that's all software. You should think of what I said in my remarks, which we have observed for a while, which is 10x on improvements per cycle just because of all the software optimizations on inference. And so that's what you see.
That is not to say DeepSeek does not have real innovation.
And add to that, I think DeepSeek has had some real innovations. And that is some of the things that even OpenAI found in o1. And so we are going to -- obviously, now that all gets commoditized and it's going to get broadly used. And the big beneficiaries of any software cycle like that is the customers, right? Because at the end of the day, if you think about it, right, what was the big lesson learned from client server to cloud? More people bought servers, except it was called cloud.
Last but not least, it’s clear that Satya is a Jevons paradox fan.
And so when token prices fall, inference computing prices fall, that means people can consume more, and there will be more apps written. And it's interesting to see that when I referenced these models that are pretty powerful, it's unimaginable to think that here we are in sort of beginning of '25, where on the PC, you can run a model that required pretty massive cloud infrastructure.
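To make the Jevons argument concrete: with constant-elasticity demand, whether a price cut grows or shrinks total inference spend comes down to a single parameter. The sketch below is a toy model; the elasticity value is purely my assumption, not anything Satya quantified on the call.

```python
# Toy Jevons model: constant-elasticity demand Q = k * P^(-e).
# Total spend is P * Q = k * P^(1 - e), so spend grows as prices
# fall only if e > 1. The elasticity e is an assumption, not a datum.
def total_spend(price: float, k: float = 1.0, elasticity: float = 1.5) -> float:
    quantity = k * price ** (-elasticity)  # demand at this price
    return price * quantity

before = total_spend(price=1.0)
after = total_spend(price=0.1)  # a 10x price drop, per the "10x per cycle" comment
print(f"Spend multiplier after a 10x price cut: {after / before:.1f}x")
# With e = 1.5 this prints ~3.2x; with e < 1, spend would shrink instead.
# The entire bull/bear debate on inference pricing lives in that one knob.
```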
Now let’s turn to the actual results. AI revenue was extremely strong, but the implication is that the rest of the business was a bit weaker than expected. AI revenue is growing at ~175% Y/Y, which implies the non-AI business is growing at barely double digits (a quick sanity check on that math follows the quotes below). They still expect to be AI capacity-constrained in Q3.
Thank you, Brett. This quarter, we saw continued strength in Microsoft Cloud, which surpassed $40 billion in revenue for the first time, up 21% year-over-year. Enterprises are beginning to move from proof of concepts to enterprise-wide deployments to unlock the full ROI of AI. And our AI business has now surpassed an annual revenue run rate of $13 billion, up 175% year-over-year.
And while we expect to be AI capacity constrained in Q3, by the end of FY '25, we should be roughly in line with near-term demand given our significant capital investments.
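Here is that sanity check on the “barely double digits” point. The $13B run rate and 175% growth come from the quote above; the total-revenue figures (~$69.6B, up ~12% Y/Y) are the headline numbers from the same print, and spreading the AI run rate evenly across quarters is my simplification.

```python
# Back-of-envelope: strip the AI run rate out of total revenue to see
# what the rest of Microsoft is growing at. Assumptions: Q2 FY25 total
# revenue of ~$69.6B, up ~12% Y/Y (headline print), and AI revenue
# allocated as run-rate / 4 per quarter.
ai_q = 13.0 / 4            # ~$3.25B of AI revenue this quarter
ai_q_prior = ai_q / 2.75   # up 175% Y/Y => prior year = current / 2.75

total_q = 69.6             # $B total revenue (from the print)
total_q_prior = total_q / 1.12

ex_ai_growth = (total_q - ai_q) / (total_q_prior - ai_q_prior) - 1
print(f"Implied ex-AI growth: {ex_ai_growth:.1%}")
# Prints ~8.8%: the rest of the business is at barely double digits.
```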
Now, the other interesting line item was the conversation around datacenter capacity. They noted that they have more than doubled capacity in the last three years:
Azure is the infrastructure layer for AI. We continue to expand our data center capacity in line with both near-term and long-term demand signals. We have more than doubled our overall data center capacity in the last 3 years, and we have added more capacity last year than any other year in our history.
The shift from infrastructure to chips is much more interesting (and has implications for the datacenter power names). Notice that long-lived assets (data centers) will receive a smaller share of spend this coming year, and shorter-lived, revenue-generating assets will be the focus in the following fiscal year.
Next, capital expenditures. We expect quarterly spend in Q3 and Q4 to remain at similar levels as our Q2 spend. In FY '26, we expect to continue investing against strong demand signals, including customer contracted backlog we need to deliver against across the entirety of our Microsoft Cloud. However, the growth rate will be lower than FY '25 and the mix of spend will begin to shift back to short-lived assets, which are more correlated to revenue growth. As a reminder, our long-lived infrastructure investments are fungible, enabling us to remain agile as we meet customer demand globally across our Microsoft Cloud, including AI workloads.
This suggests that spending on Nvidia hardware will be strong in FY26 and up year over year, while datacenter buildout slows. That helps derisk Nvidia’s bigger longer-term question: growth in calendar year 2026.
All is not bad in the vibes of Satya. He’s still spending.
Meta
Meanwhile, Meta’s tone is much more bullish. Zuckerberg feels the AI, and he put out some very bullish anecdotes. But before we begin, the significant change was that Meta moved server useful lives from 4 years to 5.5 years. That makes the opex estimates not comparable to the past, but it puts them in line with the other hyperscalers.
Our expectation going forward is that we'll be able to use both our non-AI and AI servers for a longer period of time before replacing them, which we estimate will be approximately 5.5 years.
And that’s one of the big reasons for the actual near-term EPS beat.
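To illustrate the mechanics with made-up numbers (the fleet value below is hypothetical, not a Meta disclosure): straight-line depreciation over a longer useful life means less expense hitting the P&L each year.

```python
# Illustrative only: how extending useful lives cuts annual depreciation.
# The $40B fleet value is a made-up number, not a Meta disclosure.
fleet_cost = 40.0                  # $B of servers being depreciated
dep_old = fleet_cost / 4.0         # straight-line over 4.0 years
dep_new = fleet_cost / 5.5         # straight-line over 5.5 years

print(f"Annual depreciation at 4.0 years: ${dep_old:.1f}B")
print(f"Annual depreciation at 5.5 years: ${dep_new:.1f}B")
print(f"Pre-tax expense relief:           ${dep_old - dep_new:.1f}B per year")
# $10.0B vs ~$7.3B => ~$2.7B/yr less expense, which flows straight to EPS.
```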
Meta Platforms reports Q4 EPS $8.02 vs FactSet $6.76
Q4 revenue: $48.39B vs FactSet $46.99B
Q1 guidance: revenue $39.5-41.8B vs FactSet $41.68B
FY 2025 guidance: capital expenditures $60-65B vs FactSet $52.60B
We expect capital expenditures growth in 2025 will be driven by increased investment to support both our generative AI efforts and core business. The majority of our capital expenditures in 2025 will continue to be directed to our core business.
This chart is another way to visualize the year of efficiency: costs have been a one-way street lower. The accounting change flatters the latest data point, but it is still a beautiful story of cost improvement.
And where are some of these cost improvements coming from? It seems like this is what Zuckerberg is most bulled up on.
I also expect that 2025 will be the year when it becomes possible to build an AI engineering agent that has coding and problem-solving abilities of around a good mid-level engineer. And this is going to be a profound milestone and, potentially, one of the most important innovations in history like as well as over time. Potentially, a very large market. Whichever company builds this first, I think, is going to have a meaningful advantage in deploying it to advance their AI research and shape the field. So that's another reason why I think that this year is going to set the course for the future.
But unlike the past push into Reality Labs, this push into AI infrastructure seems much closer to capturing revenue. The majority of the spending is directed at the company’s core business.
And then in terms of the breakdown for core versus gen AI use cases, we're expecting total infrastructure spend within each of gen AI, non-AI and core AI to increase in '25 with the majority of our CapEx directed to our core business with some caveat that, that is -- that's not easy to measure perfectly, as the data centers we're building can support AI or non-AI workloads and the GPU-based servers we procure for gen AI can be re-purposed for core AI use cases and so on and so forth.
Mark also increasingly talked about ASICs on the call, and that kicked off one of the bigger divergences in the stocks. Meta spoke about its success with its custom MTIA silicon accelerator.
Finally, we're pursuing cost efficiencies by deploying our custom MTIA silicon in areas where we can achieve a lower cost of compute by optimizing the chip to our unique workloads. In 2024, we started deploying MTIA to our ranking and recommendation inference workloads for ads and organic content. We expect to further ramp adoption of MTIA for these use cases throughout 2025, before extending our custom silicon efforts to training workloads for ranking and recommendations next year.
MTIA is mainly being used for recommendation workloads.
Right now, the in-house MTIA program is focused on supporting our core ranking and recommendation inference workloads. We started adopting MTIA in the first half of 2024 for core ranking and recommendations inference.
We'll continue ramping adoption for those workloads over the course of 2025 as we use it for both incremental capacity and to replace some GPU-based servers when they reach the end of their useful lives. Next year, we're hoping to expand MTIA to support some of our core AI training workloads and over time, some of our gen AI use cases.
This is one of the more “all-in” comments on ASICs over GPUs. Stocks have been paying attention, and along with the DeepSeek commentary, Nvidia has become a huge laggard relative to the ASIC names. I think Meta’s comment is what really kicked this off.
But it’s not exactly like they aren’t going to spend. When asked about capex, Mark specifically framed the heavy spending as strategic.
And I continue to think that investing very heavily in CapEx and infra is going to be a strategic advantage over time. It's possible that we'll learn otherwise at some point, but I just think it's way too early to call that. And at this point, I would bet that the ability to build out that kind of infrastructure is going to be a major advantage for both the quality of the service and being able to serve the scale that we want to.
Capex is going to grow ~60% Y/Y and across almost all categories (a quick check on that math follows the quote below). Two details stand out: first, general non-AI servers will get a refresh, and second, they will be spending on fiber.
I'm happy to add a little more color about our 2025 CapEx plans, to your second question. So we certainly expect that 2025 CapEx is going to grow across all 3 of those components you described. Servers will be the biggest growth driver. That remains the largest portion of our overall CapEx budget. We expect both growth in AI capacity as we support our gen AI efforts and continue to invest meaningfully in core AI. But we are also expecting growth in non-AI capacity as we invest in the core business, including to support a higher base of engagement and to refresh our existing servers.
On the networking side, we expect networking spend to grow in '25 as we build higher-capacity networks to accommodate the growth in non-AI and core AI-related traffic, along with our large gen AI training clusters. We're also investing in fiber to handle future cross-region training traffic.
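For reference, the ~60% figure checks out against Meta’s reported 2024 capex. The ~$39.2B base (including finance leases) is my input from the company’s release, not something stated in the quote above.

```python
# Quick check on the ~60% Y/Y capex growth claim. Assumes a 2024 base
# of ~$39.2B (incl. finance leases); that base is my input, not from the call.
capex_2024 = 39.2
guide_low, guide_high = 60.0, 65.0
midpoint = (guide_low + guide_high) / 2   # $62.5B
print(f"Implied Y/Y growth at the midpoint: {midpoint / capex_2024 - 1:.0%}")
# (62.5 / 39.2) - 1 ≈ 59%, i.e. the ~60% growth referenced above.
```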
Mark thinks that distribution will matter in this generation, and I don’t think he will let this opportunity pass him by. At SIGGRAPH 2024, he said that Apple gatekeeping his platform is one of the reasons he will not let that happen in the next generation, and I can’t help but think about LLAMA.
Zuckerberg will try to be the Apple of the AI generation, and with over 700 million monthly actives on Meta AI and 3.35 billion daily actives across the family of apps, I think he has a shot. He understands that distribution wins if the product is commoditized, and he is doing his best to commoditize model weights with LLAMA. Then all the value accrues to the OS on top, and Mark hopes that will be Meta’s family of apps.
Anyway, the rest of my usual earnings coverage continues behind the paywall.