NVLink Fusion: Embrace, Extend, Extinguish
When you're the only fabric that's working, might as well extend the moat.
I wrote about the three-headed hydra of Nvidia a year or two ago. Hardware, CUDA, and networking. At the time, the entire investment community’s focus was on the unbeatable CUDA moat, and I explained that Networking, too, was underrated.
Two years on, it's clear that Networking is a first-class citizen in the Nvidia ecosystem. The standout generational innovation is the NVLink backplane, which lets chips communicate with each other coherently and is the single most significant advantage Nvidia has in scaling GPUs to the size of entire data centers.
UALink, an open spec backed by AMD, Intel, Broadcom, Google, and others, wants to neutralize this edge. Yet consortia move at the speed of committee politics. Draft 1.0 targets came out just this year. Nvidia is already shipping the next generation.
Nvidia shook up the competition by announcing that it will license C2C and sell chiplets for NVLink. Both have staggering ramifications.
C2C Licensing and NVLink Chiplets
Nvidia’s keynote dropped twin bombs:
C2C (Chip-to-Chip) licensing – opening the blueprints for its short-reach die-to-die PHY and protocol.
NVLink chiplets – selling pre-verified I/O dies that third parties can tile onto their own silicon.
First, let's discuss C2C licensing, which primarily targets CPU and multi-die accelerator projects. I can imagine this going multiple ways, but the primary product mentioned in the NVLink announcement was CPUs, specifically those from Fujitsu.
Why does that matter? One of the biggest ARM-based clusters in the world is a Fujitsu cluster. A significant opportunity is bringing more GPUs into HPC-style workloads, but at a 1:1 CPU-to-GPU ratio closer to the Grace Hopper configuration than to the Grace Blackwell configuration. C2C lets Fujitsu pair its custom ARM cores tightly with a GPU, a legitimate product catered to HPC.
Next, C2C is one of the more challenging technologies to implement for chiplet-based accelerators on a board, and licensing it accelerates GPU-CPU hybrids. That broadens GPU adoption and speeds the entry of GPUs into areas that are more CPU-intensive, such as HPC. And the most likely GPU in any such pairing is an Nvidia GPU.
But that isn't all. Below is an image of how the NVLink Fusion topologies could work. One of the most interesting aspects is that NVLink will ultimately become a Trojan horse for all accelerators everywhere. The configuration to watch for is a world with a custom CPU, a custom GPU, and the NVLink Fusion chiplet.

Because, while it seems like Nvidia will "lose" in this configuration, where it is just selling an I/O chiplet, it is actually killing the competition.
Nvidia’s Perfect Solution with Lock-In
In this topology, let’s discuss what is and isn’t technically tough to accomplish.
CPU - Relatively easy; a custom CPU can be obtained from ARM CSS.
Accelerator - Challenging to implement, but the front-end design is relatively easy; it can take around 100 people and a few EDA tools.
Networking I/O - Highly challenging to implement, and often the make-or-break factor for the majority of custom silicon projects.
Scale-up domain - Extremely challenging to achieve, and today there is no working example other than NVLink.
So let’s use this logic to consider why Jensen decided to license one technology and then sell chiplets for another.
C2C, in my estimation, is challenging but not impossible, so Jensen licensing it makes sense. His philosophy historically has been to open-source, or let the ecosystem adopt, any technology where there is little differentiation. If he believes the average CPU will adopt a GPU as a co-packaged technology, he might as well license C2C to potential buyers.
But the chiplet portion for NVLink is much more interesting. It's telling that Jensen wants to sell chiplets rather than license the technology: this is a technology with real differentiation, and it makes sense that Nvidia wants to keep it in-house.
However, the crucial strategic move here is the concept of 'embrace, extend, extinguish,' and in my view, that’s what Nvidia is doing with UALink.
Embrace, Extend, Extinguish
Today, there is only one type of fabric that works for scale-up domains for accelerators, and that's NVLink. NVLink is critical, and Jensen would never give it away for free. But NVLink also has a competitor on the horizon: UALink, which is supposed to be competitive with NVLink. Still, it suffers from a tragedy of the commons, the kind that occurs when an open specification has many powerful parties in conflict.
Each of the UALink players is trying to insert its own interests into the specification, and the squabbling means it will likely take time. UALink is slated for a 128G launch, with hardware shipping next year; that's too late in the accelerator race. Moreover, most of the players in the consortium (except Broadcom) don't sell the IP directly, but rather bundle it with custom ASIC projects.
NVLink is still the only working product, and it's faster than its competitors. So what happens if Nvidia throws its NVLink chiplets into the fray, lets competitors license them if they want, and lets them hope to figure out a non-Nvidia solution in some future generation? This is a Trojan horse. Once Nvidia's networking is inside competitors' designs, Nvidia will embrace those customers, extend its roadmap to be much faster than UALink ever could be, and then slowly extinguish its competitors.
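To put rough numbers on that speed gap, here's a back-of-envelope sketch. NVLink 5 signals at 200 Gbps per lane, and Blackwell exposes 18 links at 100 GB/s bidirectional each, for 1.8 TB/s per GPU; the 128 Gbps figure is the lane rate cited for UALink's first generation above. The two-lanes-per-direction UALink link used here is an assumption for an apples-to-apples comparison, not a spec detail.

```python
def link_bw_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Raw unidirectional link bandwidth, ignoring encoding overhead."""
    return lanes * lane_rate_gbps

# NVLink 5: 2 lanes/direction at 200 Gbps -> 400 Gb/s per direction
# (50 GB/s each way, 100 GB/s bidirectional per link).
nvlink5 = link_bw_gbps(lanes=2, lane_rate_gbps=200)

# Hypothetical UALink link at the 128 Gbps lane rate, same lane count
# (lane count per link is an assumption for illustration).
ualink = link_bw_gbps(lanes=2, lane_rate_gbps=128)

print(nvlink5 / ualink)  # ~1.56x per-lane signaling advantage

# Per-GPU scale-up bandwidth: Blackwell exposes 18 NVLink 5 links,
# each 100 GB/s bidirectional -> 1.8 TB/s per GPU.
per_gpu_tb_s = 18 * 100 / 1000
```

Even ignoring lane counts and topology, the per-lane signaling rate alone gives Nvidia roughly a generation of headroom before UALink hardware ships.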
In the war between open specifications (via the OCP and consortia) and closed, proprietary ones managed by a single company, the former has been slower than the latter, and the world is hungry for any alternative. Nvidia will just so happen to license the golden goose, and while you dip a toe into its roadmap and look around, you'll see that Nvidia's products and plans are almost always better than the competition's. Nvidia will also have a better view into its competitors than ever before once they start using its networking IP.
All roads lead to Nvidia. And just by offering a golden screw amid the accelerator shortage, the companies that rely on it will realize their keystone IP is being made by the single most technologically dominant company of our time. That's a mistake. But it's hard, because it's Nvidia's solution or none at all, so what choice do they have?
Anyways. Excited for Nvidia earnings tomorrow - hope everyone has a great day, and I’ll talk to you soon.