This transcript is lightly edited for readability. This was not a paid engagement, just an educational conversation. For deeper analysis, refer to the SemiAnalysis Networking model, "The New Networks," or Multi-Datacenter Training.
Doug: Thank you, Will, for being here. We’re going to have a conversation with Will Eatherton of Cisco. I’ll dive into your background, talk about Cisco the historical networking giant, and then Cisco’s AI opportunity today. First, Will — tell me about yourself.
Will Eatherton: Great, Doug. Thanks for having me on. I’ve enjoyed browsing the site and its articles, so I’m happy to be here. I run Cisco networking engineering, focused on hyperscale engagements, datacenters, and service providers, which overlap a lot. I’ve been in the networking industry for about 25 years and have been acquired into Cisco twice.
The first was around 2000, when I was at a startup working on multi-terabit ASICs — the first big wave when everyone thought they’d need multi-chassis switches to move terabits around. That work went into Cisco’s carrier switch. I was at Cisco for ten years and later was a VP at Juniper. The second startup was acquired by Cisco in 2018. The last seven years around AI have been an exciting ride. Most recently, I’m proud of the NVIDIA engagement; I’ve been the point person on the negotiations and integration, which hasn’t always been easy between Cisco and NVIDIA.
Doug: Yeah, especially considering they have Mellanox. Why work together? Cisco was the 800-pound gorilla in the 2000s, and over the last decade-plus merchant silicon and other providers entered the space while Cisco leaned on legacy assets. Now with hyperscalers and AI, Cisco’s opportunity is open again. How is Cisco entering the AI datacenter space?
Will Eatherton: To touch on history, there were glory days in the 2000s and 2010s; the telco and enterprise build-outs were largely Cisco. I left in 2010. The early cloud era was rough for Cisco; we acknowledged some arrogance, and we missed the early cloud phase. I returned in 2018, and over the last several years we've gone all-in on hyperscale engagements. That dovetails with work on SONiC, optics, and the Acacia acquisition, a corrective pivot for a large company. We may not be as visible as we want yet, but we're positioned to have impact, and we're making moves, like the NVIDIA partnership, to address a broader customer base.
Doug: You have real revenue — a billion in AI revenue and guidance that order volume could double next year. Walk me through the prongs: hyperscaler work, NVIDIA, Cisco One. What are the opportunities today?
Will Eatherton: We set a $1B target for FY25; Chuck announced we exceeded it, and we’re guiding higher. FY25 (ended in August) was driven largely by hyperscale AI infrastructure. We explicitly separated traditional data-center revenue from that number.
The key components were GPU-to-GPU interconnects where our boxes were deployed. Fixed boxes are the primary focus across the industry, but hyperscalers also want variation — modular chassis for spines that increase port counts and can reduce network levels.
High radix can be achieved in two ways: at the chip level (higher-radix ASICs) and at the system level with modular boxes. We offered a high-radix product early on; today vendors (NVIDIA, Broadcom, Cisco) are converging around similar radices, roughly 512 up toward 1,000. We and our competitors also support modular systems whose network OS lets you treat all line cards as a single system rather than many fixed boxes. The critical part is the software that presents that single-system abstraction.
Building these modular systems is getting harder as link speeds increase and internal interconnect complexity grows, similar to server scale-up challenges. But comparable port counts across vendors allow customers to simplify their networks by gaining large radix in modular boxes.
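To put rough numbers on why radix matters (a back-of-the-envelope sketch with generic figures, not any vendor's specific design): in a non-blocking folded-Clos fabric built from radix-R switches, two tiers reach roughly R^2/2 endpoints and three tiers roughly R^3/4, so a larger radix, whether from the ASIC itself or from a modular chassis acting as one big switch, can remove an entire network level.

```python
# Back-of-the-envelope endpoint counts for a non-blocking folded-Clos fabric
# built from switches of a given radix (port count). Generic illustration only.

def max_endpoints(radix: int, tiers: int) -> int:
    """Approximate endpoints for a non-blocking Clos with the given tier count."""
    if tiers == 2:
        return radix ** 2 // 2      # leaf-spine: half of each leaf's ports face down
    if tiers == 3:
        return radix ** 3 // 4      # classic three-tier fat tree
    raise ValueError("sketch covers 2 or 3 tiers only")

for radix in (64, 256, 512, 1024):
    print(f"radix {radix:>4}: ~{max_endpoints(radix, 2):>9,} endpoints at 2 tiers, "
          f"~{max_endpoints(radix, 3):>13,} at 3 tiers")
```

At a radix around 512, two tiers already cover on the order of 100,000 endpoints, which is the scale at which modular spines can take a whole level out of the network.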
Doug: Cisco is historically known for its network OS. Hyperscalers moved off vendor OSes in the late 2010s (whitebox/bluebox plus SONiC). Even as new silicon like Tomahawk 6 and its drivers arrive, vendor OS layers persist. How do you compete long-term when hyperscalers think they can build their own software?
Will Eatherton: The network OS builds on three blocks: the silicon in the box, the optics, and the system design. On the OS side:
We have multiple mature offerings. NX-OS (Nexus) is our traditional data-center OS with rich L2 features and deployment at scale. IOS-XR is our carrier/telco OS; when you move beyond leaf/spine into DCI and WAN that connect users to GPUs across sites, you need richer routing and features — that’s where IOS-XR is used.
Wide-area DCI architectures, enabling workload migration and GPU connectivity across datacenters, are increasingly important. Cisco participated in an OCP presentation on multi-datacenter designs, internet peering, and routing. Technologies like segment routing and IPv6 help manage WAN scale and complexity.
On the front end, some customers with very large GPU fleets (100k+ GPUs) want NX-OS-style features while keeping the same silicon (e.g., 150T silicon) they use for backend roles, which creates heterogeneous, proprietary mixes.
For whitebox/bluebox approaches — SONiC, FBOSS, etc. — we’ve taken them seriously. We invested in interoperability and support rather than ignoring the trend. After Microsoft and NVIDIA, Cisco is one of the largest contributors to SONiC — we contribute a significant portion of the code. We’re working on adding modular support to SONiC so those capabilities can be generally available.
Doug: That’s basically the opposite of what Cisco historically stood for — cutting off your arms, right?
Will Eatherton: Historically, yes. But infrastructures are upgrading every 12–18 months and hyperscalers move fast. We support SONiC and have gone all-in. For example, with our Silicon One family, including the P200 we recently announced, we use an internal "NAS kit" approach: on day one we bring silicon up across IOS-XR (telco/WAN), NX-OS (datacenter), FBOSS (partner environments), and SONiC (customers like Microsoft and Google). Engineering supports all those stacks.
That approach helps us win hyperscale business and lets us add differentiating technology into SONiC. With 25 years of modular experience, we’re applying that know-how to SONiC. The value proposition has shifted — customers expect silicon, optics, software integration, and manageability. The CLI is no longer the primary control point; customers want richer software integration and manageability options.
Doug: For listeners who don’t know: what’s the difference between modular and fixed?
Will Eatherton: A fixed box is a single switch ASIC where all interfaces go to external ports. A modular system uses fabric ASICs and separates line cards from fabric — that’s been around for decades. You have line cards with local CPUs and software for management, plus route processors that coordinate across line cards, forming a hierarchical software system.
Some vendors implement modular designs by running independent network OS instances per chip, which forces you to manage many independent switches — that’s ugly. The modular approach lets you present the entire box as one large switch from software, routing, manageability, and telemetry perspectives. That’s far more scalable for large customers.
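As a purely conceptual sketch of that single-system abstraction (the names below are illustrative, not Cisco or SONiC APIs): the modular NOS enumerates every port on every line card into one flat interface list, so the operator configures and monitors one logical switch instead of one switch per chip.

```python
# Conceptual sketch of the "one big switch" abstraction a modular NOS provides.
# Class and interface names are illustrative, not Cisco or SONiC APIs.

from dataclasses import dataclass, field

@dataclass
class LineCard:
    slot: int
    ports: int                      # external-facing ports on this card

@dataclass
class ModularChassis:
    line_cards: list[LineCard] = field(default_factory=list)

    def interfaces(self) -> list[str]:
        """Present every port on every line card as one flat interface list,
        the way a single fixed switch would."""
        return [f"Ethernet{card.slot}/{port}"
                for card in self.line_cards
                for port in range(1, card.ports + 1)]

chassis = ModularChassis([LineCard(slot=s, ports=36) for s in range(1, 9)])
print(len(chassis.interfaces()))    # 288 ports, managed as a single system
```

The alternative described above, one independent NOS instance per chip, would instead hand the operator eight separately managed switches plus all the fabric links between them.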
Doug: Perfect. Let's circle back to DCI/WAN and the P200. Telecom still dominates DCI; how does Cisco plan to play there versus Acacia/Ciena in long-haul optics?
Will Eatherton: Let's start with optics. We announced the Acacia acquisition in 2019. That foresight is similar in spirit (if not magnitude) to NVIDIA's Mellanox purchase. The deal took time to close and involved financial renegotiation; inside Cisco it quickly became clear the IP was worth it.
Since then, pluggables and routed optical networking have grown. Our routed-optical approach minimizes transponders and OTN, and is Acacia-centric. We’ve integrated that into IOS-XR for better OS support and manageability. Year-over-year growth in optics has been strong.
A non-trivial fraction of AI budgets goes into optics. Hyperscalers ask about supply capacity and quarterly maximums. We’ve focused on 400G and 800G pluggables and are looking ahead to CPO. Optics is a base layer of spend.
On DCI beyond GPU-to-GPU interconnect inside the datacenter, we’ve worked for years with large networks (Google, etc.) using IOS-XR. Cisco’s strategy is to combine optics, silicon (like the P200), and OS capabilities to address DCI/WAN via end-to-end integration and manageability rather than only discrete boxes.
Doug: You mentioned XGS — is that compensating for not having deep buffers? Are there partnership opportunities?
Will Eatherton: XGS can use small buffer chips and NIC-to-NIC algorithms to help compensate for shallower buffers. We're exploring partnership opportunities to incorporate XGS algorithms. But beyond XGS, the problem has multiple parts: core bandwidth interconnect, flexible load balancing and failure management, and larger routing and control tables. Those larger tables and control-plane features only grow in importance as systems scale.
We’ve been pushing segment routing and IPv6 — technologies Cisco pioneered. For stacks like SONiC, which have a minimal routing protocol set, segment routing offers traffic engineering without requiring a full set of complex routing protocols. The P200 is designed for deep buffers and the ability to use them, plus more features and bigger tables to support WAN/DCI use cases — that will be a key component.
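As a rough illustration of the segment-routing point (a conceptual sketch only; the addresses and helper below are invented for the example): with SRv6, the ingress encodes a traffic-engineered path as an ordered list of IPv6 segment IDs, and transit routers need nothing beyond ordinary IPv6 forwarding toward the active segment, which is why it layers well on a lean protocol stack like SONiC's.

```python
# Conceptual SRv6 sketch: the ingress chooses the path as an ordered list of
# segment IDs (plain IPv6 addresses). All addresses are made-up examples.

from ipaddress import IPv6Address

segment_list = [
    IPv6Address("fc00:0:a::1"),     # waypoint: preferred spine / core router
    IPv6Address("fc00:0:b::1"),     # waypoint: DCI edge toward the remote site
    IPv6Address("fc00:0:c::42"),    # final segment: destination's prefix SID
]

def active_next_hop(segments_left: int) -> IPv6Address:
    """Transit nodes simply forward toward the currently active SID; no per-flow
    state or additional routing protocol is required along the path."""
    return segment_list[len(segment_list) - segments_left]

print([str(sid) for sid in segment_list])
print("first hop steers toward", active_next_hop(len(segment_list)))
```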
Doug: Thank you.
Will Eatherton: WAN is an engagement point. It’s a lot of work — often five times the effort for a smaller revenue share — but it’s sticky. Deploying WAN/DCI builds trust with large customers. Over the last five to seven years we’ve focused on moving fast, supporting custom work, and shedding the “arrogant Cisco” image.
Diversity of options is another point. Silicon One gives a common architecture we can leverage: G-series for spines, other variants for storage and L2 roles, and the P200. That family of chips offers customers choice — important when Broadcom has been the hyperscaler play for years.
We signed an extended multi-year agreement with NVIDIA so our silicon can interoperate in the NVIDIA ecosystem, which means customers don't need completely different architectures for different vendors. Spectrum-X licensing lets our silicon interoperate with NVIDIA NICs and support adaptive routing with fabric telemetry, whether the fabric uses Broadcom, Cisco, or NVIDIA silicon, enabling end-to-end architectures and supply-chain flexibility.
Doug: Why did NVIDIA partner with Cisco?
Will Eatherton: Initially, enterprise was the focus: bringing joint architectures into enterprise deployments. Over time it extended to NeoClouds. NVIDIA's Cloud Partner Program is prescriptive about configuration, and some NeoClouds want our network OS on top. We've had customers ask whether Cumulus could run on Cisco silicon for optionality; I'm exploring that. It gets messy because of cross-product integrations, but the underlying theme is choice and a common architecture across vendors.
Doug: Ethernet protocol wars, UEC and UALink for scale-up. Where does Cisco stand?
Will Eatherton: Much of NVIDIA's stack is proprietary and runs parallel to standards. We fully support UEC and were early to do so, but I have reservations. There have been false starts and shifting definitions, for example how big in-network compute should be, and whether features belong in scale-up versus scale-out. We back standards work and participate in OCP, UALink, and UEC, but adoption and real impact will likely take longer than many expect.
History shows these transitions take years. There’s time for standard maturation; we’ll participate, but broad impact is a multiyear story.
Doug: Anything people should pay more attention to in networking?
Will Eatherton: Manageability. Hyperscalers do their own thing; sovereign clouds follow prescriptive architectures and fear deviation; high-end enterprises want manageable systems without endless bespoke engineering. Manageability in AI infrastructure is underappreciated but will become crucial.
We've invested heavily in controllers and management consoles. Nexus Dashboard and a new product launching soon target cluster-level management: GPU-to-GPU meshes, latency visibility, debugging, and wiring. A 100-GPU cluster can have a thousand fiber connections. We're building HyperFabric AI, a cluster-management view spanning front-end, back-end, management, and storage networks, with a single console that pulls together networking and cluster telemetry. That's a route into sovereign clouds and high-end enterprise.
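To see where "a thousand fiber connections for 100 GPUs" can come from (illustrative assumptions only, not a specific Cisco reference design): count one back-end link per GPU plus matching spine uplinks, add dual-homed front-end and storage NICs and out-of-band management per node, and multiply by the strands each optical link uses.

```python
# Rough fiber-count estimate for a ~100-GPU cluster. All assumptions are
# illustrative (not a specific Cisco or customer design).

gpus, gpus_per_node = 100, 8
nodes = -(-gpus // gpus_per_node)              # ceiling division -> 13 nodes

backend_links = gpus * 2                       # one back-end NIC per GPU plus a
                                               # matching leaf-to-spine uplink
other_links   = nodes * 5                      # dual-homed front-end and storage
                                               # NICs plus out-of-band management

duplex_fibers   = (backend_links + other_links) * 2    # 2 strands per duplex link
parallel_fibers = backend_links * 8 + other_links * 2  # e.g. 8-strand parallel optics
                                                       # on the back end

print(f"{nodes} nodes, {backend_links + other_links} links")
print(f"~{duplex_fibers} strands with duplex optics, "
      f"~{parallel_fibers} with parallel back-end optics")
```

Even with conservative assumptions the strand count lands in the high hundreds, and with parallel-fiber back-end optics it clears a thousand comfortably.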
Doug: So the entry point is manageability — not just being the No. 3 supplier, but using enterprise relationships to win medium-sized GPU clusters. That’s the strategy and the hope?
Will Eatherton: Yes. We add value there. Ideally GPU capacity would be more distributed, not concentrated. I just finished a customer advisory board with about 100 top enterprise customers; they don’t want to manage high-performance compute themselves. They want simplified management. We’re focused on making the experience better and simpler for customers running clusters from ~100 GPUs up to thousands. We see hundreds of millions of spend across compute and networking in the high-end enterprise market, and we believe we can lead there.
Doug: Anything else? Thanks for joining — I learned a lot. I pushed on Cisco, but I want to see more competition.
Will Eatherton: That’s our goal. We want Cisco to be strong again. For people who worked with Cisco 15–20 years ago, it’s a different company now; we’re moving faster.
Will Eatherton (cont’d): This space keeps changing every few months. Scale-out networking and scale-up approaches will evolve. We’re working with partners on transitions — fighting today’s battles while planning the next phase, which will look different from the last 20–30 years. It’s an exciting time.
Doug: Networking may feel like it’s becoming more integrated — more like PCB traces or an integrated rack — which will make serviceability harder. What about CPO? Peers are pushing first-gen CPO chips; that will change serviceability and be a big shift. How is Cisco positioning for CPO?
Will Eatherton: We believe CPO (co-packaged optics) will drive a major shift. We have announcements coming, and we see this as a move from a chip play to a systems play — that’s Cisco’s wheelhouse. Serviceability, software integration, and failure handling are things we can integrate across our network OSes and system architecture. We’ll continue to evaluate the NVIDIA partnership and decide how to engage; we’ll have multiple approaches. For example, a Spectrum silicon switch might start with an NVIDIA CPU on the switch and later move to a Cisco CPU — and we’ll continue to offer our own silicon. The goal is to provide options and support customers where they want us.
Doug: Silicon photonics has always been “five years away.” Is it nearer now?
Will Eatherton: I wouldn’t be surprised by six- to twelve-month hiccups, but I think we’re under five years now. It’s finally getting close.
Doug: Agreed. Thanks so much for your time, Will. Any final parting words?
Will Eatherton: No, that’s good. Thanks for having me.
Doug: Great. Thanks, Will. Take care.
Will Eatherton: Bye.








