In an exclusive interview at Cisco Live Amsterdam, Martin Lund unveils the Silicon One G300. According to him, it is a networking chip so advanced that only a few other companies in the world can match its capabilities.
What does it take to build a chip that delivers 10,000 times more networking capacity than devices from just 25 years ago? And why does this chip require liquid cooling to manage the extreme heat it generates? Cisco’s Executive Vice President of the Common Hardware Group discusses some of the technical secrets behind the 3-nanometer G300 chip that’s powering the next generation of AI data centers.
Cisco wants to make clear that this isn’t just another networking chip announcement. According to the company, the G300 represents a fundamental shift in how AI infrastructure will be built. It is based on a programmable architecture that allows the chip’s behavior to be modified after deployment — an important property, since AI workloads continue to evolve unpredictably. Lund explains why this matters for organizations deploying everything from 1,000 GPUs to massive AI factories with 100,000+ processors.
What you’ll discover in this interview
The conversation goes deep into the technical challenges of pushing semiconductor manufacturing to its physical limits. Lund reveals why manufacturers literally cannot build chips larger than the G300 with current technology, and what that means for the future of networking silicon. He also explains the architectural differences that set Cisco apart from competitors, including why programmability matters more than raw specifications.
But the discussion extends far beyond a single chip announcement. We also discuss the definitive resolution of the Ethernet versus InfiniBand debate that has divided the AI infrastructure community. Lund explains why InfiniBand’s 65,000-node addressing limit creates an insurmountable problem for hyperscale deployments, and how Nvidia’s own shift toward Ethernet signaled the end of the standards war.
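The ~65,000-node figure Lund cites comes from InfiniBand's addressing scheme: each endpoint in a subnet is assigned a 16-bit Local Identifier (LID), so the total address space is capped at 2^16 values. A minimal back-of-the-envelope sketch (the exact reserved ranges are per the InfiniBand specification; the split below is illustrative):

```python
# InfiniBand assigns every endpoint in a subnet a 16-bit Local
# Identifier (LID), so the subnet can never address more nodes
# than the LID space allows.
LID_BITS = 16
total_lids = 2 ** LID_BITS  # 65,536 addressable values in total

# Part of the space is reserved (e.g. for multicast), so the number
# of usable unicast addresses is smaller still -- hence the
# "~65,000-node" ceiling for a single subnet.
print(f"Total LID values: {total_lids}")
```

A 100,000-GPU cluster therefore cannot fit in one flat InfiniBand subnet, which is the scaling problem Lund describes for hyperscale deployments.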
The future of optical networking
At the end of the conversation, we also dive into silicon photonics to get Lund’s insights into the future of optical networking. He states that co-packaged optics could reduce power consumption by up to 70 percent — an improvement large enough to transform data center economics. But what are the reliability challenges holding this technology back? And why might we see practical quantum computing before we see fully optical packet switching?
The interview also explores why copper transmission distances shrink as speeds increase, the 25-year journey of silicon photonics from promising concept to mass production, and whether laser reliability issues can be solved with hot-swappable optical components. Some of those topics may sound like abstract future possibilities, but they are already engineering realities that will shape infrastructure decisions for years to come.
Key insights covered
- How Cisco joined the elite tier in advanced networking silicon
- Why programmable architecture extends device lifetime and optimizes AI network performance
- The thermal challenges requiring liquid cooling for cutting-edge networking chips
- Why Ethernet definitively won over InfiniBand for large-scale AI infrastructure
- How five of six major hyperscalers have already adopted Silicon One technology
- The power efficiency potential of co-packaged optics and future photonic integration
- Why copper transmission distances collapse as networking speeds increase
- The organizational transformation that unified Cisco’s silicon development
Whether you’re planning AI infrastructure deployments, evaluating networking architectures, or simply trying to understand where data center technology is headed, this interview provides rare technical depth from one of the few executives with visibility across the entire hardware stack. Watch to understand the engineering realities behind the AI infrastructure revolution.