
The traditional answer to heat in data centers has been to cool equipment with fans and forced air. But air cooling a modern AI rack is like blowing on a hot pan: slow and inefficient. The industry was ripe for change.

I’ve always believed that the most significant breakthroughs happen when you combine the agility of a startup with the scale and reliability of a global leader like Cisco. We gathered a small team to move ahead of the curve, built a prototype, and put it in front of customers around the world. With direct feedback from the industry and our customers over the last three years, we are now working to bring a 100% direct-liquid-cooled network switch to market.

The heat behind the hype

With the explosion of generative AI, the International Energy Agency projects that electricity demand from data centers worldwide will more than double between 2024 and 2030, to around 945 terawatt-hours (TWh), slightly more than the entire electricity consumption of Japan today. A traditional enterprise rack draws 5 to 15 kW. An AI GPU rack can draw 60 to 130 kW, and racks are projected to draw up to 1 MW each by 2030, a power level once allocated to entire facilities.
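To put those rack figures in perspective, here is a quick back-of-the-envelope sketch in Python. The power ranges are the ones cited above; the mid-range values (95 kW and 10 kW) are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope comparison of rack power draw, using the
# ranges cited above; mid-range values are illustrative assumptions.
ENTERPRISE_RACK_KW = (5, 15)    # traditional enterprise rack
AI_RACK_KW = (60, 130)          # current AI GPU rack
PROJECTED_2030_KW = 1000        # projected per-rack draw by 2030 (1 MW)

def ratio(ai_kw: float, enterprise_kw: float) -> float:
    """How many enterprise racks' worth of power one AI rack draws."""
    return ai_kw / enterprise_kw

print(ratio(95, 10))                  # prints 9.5 (mid-range vs. mid-range)
print(ratio(PROJECTED_2030_KW, 10))   # prints 100.0
```

Even at the low end of today’s AI range, a single AI rack draws as much power as several traditional racks combined.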

At this scale, AI clusters can only be cooled with liquid. Much as a car circulates coolant to carry heat away from the engine, liquid cooling circulates a water-glycol mix through pipes to pull heat directly off high-density, high-performance networking components. The result is more performance per rack, with less power spent cooling it.
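The physics behind that efficiency can be sketched with a simple heat balance, m_dot = P / (c * dT). The constants below are textbook water properties, and the 10 K temperature rise is an assumed operating condition, not a Cisco specification:

```python
# Rough coolant-flow estimate for direct liquid cooling: the mass flow
# needed to absorb a heat load P with a temperature rise dT is
# m_dot = P / (c * dT). Constants are approximate water properties;
# a water-glycol mix has somewhat lower specific heat.
SPECIFIC_HEAT = 4186.0   # J/(kg*K), water
DENSITY = 1000.0         # kg/m^3, water

def flow_lpm(heat_load_w: float, delta_t_k: float) -> float:
    """Coolant flow in liters per minute for a given heat load (W)
    and coolant temperature rise (K)."""
    mass_flow_kg_s = heat_load_w / (SPECIFIC_HEAT * delta_t_k)
    return mass_flow_kg_s / DENSITY * 1000.0 * 60.0  # kg/s -> L/min

# Removing a 100 kW rack's heat with a 10 K coolant temperature rise:
print(round(flow_lpm(100_000, 10), 1))  # prints 143.3
```

For comparison, carrying the same 100 kW away in air at the same temperature rise would take several hundred cubic meters of air per minute, since air holds about a quarter of the heat per kilogram and is roughly a thousand times less dense than water.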

Built like a startup

In 2022, a group of Cisco engineers—Senior Director of Data Center Architecture Christopher Liljenstolpe, Director of Hardware Engineering Vic Chia, and Senior Engineering Product Manager Asha Hegde—set out to build a prototype of a direct-to-chip liquid-cooled version of the Cisco 51.2-terabit switch.

For a team operating with a startup mentality inside a company the size of Cisco, we faced the classic chicken-and-egg dilemma: the business wanted to see the demand before investing; customers needed a viable product before formally expressing demand.

Our strategy was to gather proof from the industry and from customers. We collaborated with partners like the Open Compute Project (OCP) and the Linux Foundation to help define the path forward for liquid-cooled infrastructure, and we debuted a prototype at the Optical Fiber Conference in March 2023. At the industry's largest optical communications event, with more than 15,000 attendees, no one had shown anything like it. Customers immediately began asking, “When will this be available?” That confirmed we were heading in the right direction.

The team showcased the prototype at other industry conferences over the following months, building momentum with each showing. The prototype unlocked real customer conversations with AI hyperscalers, neoclouds, and service providers. “GPU servers had already moved to liquid cooling, but the network switch has been sitting in the same hot, dense rack and still relying on air,” Christopher shared. “As the leader in networking, we were able to help customers think about cooling their entire infrastructure and have conversations that weren’t happening anywhere else.”

Director of Hardware Engineering Vic Chia and a cross-functional team showcased the direct-to-chip liquid-cooled version of the Cisco 51.2T switch at industry conferences

Our hardest engineering challenge was cooling the front-end optics. The prototype’s 800G OSFP transceivers generate enormous heat in a small space, and the optics are designed to be swapped in and out. We needed to maintain a tight thermal connection between the optic and the cold plate. We pioneered a 2×8 optics cooling design that solves this challenge, and it has helped shape how the broader industry approaches optics cooling today.

From prototype to product

Our prototype and stack of evidence made it easy for decision makers at Cisco to commit to productizing an even faster switch. “You need proof that a new bet is worth it,” said Asha. “The customer response was so strong that it was an easy decision for leadership to green-light production.”

While we were the first to show what was possible, we knew other companies weren’t far behind. Delaying the product would have cost Cisco its first-mover advantage.

Direct-liquid-cooled network switch prototype by Cisco

In February 2026, we announced the next generation of Cisco N9000 and Cisco 8000 systems with liquid-cooled designs. Powered by the Cisco Silicon One G300 chip, these systems deliver 102.4 terabits per second of throughput, doubling the prototype’s capacity in the same physical footprint. That means significantly higher bandwidth density and a nearly 70% energy improvement: a single system now offers bandwidth that would previously have required six prior-generation systems.

We’re focusing on increasing energy efficiency, lowering operating costs, and simplifying operations as the AI ecosystem buildout expands beyond hyperscalers. “Our liquid cooling isn’t bolted on,” says Vic. “The silicon, optics, and cooling are designed as one system, so operators can build this into their data centers from day one.”

 

No one scales alone

The startup mentality isn’t just about building fast and getting to market first. It’s about knowing what to build and knowing when to bring in partners who are leading their industries.

We created the Cisco Engineering Alliances program to scale our ability to engineer and validate new solutions. The program brings together hardware, software, and services partners to reduce integration risk and accelerate time to deployment for teams building AI infrastructure at speed.

These partnerships are especially critical in an evolving regulatory landscape focused on waste-heat capture. Germany’s Energy Efficiency Act (EnEfG) and broader European regulations will require certain data centers to capture waste heat and feed it into municipal heating systems. When heat is captured in fluid and transferred to a heat exchanger rather than expelled as hot air, we turn waste into a resource.

Innovation doesn’t ship and then stop

True innovation is a constant state of looking around the corner, and it can come from anywhere inside a company this size. We build towards where the market is heading, listen to customers’ challenges, and scale an ecosystem that our customers can immediately trust. As the pan keeps getting hotter, we are already moving beyond the switch, exploring immersion cooling and extending liquid-cooling architectures to storage and power supplies.

At Cisco, we aren’t just building for today’s AI demands; we are building the foundation for the next decade of infrastructure. We listen, we prototype, we partner, and we scale. That is how we lead in the AI era.

Visit our direct liquid cooling demo at Cisco Live 2026 in Las Vegas to learn how to achieve energy savings in the AI era.

Authors

Denise Lee

Vice President

Engineering Sustainability Office