In the AI era, performance is measured in time: time to first token, time to remediation, and ultimately, time to value. Time is the new currency, but AI only moves as fast as the network it runs on.
So, the question becomes: Is your network accelerating AI, or quietly holding it back? AI workloads are fundamentally different from traditional applications in three ways:
- They are massive data engines. They generate and consume traffic at an intensity that pushes even advanced 800G infrastructure to its limits.
- They are power intensive. Data center power consumption is expected to double by 2026 because of AI, demanding infrastructure that delivers more performance per watt.
- They are highly sensitive to network conditions. Even a moment of latency, congestion, or jitter—often insignificant for traditional applications—can slow or halt AI training jobs, leaving expensive accelerated compute waiting instead of producing outcomes.
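The cost of that waiting compute can be sketched with back-of-envelope arithmetic. All figures below (cluster size, hourly rate, stall fraction) are hypothetical assumptions for illustration, not measured values:

```python
# Back-of-envelope cost of GPUs idled by network stalls.
# All figures are hypothetical assumptions, not measured values.

def idle_cost(num_gpus: int, gpu_hourly_cost: float,
              stall_fraction: float, hours: float) -> float:
    """Dollar value of GPU time lost while accelerators wait on the network."""
    return num_gpus * gpu_hourly_cost * stall_fraction * hours

# Assumed: a 1,024-GPU cluster at $2/GPU-hour, stalled 5% of a 24-hour day.
wasted = idle_cost(num_gpus=1024, gpu_hourly_cost=2.0,
                   stall_fraction=0.05, hours=24)
print(f"${wasted:,.0f} of compute idled per day")  # → $2,458 of compute idled per day
```

Even a few percent of network-induced stall compounds quickly at cluster scale.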
That’s why AI networking must deliver both scale and simplicity. Scale keeps data flowing, so GPUs stay productive, and jobs complete faster. Simplicity ensures these complex environments can be operated reliably, with issues identified and resolved quickly before performance degrades.
Today, Cisco is launching a new wave of innovation designed to scale out and scale across for the agentic era. This includes the introduction of the Silicon One G300 ASIC, 102.4T systems, 1.6T optics, P200-powered systems, AgenticOps for data center networking, and Cisco Nexus One.
Cisco Nexus One: Meeting customers where they are in their AI journey
Most organizations are not building AI infrastructure from scratch. They’re working across existing environments—traditional IT, cloud, edge, and AI—often spanning multiple geographies, regulatory regimes, and sovereign domains. The goal: deploy AI workloads without disrupting business. However, ambitions to accelerate AI adoption come with mounting complexity, from siloed infrastructure and operational sprawl to new security threats across the AI lifecycle.
This is where Cisco Nexus One comes in. Designed to adapt to customers’ unique needs and stages of AI transformation, Nexus One provides flexibility and interoperability across the stack, while maintaining operational consistency across the network.
This comprehensive foundation spans silicon, systems, optics, software, and operating models—with recent advancements driven by significant investments at every layer.
Scale-out networking: Nexus infrastructure powered by Silicon One G300
At the heart of Nexus One is Cisco’s highly programmable Silicon One ASIC, purpose-built for AI. Designed to evolve with shifting AI workloads and dynamic environments, it supports a wide range of network roles and use cases.
With the introduction of Cisco Silicon One G300—the industry’s most advanced 102.4 Tbps scale-out switching ASIC—we enable distributed AI workloads and backend fabrics for massive clusters, unlocking new levels of scale and performance.
AI traffic often overwhelms traditional switches with synchronized microbursts, but the G300’s industry-leading fully shared packet buffer handles surges without packet loss, delivering up to 2.5x greater burst absorption. Simulations also show a 28% reduction in job completion time (JCT) with intelligent load balancing, significantly increasing AI compute efficiency.
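A toy model illustrates why a fully shared buffer absorbs synchronized microbursts that defeat statically partitioned per-port buffers. The port count, buffer size, and burst sizes below are illustrative assumptions, not G300 specifications:

```python
# Toy model: shared vs. statically partitioned packet buffers under a
# synchronized microburst. All sizes are illustrative assumptions.

def drops_static(bursts, total_buffer, num_ports):
    """Each port can use only its 1/num_ports slice of the buffer."""
    per_port = total_buffer / num_ports
    return sum(max(0.0, b - per_port) for b in bursts)

def drops_shared(bursts, total_buffer):
    """Any port may draw from the whole pool until it is exhausted."""
    return max(0.0, sum(bursts) - total_buffer)

# 64 ports and 100 MB of buffer, but the burst lands on just 4 ports at once:
bursts = [10.0] * 4 + [0.0] * 60        # MB queued per port during the burst
print(drops_static(bursts, 100.0, 64))  # each port gets only ~1.56 MB -> drops
print(drops_shared(bursts, 100.0))      # pool absorbs the whole 40 MB -> 0.0
```

The same total buffer survives the burst when any port can draw on the whole pool, which is exactly the case collective-communication traffic creates.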
These gains increase as AI environments scale, delivering greater benefits with larger models, bigger clusters, and more network tiers where collective communication dominates. The business impact is straightforward: faster training means higher GPU utilization, quicker model convergence, and lower operational costs.
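As a quick sanity check on what a JCT reduction is worth, the quoted 28% figure translates into a throughput multiplier as follows (simple arithmetic on the number above; no cluster details assumed):

```python
# Converting the simulated 28% JCT reduction into a throughput multiplier.
jct_reduction = 0.28                    # 28% shorter job completion time
speedup = 1.0 / (1.0 - jct_reduction)   # jobs completed per unit time
print(f"{speedup:.2f}x training throughput on the same GPUs")  # → 1.39x
```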
The G300 powers the new Cisco N9364F-SG3 switches, which offer 64 ports of 1.6T OSFP connectivity in a compact form factor and deliver breakthrough performance for high-density AI clusters in both fully liquid-cooled and air-cooled deployments.

Scaling AI with power-efficient infrastructure
As data center power demands grow, efficient infrastructure becomes essential for sustainable AI scaling.
Our new high-radix 102.4T liquid-cooled system reduces overall power consumption, delivering 70% greater power efficiency than an equivalent-bandwidth solution built from six of today’s 51.2T air-cooled systems.
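To make the efficiency claim concrete: if a system delivers 70% more bandwidth per watt, the same aggregate bandwidth needs roughly 1/1.7 of the power. The 20 kW baseline below is a hypothetical placeholder, not a published spec:

```python
# Power implied by "70% greater efficiency" under an assumed baseline.
baseline_power_kw = 20.0   # hypothetical draw of six 51.2T air-cooled systems
efficiency_gain = 0.70     # 70% more bandwidth per watt

# Same aggregate bandwidth at 1.7x perf-per-watt needs 1/1.7 of the power:
new_power_kw = baseline_power_kw / (1.0 + efficiency_gain)
print(f"{new_power_kw:.1f} kW vs {baseline_power_kw:.1f} kW baseline")  # → 11.8 kW vs 20.0 kW baseline
```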
These advancements minimize energy use, reduce cooling requirements, and extend component lifespan, enabling more cost-effective and sustainable AI data centers.

Scale-across networking: Nexus infrastructure powered by Silicon One P200
Building on our October 2025 Silicon One P200 announcement, we’re bringing scale-across capabilities to Cisco Nexus One with the Cisco N9364E-SP2R fixed switches and N9836E-SP2R modular line card. The fixed systems, powered by the P200 ASIC, feature 64 ports of 800G OSFP or QSFP-DD with quantum-safe line-rate encryption and industry-leading deep buffers—optimized for universal spine use cases in distributed and multi-cloud environments. The line card, featuring 36 ports of 800G OSFP, integrates seamlessly with the N9800 modular switches to deliver scalable growth based on customer needs.
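Deep buffers matter for scale-across links because inter-site round-trip times put a large amount of data in flight. A rough bandwidth-delay product sketch, with an assumed 10 ms RTT (illustrative, not a deployment figure):

```python
# Bandwidth-delay product: data in flight on a long-haul scale-across link.
# The 10 ms RTT is an illustrative assumption, not a deployment figure.

def bdp_mb(link_gbps: float, rtt_ms: float) -> float:
    """Megabytes in flight on the link: bandwidth x round-trip time."""
    return (link_gbps * 1e9 / 8) * (rtt_ms / 1e3) / 1e6

# One 800G port between sites roughly 1,000 km apart (~10 ms RTT in fiber):
print(f"{bdp_mb(800, 10):.0f} MB in flight per port")  # → 1000 MB in flight per port
```

Absorbing that much in-flight data during congestion is what pushes spine switches toward deep-buffer designs.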
This architecture enables organizations to deploy AI workloads across geographic regions, sovereign clouds, or multiple data center sites while maintaining the performance characteristics of a single, unified infrastructure.
Breaking the power wall: 800G OSFP Linear Pluggable Optics
As part of our ongoing commitment to delivering exceptional cost benefits and sustainable innovation, we’re introducing Cisco OSFP 800G Linear Pluggable Optics (LPO) with the Cisco N9364E-SG2X powered by Silicon One G200 series. By integrating signal processing into the Silicon One ASIC, LPO reduces per-module power consumption by 50% and lowers overall system power by 30%. With this breakthrough innovation, organizations can scale AI infrastructure efficiently and meet energy-efficiency targets.
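The two figures together imply a rough power breakdown: if halving per-module optics power cuts total system power by 30%, optics must have drawn roughly 60% of the baseline system power. A minimal arithmetic sketch of that inference (no hardware specs assumed):

```python
# If halving optics-module power (-50%) cuts total system power by 30%,
# optics must account for ~60% of baseline system power. Pure arithmetic.
module_saving = 0.50   # LPO removes the DSP: ~50% less power per module
system_saving = 0.30   # overall system power reduction quoted above

optics_share = system_saving / module_saving
print(f"Optics ≈ {optics_share:.0%} of baseline system power")  # → Optics ≈ 60% of baseline system power
```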
Next-level simplicity: Unified operating model for smarter AI networks
Managing today’s AI networks shouldn’t be complicated. That’s why Cisco’s flexible operating model delivers a seamless, consistent experience—no matter where your workloads live.
We are excited to announce additional enhancements to our Nexus One portfolio:
- Uniform networking and deep visibility: Nexus One provides consistent networking across on-premises, sovereign cloud, underlay, overlay, and Kubernetes environments. With Cisco Isovalent, you also gain deep visibility into service IPs and traffic patterns, eliminating operational blind spots.
- Simplified orchestration and management: Scaling your AI fabric is simpler than ever. Cisco N9000 systems now support cloud management via Nexus Hyperfabric, enabling easy orchestration and scaling across multiple locations with built-in multi-site orchestration.
- Flexibility at the core: Run Cisco NX-OS, ACI, or now SONiC—all on the same Cisco N9000 systems. Protect your investments and choose the operational approach that fits your business best, with no compromises.
- Security built in: A hardware root of trust on every system and quantum-resistant line-rate encryption are foundational. With the new Cisco Live Protect enforcement mode, organizations can automatically deploy kernel-level mitigations in real time, addressing vulnerabilities and zero-day exploits without maintenance windows or downtime.
- Integrated operations and analytics: With new native Splunk integration in on-premises Nexus Dashboard, you get unified analytics and federated search across network telemetry and the Nexus Dashboard Data Lake—no data duplication required. This helps maintain data sovereignty, cut operational costs, and speed up mean time to remediation (MTTR).
But operational simplicity is only the beginning. Cisco is pioneering new innovations for intelligent, autonomous network operations.
Agentic era’s new frontier: AI Canvas brings AgenticOps to life
Traditional network operations models can’t keep pace with the scale and complexity of managing AI infrastructure. That’s why we’re introducing AgenticOps for data center networking through AI Canvas—giving teams the ability to troubleshoot, gain insights, configure, and optimize enterprise infrastructure, including AI fabrics, through guided, human-in-the-loop conversations.
Powered by the Cisco Deep Network Model—built on over 40 years of networking expertise and unified network and security telemetry—AI Canvas delivers 20% more accurate reasoning than general-purpose LLMs on networking tasks.
Maximize AI infrastructure investments
At the end of the day, scaling AI isn’t just about networking—it’s about moving faster, innovating boldly, and realizing the full value of your AI investments.
With Silicon One G300 and P200 powering Cisco Nexus One, and AgenticOps making operations smarter, we’re building the foundation to help organizations get the most out of their AI investments.
By unifying silicon, systems, and intelligent operations with integrated security and a consistent operational experience, we’re helping customers improve GPU utilization and accelerate their AI journeys.
Discover how Cisco can help you achieve secure, efficient, and scalable AI success—so you can turn vision into results, from day one to breakthrough.
LPO and QSFP-DD or OSFP in the context of liquid cooling? Too many cables, too much maintenance. When is Cisco releasing a CPO switch with OIF ELSFP compatibility?
You’ve hit on a critical inflection point for data center design. While the blog highlights our latest 102.4T systems (like the Cisco N9364F-SG3) using Linear Pluggable Optics (LPO) to reduce power by 50% per module, we recognize that as we move toward 200T and beyond, the ‘power wall’ and cabling density become even more challenging.
Cisco is actively involved in the OIF (Optical Internetworking Forum) and remains a key contributor to the development of CPO standards, including ELSFP compatibility for external light sources. While the current focus for the ‘Agentic Era’ launch is maximizing the efficiency of the Silicon One G300 with pluggable 1.6T optics – which offers the best balance of serviceability and performance for today’s AI clusters – we are continuously evaluating CPO for future generations where thermal density may necessitate moving the optics inside the package.