As AI/ML initiatives grow to new heights, the network has become a critical enabler for innovation and business growth. At Cisco, we recognize that today’s AI clusters can no longer be confined within a single data center. Instead, a “scale-across” approach is needed to seamlessly extend AI workloads and optimize resources across multiple data center locations.
Power constraints and resiliency requirements are causing hyperscalers, neoclouds, and enterprises to embrace distributed AI clusters that span campus and metro regions, all of which need secure, high-performing, high-capacity, and energy-efficient connectivity. That’s why we are excited to introduce the Cisco 8223, our latest addition to the Cisco 8000 fixed series, optimized for large-scale disaggregated fabrics within and across data centers, enabling customers to scale AI infrastructure with unmatched efficiency and control.
Meeting the demands of distributed AI
The Cisco 8223 is a power-optimized fixed router, making it ideal for environments with limited power. With 51.2 Tbps of capacity and high programmability, it gives organizations maximum flexibility to adapt to evolving networking needs. Supporting both Octal Small Form-Factor Pluggable (OSFP) and Quad Small Form-Factor Pluggable Double Density (QSFP-DD) optical form factors, the 8223 is purpose-built to connect geographically dispersed AI clusters. In an era where AI model training and inferencing require seamless interconnectivity and massive data throughput, the 8223 stands out with its ability to provide consistent, high-bandwidth, low-latency communication across multiple data centers.
Here are some of the 8223’s key innovations.
Powered by Cisco Silicon One P200
At the heart of the 8223 is the groundbreaking Cisco Silicon One P200, a 51.2 Tbps deep-buffer routing chip. This chip delivers unrivaled throughput while maintaining the lowest power consumption, empowering organizations to scale sustainably from small networks to massive traffic backbones.
Deep-buffer and high-radix architecture
The 8223’s deep-buffer design provides ample memory to temporarily store packets during congestion or traffic bursts, an essential feature for AI networks where inter-GPU communication can create unpredictable, high-volume data flows. Combined with its high-radix architecture, the 8223 allows more devices to connect directly, reducing latency, saving rack space, and further lowering power consumption. The result is a flatter, more efficient network topology supporting high-bandwidth, low-latency communication that is critical for AI workloads.
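The role of buffer depth during a burst can be illustrated with a toy first-in-first-out queue model. This is a sketch only; the packet counts and rates below are made-up illustrative numbers, not measurements or specifications of the 8223:

```python
# Toy FIFO simulation: a burst arrives faster than the link can drain it.
# A deeper buffer absorbs the excess instead of tail-dropping it.
# All numbers are illustrative, not product specs.

def simulate(burst_pkts: int, arrive_per_tick: int, drain_per_tick: int,
             buffer_pkts: int) -> int:
    """Return the number of packets dropped while one burst arrives and drains."""
    queued, dropped, remaining = 0, 0, burst_pkts
    while remaining > 0 or queued > 0:
        arriving = min(arrive_per_tick, remaining)      # burst arrival this tick
        remaining -= arriving
        accepted = min(arriving, buffer_pkts - queued)  # fit what the buffer allows
        dropped += arriving - accepted                  # tail-drop the rest
        queued += accepted
        queued -= min(drain_per_tick, queued)           # link drains at line rate
    return dropped

# A burst of 1,000 packets arriving at 10x the drain rate:
for buf in (100, 1_000):
    print(f"buffer={buf:5d} pkts -> dropped={simulate(1_000, 100, 10, buf)}")
```

With the shallow buffer the queue fills almost immediately and most of the burst is dropped; with a buffer sized to the burst, nothing is lost and the queue simply drains after the burst ends.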
Flexible operating system support
The Cisco 8223 supports a variety of network operating systems (NOSs), including open-source options such as SONiC, with planned support for Cisco IOS XR. Additionally, Cisco Nexus 9000 Series Switches, powered by Silicon One P200, are planned to offer support for NX-OS. This approach meets a wide range of requirements and allows customers to tailor their networks to meet their specific needs.
Integrated security
With inline MACsec for data-in-transit protection, the Cisco 8223 enables seamless networking to support secure, latency-sensitive AI workloads across distributed data center environments.
Advanced optics
The 8223 supports coherent optics up to 800GE in both OSFP and QSFP-DD form factors, enabling high-speed, long-distance connectivity for high-performance networking.
Compact, scalable form factor
The Cisco 8223 seamlessly integrates into existing infrastructure or serves as the foundation for new deployments. Its compact 3 RU fixed form factor design offers operational simplicity and ease of scaling, whether enhancing current capabilities or building out new distributed AI clusters.
Product benefits include:
- Accelerated time-to-value: A deep integration with multiple NOSs, including open-source software ecosystems, delivers agility and operational flexibility.
- Enhanced sustainability: Industry-low power consumption supports green IT initiatives and long-term cost savings.
- Performance optimization: Deep buffers help prevent packet loss, and inline MACsec protects your data in transit.
Shaping the future of AI networking
The Cisco 8223 ushers in a new era of scalable, secure, and efficient networking for distributed AI clusters. As your organization looks to leverage the power of AI across regions and data centers, the 8223 ensures your network can keep pace by delivering the performance, flexibility, and peace of mind that today’s digital leaders demand.
Visit Cisco during OCP Global Summit at Booth #B22, where these fixed systems will be on display.
Additional resources:
Check out the Cisco 8000 Series Routers landing page
Interesting. I remember how Cisco was averse to deep buffers saying it doesn’t solve the problem. So now Cisco is falling in line behind its competitors.
Thank you for your comment. The context in which a solution like a deep-buffered router is applied matters. Routers with deep buffers are not needed for every use case, but for the “scale-across” AI use case we are discussing here, the long distances (several hundred kilometers or more) that packets travel between data centers mean higher round-trip times and a higher chance that packets in flight are dropped due to congestion or failure scenarios, significantly degrading job completion time (JCT). Deep buffers help prevent this by holding packets temporarily, avoiding drops and significantly improving JCT and thus the performance of AI workloads.
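The arithmetic behind the long-distance argument is the bandwidth-delay product: the data a link can have in flight is its rate times the round-trip time. A quick back-of-the-envelope sketch in Python (the 800 Gbps link speed and 1,000 km distance are illustrative assumptions, not 8223 specifications):

```python
# Back-of-the-envelope bandwidth-delay product (BDP) estimate.
# Illustrative numbers only -- not Cisco 8223 specifications.

SPEED_OF_LIGHT_FIBER_M_S = 2.0e8   # roughly 2/3 of c in optical fiber

def bdp_bytes(link_gbps: float, distance_km: float) -> float:
    """Bytes in flight on a link of the given speed over the given fiber distance."""
    rtt_s = 2 * (distance_km * 1_000) / SPEED_OF_LIGHT_FIBER_M_S
    return link_gbps * 1e9 * rtt_s / 8

# An 800 Gbps link between data centers 1,000 km apart:
rtt_ms = 2 * 1_000_000 / SPEED_OF_LIGHT_FIBER_M_S * 1e3
print(f"RTT ~{rtt_ms:.0f} ms, ~{bdp_bytes(800, 1_000) / 1e9:.0f} GB in flight")
```

At these assumed figures roughly a gigabyte of data is in flight at any moment, which is why buffering requirements grow with distance between data centers.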
I certainly love to see Cisco moving into AI infrastructure and trailblazing as the company has historically done. I also hope the company remains ubiquitously integrated with its edge customer and user base, and never cedes that ground to competitors or simple indifference. Keep up the good work and the exciting communications.
Thanks Tony! We’re working with a great sense of urgency here at Cisco and excited about all the AI infrastructure innovation we’re bringing for our Hyperscaler, NeoCloud, Service Provider and Enterprise customers. Stay tuned for lots more to come!
Welcome to the 800G club – 7 years after everyone else. Way to set a “benchmark”. BTW 102.4Tbps switches have been announced by your competitors – just saying.
Thanks for your comment! It’s true that 51.2Tbps and even 100Tbps data center switches have been announced. Such switches are typically used only inside a data center for scale-out use cases, and they are typically shallow-buffer, fixed-pipeline (meaning limited programmability) devices that do not feature security capabilities like MACsec.
However, the Cisco 8223 is the industry’s first fixed-form-factor (meaning single-chip system) router supporting 800GE interfaces. This device, built with Cisco’s 51.2Tbps Silicon One P200 chip, has deep buffers, is highly programmable – allowing for a wide variety of use cases, current and future – and has security features like integrated MACsec, allowing encryption of data in flight. It is also much better suited for the new class of scale-across AI use cases we’re seeing emerge in the industry.
Hope that clarifies!
I thought this was an AI announcement?
I now see it’s just a product announcement with the OS to follow – hence Cisco’s proprietary SONiC (which, by the way, tries to ride on the coattails of “openness” when it is a proprietary fork), followed by IOS XR and NX-OS.
So was this clickbait when you talked about AI? My response was on that front, where Cisco is 7 years too late.
Also, 800G deep-buffer chips have been announced by your competitors as well; however, they function at 25.6Tbps. So are you claiming leadership in the routing space with an AI announcement?
If it is AI glory you seek, then please read the new UEC 1.0 spec, as it will literally eliminate PFC and reduce buffer requirements. Those specs have already been announced, with compliant devices also available.
So on the AI front you are behind unless now you want to claim leadership in the SP space?
In one swoop you have informed the market that you are struggling in the AI space, so well done! Or will you now tell me it’s just the DDCs you plan to compete with, where the competition already has 800G? Question is, do you also slice packets into “cells”? Maybe you do. After all, the founders of Leaba were also at Dune 😉
I now realise this was a DCI announcement riding on the “AI” hype cycle.
Lol, bro doesn’t even know what he’s talking about. Nobody has had 800G gear on the market let alone 800G routing gear. Cisco’s G200 chip has also been available and on the market for the switching space for a while now. Look at Arista, they’re just starting to roll out 800G gear too, nobody is “7 years late to the game”, lol. You’re just an idiot blowing smoke.
Broadcom TH5 – release date Aug 2022
Marvell Innovium Teralynx – release date mid-2024
Jericho3-AI – release date April 2023
Q3D – release date Aug 2023
Jericho 4 – release date Aug 2025
So maybe not 7 years. But shipping products and announcing them are two different things, so more like 3-4 years behind the market. I stand corrected: it’s not 7 years; Cisco is just 3 or 4 years behind in general on 800G, and second to announce a 51.2Tbps 800G deep-buffer platform.
Hence this is not a benchmark but more of an announcement by a company that desperately needs a win, even if it is only a marketing one. By the time the product ships, we will have 100Tbps chips in the DC space and Jericho4 in mass availability.
So welcome to the club. You are following in the footsteps of trail blazers.
Hi, Mr. “is too funny”: In fact, you are talking about a switch, not a router. Just to clarify, Cisco announced the same capability in the same period; please read below when the announcement was made:
Broadcom began sampling the Tomahawk 5 (TH5) in October 2022 and started shipping it in production volume in March 2023. The sampling of a co-packaged optics version of the TH5 was announced in March 2024. The BCM78900 architecture delivers complete L2 and L3 switching with routing capabilities – NOT a core router.
https://www.broadcom.com/products/ethernet-connectivity/switching/strataxgs/bcm78900-series
Now you can read that the announcement was made by Cisco at the same time:
Cisco first unveiled its 800G innovations in October 2022 and announced further 800G innovations in March 2023. The initial product announcements included switches like the Nexus 9232E and the 8111 at the Open Compute Platform Summit in October 2022. Here you have the launch:
https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2023/m03/new-cisco-800g-innovations-help-to-supercharge-the-internet-for-the-future-by-improving-networking-economics-and-sustainability-for-service-providers-and-cloud-providers.html#:~:text=SAN%20JOSE%2C%20Calif.%2C%20March,that%20remains%20unconnected%20or%20underserved.
To your point, until now we have been serving AI only in the cloud, without data centers providing access to AI through the edge. We are now starting to deploy a new cloud-edge-DC dynamic: exabytes of data and trillions of tokens will be transferred from DC to edge and then to cloud, and vice versa. On this back-and-forth journey there are multiple routers that need secure AI capabilities as routers, not only switches. This is a powerful reason to make this announcement.
With all my respect, I hope this helps to clarify, dear Mr. “is too funny” 😀
Interesting – this article is quite useful for my upcoming work.
I’m excited to see how the Cisco 8223 router will enhance distributed AI networking! The advancements in optimizing resources across data centers are crucial for future AI workloads. Looking forward to more updates!
This presentation shows how distributed AI networks have become essential in the age of big data. Innovations like the Cisco 8223 router represent a major step toward building smart, scalable infrastructure. It’s exciting to see how technology is evolving to meet the demands of modern artificial intelligence.