
Artificial intelligence (AI) compute is outgrowing the capacity of even the largest data centers, driving the need for reliable, secure connections between data centers hundreds of kilometers apart. As AI workloads become more complex, traditional approaches to scaling up and scaling out computing power are reaching their limits. This creates major challenges for existing infrastructure: network capacity, energy consumption, and the interconnection of distributed components of AI systems.

This blog explores these critical challenges facing AI data centers, examining how both public policy and advanced technology innovations are working to address these bottlenecks, enabling greater energy efficiency, performance, and scale for a new era of “scale-across” AI networking between data centers.

AI scaling imperative: core challenges for data centers

Interconnectivity bottlenecks: AI workloads demand ultra-high-speed, low-latency communication, often between thousands or even millions of interconnected processing units. Traditional data center networks struggle to keep pace, leading to inefficiencies and reduced computational performance. As Europe builds its new AI Factories and Gigafactories, best-in-class interconnectivity will help maximize their computing output.

Distributed workloads (“scale-across”): To overcome the physical and power limitations of single data centers, organizations are distributing AI workloads across multiple sites. This “scale-across” approach necessitates robust, high-capacity, and secure connections between these dispersed data centers.

Energy: AI workloads are inherently energy intensive. Scaling AI infrastructure increases energy demands, posing operational challenges and increasing costs.

Public policy and Europe’s AI infrastructure

Through policy initiatives like the upcoming Digital Networks Act (DNA) and Cloud and AI Development Act (CAIDA), the EU seeks to strengthen Europe’s digital infrastructure. The EU will attempt to leverage these to help develop a robust, secure, high-performance and future-proof digital infrastructure – all prerequisites to succeed in AI.

We expect CAIDA to directly address the energy challenges posed by the exponential growth of AI and cloud computing. Recognizing that data centers are currently responsible for approximately 2 to 3% of the EU’s total electricity demand (and demand in absolute terms is projected to double by 2030, compared to 2024), CAIDA and the EU Sustainability Rating Scheme for Data Centers should seek to streamline requirements and KPIs for energy efficiency, integration of renewable energy sources, and energy use reporting across new and existing data centers. CAIDA could act as a policy lever as the EU seeks to triple its data center capacity within the next 5 to 7 years.

The EU AI Gigafactories project goes exactly in this direction. As the EU and its Member States work to designate the Gigafactories of tomorrow, they will need to be built with the best-in-class technology. This means orchestrating an architecture that integrates the highest compute capability alongside the fastest interconnectivity, all resting on a secure and resilient infrastructure.

Further, the EU’s Strategic Roadmap for Digitalisation and AI in the Energy Sector sets a framework for integrating AI into power systems to improve grid stability, forecasting, and demand response. The roadmap will not only tackle how AI workloads impact energy demand, but also how AI can optimize energy use, enabling real-time load balancing, predictive maintenance, and energy-efficient data center operations.

Digital solutions can help accelerate the deployment of new energy capacity while enabling AI infrastructure to work better, because it’s not just about bigger data centers or faster chips. For example, routers can now enable data center operators to dynamically shift workloads between facilities in response to grid stress and demand-response signals, optimizing energy use and supporting grid stability.
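To make the idea concrete, here is a minimal, illustrative sketch of that decision logic. This is not a Cisco API or product feature; the site names, signal names, and scoring weight are assumptions invented for the example.

```python
# Illustrative sketch (not a real router API): choosing where to place an AI
# workload when each site reports grid stress and its current renewable share.

from dataclasses import dataclass


@dataclass
class Site:
    name: str
    grid_stress: float      # 0.0 (relaxed grid) .. 1.0 (demand-response event)
    renewable_share: float  # fraction of current supply from renewables


def pick_site(sites: list[Site]) -> Site:
    """Prefer sites with low grid stress and a high renewable share.

    The 0.5 weight on renewables is an arbitrary illustrative choice.
    """
    return min(sites, key=lambda s: s.grid_stress - 0.5 * s.renewable_share)


sites = [
    Site("frankfurt", grid_stress=0.9, renewable_share=0.4),  # DR event active
    Site("stockholm", grid_stress=0.1, renewable_share=0.8),
]
print(pick_site(sites).name)  # stockholm
```

A production system would of course weigh many more factors (data locality, latency budgets, job interruptibility), but the principle is the same: treat energy signals as a scheduling input rather than a fixed constraint.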

The EU needs a strategic and holistic approach to scale AI capacities, connect AI workloads, make them more efficient, cut AI energy needs, and build stronger protections for its digital infrastructure.

Why connectivity is AI’s prerequisite

Data centers now host thousands of extremely powerful processors (GPUs doing the heavy AI calculations) that need to work together as one giant AI supercomputer. But without a highly efficient “nervous system”, even the most advanced AI compute is isolated and ineffective.

That’s why Cisco built the Cisco 8223 router, powered by the Cisco Silicon One P200 chip. The goal is to bind these processors, enabling seamless, low-latency communication. Without high-speed, reliable interconnectivity, individual GPUs cannot collaborate effectively, and AI models cannot scale. Routing is part of the foundational network infrastructure that enables AI to function at scale, securely, and efficiently. AI compute is important, but AI connectivity is the silent, indispensable force that unlocks AI’s potential.

Five keys to understanding why Cisco’s latest routing technology for AI data centers matters

  1. Unprecedented speed, capacity and performance: the new Cisco router is a highly power-efficient routing solution for data centers. Powered by Cisco’s latest chip, the highest-bandwidth 51.2 terabits per second (Tbps) deep-buffer routing silicon, the system can handle massive volumes of AI traffic, processing over 20 billion packets per second. That’s like having a super-efficient highway with thousands of lanes, allowing AI data to move from one place to another without slowing down.
  2. Power efficiency: the system is engineered for exceptional power efficiency, directly helping to mitigate the high energy demands of AI workloads and contributing to more efficient data center operations. Compared to a setup from two years ago with similar bandwidth output, this new system takes up 70% less rack space, making it the most space-efficient system of its kind (from 10 to just 3 rack units, RU). This is crucial as data center space becomes scarce. It also reduces the number of data-plane chips needed by 99% (from 92 chips down to one), with a device that’s 85% lighter, helping lower the carbon footprint from shipping. Most importantly, it slashes energy use by 65%, a vital saving as energy becomes the biggest cost and physical constraint for data centers.
  3. Buffer: advanced buffering capabilities absorb large traffic surges to prevent network slowdowns. Sometimes, data comes in huge bursts. A “deep buffer” is like a giant waiting area for data. It can hold onto a lot of data temporarily, so the network doesn’t get overwhelmed and crash.
  4. Flexibility and programmability: the Cisco chip that powers the system also makes it “future-proof.” That means that the network can adapt to new communication standards and protocols without requiring heavy hardware upgrades.
  5. Security: with so much important data, keeping it safe is crucial. Security features must be built right into the hardware, protecting data as it moves. This also means encryption for post-quantum resiliency (encrypting data at full network speed with advanced methods against future, more powerful quantum computers), offering end-to-end protection from the ground up.

Building the digital foundation for European innovation

The future of European innovation and its ability to harness AI for economic growth and societal benefit will be determined by whether it can build and sustain its critical digital infrastructure.

A resilient AI infrastructure will need to be built on these five pillars: computing power, fast and reliable connections, robust security, flexibility, and highly efficient use of energy. Each pillar matters. Without powerful chips, AI can’t learn or make decisions. Without high-speed connections, systems can’t work together. Without strong security, data and services are at risk. Without flexibility, adaptation will be too costly or slow. And without power-efficient solutions, AI could hit a wall.

Cisco is proud to provide solutions to build an infrastructure that is ready for the future. We look forward to collaborating with the EU, its Member States, and companies operating in Europe to fully unlock the power of AI.

Authors

Matteo Quattrocchi

Head of EU AI Policy

Government Affairs

Diane Mievis

Director, Government Affairs

Global Policy and Government Affairs