
As a citizen of the world and the father of two young boys, I am acutely aware of the long-term effects of global climate change. I wake up every day thinking about what we can do to alter our path and minimize climate change. In my personal life, I’ve made changes to reduce my carbon footprint, which is great, but I want to do more. I’d like to make a difference in my professional life too.

One of the great things about working for an industry-leading company like Cisco is the opportunity to make this kind of change. As a leading supplier of critical technology to data center customers, Cisco can positively impact electricity consumption and greenhouse gas emissions.

Why Is Reducing Consumption So Important?

In 2018, data centers were estimated to have consumed 205 TWh/yr of electricity worldwide, up from 194 TWh/yr in 2010. Over this time period of relatively modest increases in energy consumption, traffic through data centers has increased 5-6X. Several competing forces are at work. Internet traffic is increasing as broadband and wireless speeds increase, enabling ubiquitous high-definition video streaming and all the services of the mobile app economy. Internet traffic is also increasing as businesses move IT from internal networks onto cloud services. On the other hand, because of scaling effects and higher equipment utilization, cloud-based data centers are inherently more energy-efficient than separate enterprise facilities, while also having more advanced and efficient facilities design, operation, and installed equipment.
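
To see why those two trends are striking together, here is a quick back-of-the-envelope sketch in Python (the energy-per-traffic figure is derived from the numbers above, not stated in the source):

    # Rough arithmetic behind the trend above (derived, not from the source):
    energy_growth = 205 / 194   # energy use grew only ~6% from 2010 to 2018
    traffic_growth = 5.5        # midpoint of the 5-6X traffic growth cited above
    relative_energy_per_bit = energy_growth / traffic_growth
    print(f"energy per unit of traffic vs 2010: {relative_energy_per_bit:.2f}x")
    # ~0.19x, i.e. roughly an 80% drop in energy per unit of traffic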

Because of the increasing demand for cloud services, power efficiency continues to be a big customer ask. Hyperscale service providers, who operate the world’s largest data centers, are among these customers. Their data centers are among the most efficient in the world, achieving Power Usage Effectiveness (PUE) ratios of less than 1.11, versus an industry average of 1.67. (PUE is a measure of data center energy efficiency: the ratio of total facility energy to the energy used by just the compute, storage, and networking IT equipment. If a data center used no net energy for lighting, cooling, and other facility uses, its PUE would be 1.0.)
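
To make the ratio concrete, here is a minimal sketch of the PUE arithmetic (the 1,000 kWh IT load is an illustrative assumption; the 1.67 and 1.11 ratios come from the figures above):

    def pue(total_facility_kwh, it_equipment_kwh):
        """Power Usage Effectiveness: total facility energy over IT equipment energy."""
        return total_facility_kwh / it_equipment_kwh

    # An average facility burns 1,670 kWh overall to power a 1,000 kWh IT load;
    # a hyperscale facility needs only about 1,110 kWh for the same IT load.
    print(pue(1670, 1000))   # 1.67 (industry average)
    print(pue(1110, 1000))   # 1.11 (hyperscale)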

Even with a PUE of 1.0, our customers are still very interested in IT equipment energy consumption. Further increases in data center throughput are constrained by the total amount of power that a data center facility has available. As a result, hyperscale providers who want to deploy higher bandwidth systems to accommodate continually increasing traffic have no choice but to look for systems with even better power efficiency.
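
As a rough sketch of that constraint (every number here is hypothetical, chosen only to illustrate the math), deployable bandwidth under a fixed facility power budget scales directly with bandwidth per watt:

    def deployable_tbps(it_power_budget_w, watts_per_tbps):
        """Bandwidth a facility can host inside a fixed IT power budget."""
        return it_power_budget_w / watts_per_tbps

    BUDGET_W = 10_000_000   # hypothetical 10 MW IT power budget

    # Halving the watts that each Tb/s costs doubles the bandwidth the same
    # facility can deploy, with no new power or cooling build-out.
    print(deployable_tbps(BUDGET_W, watts_per_tbps=80))   # 125,000 Tb/s
    print(deployable_tbps(BUDGET_W, watts_per_tbps=40))   # 250,000 Tb/s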

Changing the Power Consumption Paradigm

Although we’ve made significant technology advances in optics, silicon, and systems design over the years, the limiting factor in building new, higher-capacity systems is efficiently managing the power required to cool active components.

To break this barrier with Cisco Silicon One, we knew a different approach was required: challenging every assumption and laying the groundwork for systems that could deliver a quantum leap in capacity and power efficiency.

When Cisco released the NCS 6008 in 2014, designing a 10Tbps system required more than 2,300 distinct chips, including 50 NPUs, 50 fabric interfaces, and 1,750 DRAM devices, assembled into 58 pieces of hardware inside a 48RU chassis. If you’re not familiar with RUs, a 48RU chassis stands about as tall as basketball Hall-of-Famer Shaquille O’Neal! And a system this size has large power requirements, consuming roughly 96,630 kWh of electricity per year, about the annual electricity consumption of nine typical U.S. houses in 2018.
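
For a sense of the arithmetic behind that comparison (the per-household average of roughly 10,970 kWh comes from U.S. EIA data for 2018, included here as context rather than quoted from this post):

    HOURS_PER_YEAR = 24 * 365              # 8,760
    annual_kwh = 96_630
    print(annual_kwh / HOURS_PER_YEAR)     # ~11 kW of continuous draw

    # Divided by a typical 2018 U.S. household's ~10,970 kWh/yr, that works
    # out to about nine houses, matching the comparison above.
    print(annual_kwh / 10_970)             # ~8.8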

Now, all of this can be done with one chip, the Cisco Silicon One, reducing the physical size of the system from 48RU to 1RU. Moore’s Law states that the number of transistors on a chip, and thus its density, doubles roughly every two years. The level of advancement with Silicon One outpaces Moore’s Law by an impressive 3x.
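
Here is one way to read that claim (a sketch of my interpretation, assuming a roughly five-year gap between the 2014 NCS 6008 and the recent Silicon One launch; the combined factor is derived, not stated in the post):

    def moores_law_factor(years):
        """Density gain Moore's Law predicts: a doubling every two years."""
        return 2 ** (years / 2)

    # Over ~5 years, Moore's Law alone predicts roughly a 5.7x density gain:
    print(moores_law_factor(5))        # ~5.66

    # Outpacing that by 3x implies an overall gain on the order of
    # 3 * 5.7, or ~17x, over the same window (derived, not stated).
    print(3 * moores_law_factor(5))    # ~17.0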

Environmental Impact

The Cisco 8201, our new 1RU fixed system based on a single Cisco Silicon One Q100 device, provides 10.8Tb/s of network bandwidth while using only 415W of power, a whopping 163x increase in power efficiency over our 100G ASR 9000 systems shipped in 2012. The Cisco 8818 modular routing system, which provides 260Tb/s of network bandwidth, is 86% more power-efficient than the NCS 6008 with 2T line cards, and 89% more power-efficient than the ASR 9000 product family with 8x100G line cards. Our drive for power efficiency doesn’t stop at the silicon architecture: we captured gains across the entire system. For example, we decreased the power required per Gbit of data-plane memory by a staggering 98%.
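
Putting the Cisco 8201 figures on a common bandwidth-per-watt footing (the efficiency of the older systems is back-derived from the 163x claim, not stated directly in the post):

    bandwidth_gbps = 10_800        # Cisco 8201: 10.8 Tb/s
    power_w = 415
    gbps_per_watt = bandwidth_gbps / power_w
    print(gbps_per_watt)           # ~26 Gb/s per watt

    # The 163x claim implies the 2012-era 100G ASR 9000 systems ran at
    # roughly 26 / 163, or ~0.16 Gb/s per watt (derived, not stated).
    print(gbps_per_watt / 163)     # ~0.16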

These new systems also have a positive environmental impact through their reduced transport footprint. Previous systems required 10 pallets of equipment weighing 2,000 lbs. (or 900 kg) with a footprint of 570 ft3 (or 16 m3); now, we ship one box that weighs 32 lbs. (or 14.5 kg) and has a transport footprint of 2.8 ft3 (or 0.07 m3). That’s a 62X reduction in shipping weight and a 202X reduction in shipping volume, a massive reduction in carbon emissions and packaging that magnifies the environmental impact.
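
The reduction factors can be checked against the raw shipping numbers (the small gap on the volume side presumably comes from rounding in the quoted figures):

    print(2000 / 32)    # ~62.5x reduction in shipping weight
    print(570 / 2.8)    # ~204x reduction in shipping volume
                        # (quoted as 202x, likely from unrounded volumes)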


Personally Rewarding

Although just a few months have passed since the launch of the Cisco 8000 series, I can tell you it’s been exciting and rewarding to see some of our major customers already adopt the new systems. This change defines a new era, one where we can expect more gains in power efficiency and, at the same time, growth in system capacity to support the Internet for the future.

Personally, working on the Cisco 8000 and the Cisco Silicon One project has been very fulfilling, as an engineer who loves cutting-edge innovation and as a father who wants to help build a better world for his children and future generations.

This is just the beginning of something truly amazing.



Author

Rakesh Chopra

Cisco SVP & Fellow

Common Hardware Group Architecture and Platforming