Cisco Blogs

Power & Cooling : More Ways to Skin the Cat

April 30, 2007 - 2 Comments

By now it’s more than clear that power and cooling are among the top concerns (if not THE top concern) for data center architects and many CIOs today. Simple data points show that power consumption by servers in data centers doubled from 2003 (5.6 million) to 2005 (10.3 million). The power needed for these servers and their associated infrastructure would require five power plants at 1,000 megawatts of output each to support the load. And that was two years ago. Servers are certainly the primary culprit for data center power consumption, and a major area of vendor focus for power reduction. (See the research report from Jonathan Koomey of Lawrence Berkeley National Lab.)

What is not as clear to many data center, storage, and network IT professionals is that nearly *all* aspects of data center infrastructure can be optimized for power consumption, and for lower resulting TCO. Storage and SAN virtualization has grown significantly over the last 2-3 years and can provide major power savings. (See Enterprise Strategy Group’s recent research report on the benefits of SAN switch virtualization.) Newer yet is the idea of virtualized appliances, be they network components, security systems, or other devices.

While server virtualization is now widely known and increasingly deployed, very few data center networks today leverage virtualization to get more use out of fixed infrastructure, short of virtual LANs (VLANs) and virtual private networks (VPNs), which are 10+-year-old technologies. Today, vendors like Cisco are taking virtualization to new levels across the entire set of infrastructure in the data center: storage switches/directors (MDS), application networking (ACE), and security (the firewall module for Catalyst). The benefits of virtualization are real and far-reaching: capex savings, faster and less time-consuming operations, and visibly lower power and cooling.
For example, on power and cooling: one Cisco ACE module can easily support 50 individual application instances (the same app for multiple groups, or 50 different apps). The power and cooling advantages of virtualization in that scenario are pretty compelling: 220 watts for an ACE module vs. 363 watts for a standalone load balancer or app switch. Multiply that out over four years, and you get 7.7 million watt-hours used vs. 12.7 million. Now multiply 12.7 million by fifty point app switches and you’re looking at over 600 million watt-hours vs. under 10 million! The resulting savings for this example is between $330,000 and $500,000+, depending on regional power rates. The same benefits can be seen if you deploy virtualized firewalls or storage switches.

So the next time your data center design team looks at virtualization, think beyond the virtual server, and think about the broader network and infrastructure. Your finance team, and your power company, will definitely notice.
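The multiplication above can be sketched in a few lines. The only inputs are the figures quoted in the post (220 W for one ACE module, 363 W per standalone app switch, 50 instances, four years of continuous operation); everything else is straightforward arithmetic.

```python
# Back-of-envelope check of the watt-hour figures quoted in the post.
# 220 W (one virtualized ACE module) vs. 363 W (one standalone app switch),
# running continuously for four years.
HOURS_PER_YEAR = 24 * 365
YEARS = 4

ace_wh = 220 * HOURS_PER_YEAR * YEARS           # one ACE module hosting 50 instances
standalone_wh = 363 * HOURS_PER_YEAR * YEARS    # one standalone app switch
fifty_standalone_wh = standalone_wh * 50        # fifty point app switches

print(f"ACE module over 4 years:        {ace_wh:,} Wh")         # ~7.7 million
print(f"One standalone over 4 years:    {standalone_wh:,} Wh")  # ~12.7 million
print(f"Fifty standalones over 4 years: {fifty_standalone_wh:,} Wh")
```

The dollar savings then depend only on your regional rate per kilowatt-hour, which is why the post quotes a range rather than a single number.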



  1. A few years from now, I would guess that virtualization will be the norm. There are so many advantages to doing it, now that the technology is mature enough and machines have enough power for it. Decreased resource usage per virtual machine, the ability to move VMs, the ability to save VMs, etc., make them much more manageable and less costly than the server farms of today.

  2. On the server side, blade systems and other novel designs are showing power improvements. For example, Supermicro has introduced a new system that puts 2 independent servers in a 1U chassis. This "1U Twin" reaps power savings by having both independent systems tap the same power supply. Power supplies are most efficient when run at max capacity. I've not seen the numbers, but I would expect this single power supply to generate less heat than two smaller ones. As a result, you save on cooling costs as well.
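The commenter's point about power-supply efficiency can be illustrated with a quick sketch. The efficiency figures below (75% at light load, 85% near full load) are assumptions chosen only to be typical of the era's PSUs, not Supermicro specifications; the server DC load is likewise hypothetical.

```python
# Hypothetical illustration of the shared-PSU argument. The efficiency
# values (75% lightly loaded, 85% heavily loaded) and the 200 W per-server
# DC load are assumptions, not measured or vendor-published figures.
dc_load_per_server = 200.0  # watts of DC power each server actually draws

# Two separate supplies, each lightly loaded and therefore less efficient.
eff_light = 0.75
waste_two_psus = 2 * (dc_load_per_server / eff_light - dc_load_per_server)

# One shared supply carrying both servers, running nearer its efficient point.
eff_heavy = 0.85
total_load = 2 * dc_load_per_server
waste_shared_psu = total_load / eff_heavy - total_load

print(f"Waste heat, two separate PSUs: {waste_two_psus:.0f} W")
print(f"Waste heat, one shared PSU:    {waste_shared_psu:.0f} W")
```

Under these assumed figures the shared supply roughly halves the waste heat, which is the cooling-cost saving the commenter describes.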