By now it’s more than clear that power and cooling are among the top concerns (if not THE top concern) for data center architects and many CIOs today. Simple data points show that power consumption by servers in data centers roughly doubled between 2003 (5.6 million) and 2005 (10.3 million). The power needed for these servers and their associated infrastructure would require five power plants at 1,000 megawatts of output each to support the load. And that was two years ago. Servers are certainly the primary culprit for data center power consumption, and a major area of vendor focus for power reduction. (See the research report by Jonathan Koomey of Lawrence Berkeley National Laboratory: http://enterprise.amd.com/Downloads/svrpwrusecompletefinal.pdf.)

What is not as clear to many data center, storage, and network IT professionals is that nearly *all* aspects of data center infrastructure can be optimized for power consumption, and for lower resulting TCO. Storage and SAN virtualization has grown significantly over the last 2-3 years and can provide major power savings. See Enterprise Strategy Group’s recent research report on the benefits of SAN switch virtualization: http://www.cisco.com/en/US/netsol/ns674/networking_solutions_sub_solution_home.html

Newer still is the idea of virtualized appliances, be they network components, security systems, or other devices. While server virtualization is now widely known and increasingly deployed, very few data center networks today leverage virtualization to get more use out of fixed infrastructure, short of virtual LANs (VLANs) and virtual private networks (VPNs), which are 10+ year old technologies. Today, vendors like Cisco are taking virtualization to new levels across the entire set of infrastructure in the data center: storage switches/directors (MDS), application networking (ACE), and security (the firewall module for the Catalyst).
The benefits of virtualization are real and far-reaching: capex savings, faster and less time-consuming operations, and visibly lower power and cooling. On power and cooling, for example, one Cisco ACE module can easily support 50 individual application instances (the same app for multiple groups, or 50 different apps). The power/cooling advantages of virtualization in that scenario are pretty compelling: 220 watts for an ACE module vs. 363 watts for a standalone load balancer or app switch. Multiply that out over four years of continuous operation and you get 7.7 million watt-hours used vs. 12.7 million. Now multiply that 12.7 million watt-hours across fifty standalone app switches and you’re looking at over 600 million watt-hours, vs. under 10 million for the single virtualized module! The resulting savings for this example is between $330,000 and $500,000+, depending on regional power rates. The same benefits can be seen if you deploy virtualized firewalls or storage switches.

So the next time your data center design team looks at virtualization, think beyond the virtual server, and think about the broader network and infrastructure. Your finance team, and your power company, will definitely notice.
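For readers who want to check the arithmetic, here is a minimal sketch of the comparison above, using only the wattage and instance-count figures quoted in the text and assuming continuous 24x7 operation over four years (leap days ignored):

```python
# Back-of-the-envelope energy comparison: one virtualized ACE module
# consolidating 50 app instances vs. 50 standalone app switches.
# Wattage figures (220 W, 363 W) and the instance count come from the text;
# the 24x7 duty cycle over four years is an assumption.

HOURS_4_YEARS = 24 * 365 * 4  # 35,040 hours

ace_module_watts = 220    # one Cisco ACE module (figure from the text)
standalone_watts = 363    # one standalone load balancer / app switch
app_instances = 50        # instances consolidated onto a single ACE module

# Energy consumed over four years, in watt-hours
ace_wh = ace_module_watts * HOURS_4_YEARS              # ~7.7 million Wh
one_standalone_wh = standalone_watts * HOURS_4_YEARS   # ~12.7 million Wh
fifty_standalone_wh = one_standalone_wh * app_instances  # ~636 million Wh

savings_kwh = (fifty_standalone_wh - ace_wh) / 1000

print(f"One ACE module:        {ace_wh / 1e6:.1f} million Wh")
print(f"50 standalone devices: {fifty_standalone_wh / 1e6:.1f} million Wh")
print(f"Energy saved:          {savings_kwh:,.0f} kWh over four years")
```

Multiplying the kWh saved by a regional electricity rate (plus whatever cooling overhead factor applies in your facility) turns this into the dollar savings cited above.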