In my first blog post, I highlighted some of the benefits customers are seeing with Cisco Unified Computing System™ (UCS), drawn from published case studies. In posts two and three, I discussed reductions in cabling and provisioning times in more detail. Today I will drill down on power and cooling.
Why are customers seeing a 52% reduction in their power and cooling costs? Partly through virtualization and reduced overall server counts, but also through a paradigm shift in what constitutes a server solution: the unification of compute, network, storage access, and management. Cisco's Unified Fabric condenses up to three parallel networks into one, reducing the number of I/O interfaces, cables, and switch ports.
For blade servers, instead of a "mini-rack" chassis architecture, Cisco replaced the intra-chassis switches and management modules with Fabric Extenders (FEX), which extend the unified fabric from the chassis to the Fabric Interconnects. A FEX acts as a remote line card, not a switch. Compare this simplicity with a typical competitor chassis configuration: a pair of Ethernet switches, a pair of Fibre Channel switches, and a pair of chassis management modules.
Similarly, Cisco C-Series Rack Servers can have data, storage, and management traffic unified with the Nexus 2200 Series Fabric Extenders and 10GE Virtual Interface Cards (VIC).
VICs are blade and rack server adapters that can be configured into as many as 256 dynamic virtual NICs and HBAs, each operating independently. Compare that to a legacy architecture where a server needs four NIC interfaces for production, one for management, and another for backup. Now add two HBA interfaces for storage. A single VIC can do it all.
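To put that consolidation in concrete terms, here is a quick back-of-the-envelope tally in Python using the legacy interface counts described above. The counts are illustrative, not a formal sizing exercise:

```python
# Physical interfaces in the legacy server configuration described above.
legacy_interfaces = {
    "production NICs": 4,
    "management NIC": 1,
    "backup NIC": 1,
    "storage HBAs": 2,
}

legacy_total = sum(legacy_interfaces.values())
vic_total = 1  # a single VIC presents virtual NICs and HBAs over one adapter

print(f"Legacy physical interfaces per server: {legacy_total}")  # 8
print(f"VIC-based adapters per server: {vic_total}")             # 1
print(f"Interfaces eliminated per server: {legacy_total - vic_total}")  # 7
```

Multiply that difference across hundreds of servers and the reduction in cables, switch ports, and powered components adds up quickly.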
All of this allows UCS to scale more simply and at lower cost than competing architectures.
Here are three customer examples from the original 26 cited in the first blog post.
Microsoft Partner Solutions Center – "Our previous servers were 2U with 128 gigs of RAM, holding around 35 virtual machines [VMs], but using 2.5 watts of power per VM," says Leonard. "In comparison, the UCS blades are half a U [half slot] with 196 gigs of RAM, can hold around 50 virtual machines, and only use 0.85 watts of power per VM."
Essar Group – “The simplified design of the Cisco UCS Servers improves airflow efficiency and can reduce the number of components that need to be powered and cooled by more than 50 percent compared to traditional blade server environments.”
Thales – “Also, in many data centers, a dense jungle of cables hampers ventilation of the server, so air conditioning units use accordingly more energy. You immediately notice the cooling effect of having fewer cables, even without elaborate thermometry.”
Would you like to learn more about how Cisco UCS can help you? There are more than 250 published data center case studies on Cisco.com. There is also a TCO/ROI tool that lets you compare your existing environment to a new UCS solution. For a more in-depth TCO/ROI analysis, contact your Cisco partner.