Cisco’s data center in Allen, Texas (DC2), was designed to make the best use of the high-density Cisco Unified Computing System and Nexus switches. Cisco’s business requirement for high-density computing, supporting up to five Unified Computing System chassis per rack, essentially quadrupled the per-rack power requirements at Texas DC2 compared with the target loads at our other data centers.
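To put that density in perspective, here is a rough back-of-the-envelope sketch. The per-chassis wattage and the baseline rack load are hypothetical assumptions for illustration, not Cisco IT figures; only the five-chassis-per-rack count comes from the article.

```python
# Rough per-rack power estimate for a high-density UCS rack.
# ASSUMPTION: ~5 kW per fully loaded UCS chassis (hypothetical figure).
KW_PER_CHASSIS = 5.0
CHASSIS_PER_RACK = 5          # up to five chassis per rack (from the article)

rack_kw = KW_PER_CHASSIS * CHASSIS_PER_RACK
print(f"Estimated per-rack load: {rack_kw:.0f} kW")

# ASSUMPTION: ~6 kW conventional rack as a comparison baseline.
BASELINE_KW = 6.0
print(f"Ratio vs. baseline: {rack_kw / BASELINE_KW:.1f}x")
```

Under these assumed numbers, a five-chassis rack lands around 25 kW, roughly four times a conventional rack, which is consistent with the "essentially quadrupled" requirement described above.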
In a high-density compute environment like the one at DC2, increasing cooling efficiency presented Cisco IT with some unique challenges, tradeoffs, and opportunities. After extensive analysis and calculation, the Cisco IT architects chose to build the high-density compute environment entirely with overhead cooling rather than the raised-floor model used at our other data centers. (For more on the drivers behind Cisco’s decision to go with an overhead system, see my video, “Cooling a High-Density Compute Environment,” in the DC2011 interactive.)
In the overhead design at Texas DC2, cold air is blown into the data halls through vertical ducting down the cold aisles. The cold air drops and is drawn into the cabinets to cool the front of the equipment. Except for perforations in the front, the cabinets are fully enclosed, so instead of being blown out the back of the cabinets, the hot air is contained within them, reaching temperatures up to about 120 degrees Fahrenheit. It then rises naturally through a six-foot chimney on top of each cabinet and is vented into a plenum above the service ducts.
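One way to see why containing exhaust air up to about 120 degrees Fahrenheit helps is a standard sensible-heat airflow estimate. The sketch below uses the common standard-air approximation CFM ≈ 3.16 × watts / ΔT(°F); the rack load and supply temperature are hypothetical assumptions, and only the ~120 °F exhaust figure comes from the article.

```python
# Sensible-heat airflow estimate: CFM ≈ 3.16 * watts / delta_T(°F),
# a standard-air approximation for cooling airflow sizing.
# ASSUMPTIONS: per-rack load and supply temperature are hypothetical.
HEAT_LOAD_W = 25_000        # assumed per-rack IT load (hypothetical)
SUPPLY_TEMP_F = 65.0        # assumed cold-aisle supply temperature (hypothetical)
EXHAUST_TEMP_F = 120.0      # ~120 F contained exhaust, per the article

delta_t = EXHAUST_TEMP_F - SUPPLY_TEMP_F
cfm = 3.16 * HEAT_LOAD_W / delta_t
print(f"Airflow needed: ~{cfm:.0f} CFM at a {delta_t:.0f} F temperature rise")
```

The wider the temperature rise the containment allows, the less airflow each rack needs per kilowatt of load; under these assumed numbers, a 55 °F rise needs well under half the airflow of a traditional ~20 °F rise, which is one reason the enclosed-cabinet chimney design can cool a high-density rack efficiently.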
A high-density compute environment was just one of several business considerations that factored into Cisco’s plan for populating the data halls at DC2. (See the video, “Populating the Data Halls,” in the DC2011 interactive for more.) Also, check out how the Cisco Unified Computing System has helped IT with application migration (physical and virtual) to the Allen data center in the DC2011 video, “Migrating Applications to Texas DC2.”