Cisco a few weeks ago opened a new Data Center in Allen, Texas to fanfare that included media coverage and a ribbon-cutting ceremony with Texas Governor Rick Perry.
We’re now opening a Data Center in Research Triangle Park, North Carolina. The facility features water-cooled cabinets, supports up to 25 kW per rack and has sophisticated monitoring and management tools for controlling power and cooling systems. The Data Center can be configured with different levels of redundancy (up to tier 4), has a calculated PUE below 1.25 and is modular, allowing for rapid expansion.
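PUE (Power Usage Effectiveness) is simply the ratio of total facility power to the power that actually reaches IT equipment, so a value near 1.0 means very little energy is lost to cooling and distribution. A minimal sketch of the arithmetic, using made-up figures for illustration (not measurements from this facility):

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt drawn by the facility reaches
    IT gear; real Data Centers spend extra power on cooling, UPS
    losses, lighting, etc.
    """
    return total_facility_kw / it_load_kw

# Hypothetical example: a facility drawing 1,000 kW in total while
# delivering 820 kW to servers, storage and network hardware.
print(round(pue(1000, 820), 2))  # → 1.22, under the 1.25 mark cited above
```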
Oh, and it’s tucked into a 40 ft. long box that can be delivered to your doorstep. That’s right, Cisco’s newest server environment is a containerized Data Center.
The Cisco Containerized Data Center at Cisco's Research Triangle Park campus.
A formal grand opening isn’t scheduled until August, when an in-building Data Center opens its doors on the Research Triangle Park campus as well. In the meantime, you can watch the video below for a sneak peek at how the container was installed, along with further discussion of Data Center container capabilities.
Additional information about the Cisco Containerized Data Center is available at www.cisco.com/go/cdc.
Unless you have been living under a rock for the past few years – and perhaps even then – you have undoubtedly heard someone touting the merits of virtualization and cloud computing. Chief among the advantages are reduced costs and the capability to do more with fewer resources.
Although the terms are often used interchangeably, cloud computing and virtualization aren’t the same thing. Click below for a brief discussion of each.
I have been involved in a lot of Data Center projects over the years and during the design discussions someone almost invariably observes: “it’s not rocket science. We’re just building a Data Center.”
It turns out there is rocket science in some Data Centers after all.
A handful of server environments now incorporate hydrogen fuel cells, the same technology that helped U.S. spacecraft reach the moon during the Gemini and Apollo missions of the 1960s and that still powers space shuttles today. Data Center industry publications have in recent years reported fuel cells helping power server environments belonging to the First National Bank of Omaha, Fujitsu and Verizon.
Hydrogen fuel cells combine hydrogen and oxygen to create electricity, producing heat and water as byproducts. They typically run on natural gas, which, although not a renewable energy source, emits less carbon, sulfur and nitrogen than other fossil fuels. Probably the best-known fuel cell on the market is Bloom Energy’s “Bloom Box,” which was profiled by 60 Minutes in 2010.
So, are we at Cisco using fuel cells in Data Centers? Watch below to see why or why not.
We’re formally opening a new Data Center today here at Cisco. In light of that, let’s forgo Data Center Deconstructed’s usual video Q&A and spend some time kicking the site’s proverbial tires.
Located in Allen, Texas, the new Data Center is a tier 3 facility with a 38,000 sq. ft. (3,530 sq. m.) hosting area, powered by redundant 10 MW feeds that provide 5.25 MW of capacity for IT.
An overhead view of Cisco's new tier 3 Data Center in Allen, Texas.
I participated in several of the design meetings for the Data Center and am enthusiastic about a lot of the features that have been incorporated into its design. (No surprise, the facility uses all of the green strategies I discussed in Energy Efficiency Makes Two Kinds of Green and then some.) A few of my favorite features:
The active-active configuration. The Allen Data Center is linked to another tier 3 Data Center in Richardson, Texas, so each facility is a primary Data Center that also serves as a secondary facility for the other. Cisco calls the pair a Metro Virtual Data Center – I call it really hard to knock offline. (We like this model so much that we’re planning to build similar pairs in other theaters.)
The server cabinets. As shown in the image below, the Data Center’s cabinets have exhaust chimneys that allow hot air generated by hardware to flow into a plenum space and avoid mixing with incoming chilled air. This helps the cooling system operate more efficiently. (We used a similar design in our Richardson Data Center, too.)
A rotary UPS. If anything in a Data Center’s standby infrastructure is going to fail, it’s the batteries, so I’m happy to dispense with a static UPS at this site. The rotary UPS contains a large, spinning flywheel; in the event of a utility power failure, that stored kinetic energy supplies several seconds of ride-through power, long enough to transfer the Data Center’s electrical load to standby generators.
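The flywheel’s ride-through time follows from basic physics: the stored kinetic energy is E = ½Iω², and dividing the usable portion of that energy by the electrical load gives the seconds of coverage. A back-of-the-envelope sketch with entirely hypothetical numbers (not the specifications of the Allen unit):

```python
import math

def flywheel_energy_kj(inertia_kg_m2, rpm):
    """Kinetic energy of a spinning flywheel: E = 1/2 * I * omega^2 (in kJ)."""
    omega = rpm * 2 * math.pi / 60  # convert RPM to rad/s
    return 0.5 * inertia_kg_m2 * omega ** 2 / 1000  # joules -> kilojoules

def ride_through_s(energy_kj, usable_fraction, load_kw):
    """Seconds of ride-through: usable stored energy / electrical load."""
    return energy_kj * usable_fraction / load_kw

# Hypothetical flywheel: 600 kg*m^2 of inertia spinning at 3,000 RPM,
# with half its energy usable before output frequency droops too far,
# carrying a 2 MW critical load.
energy = flywheel_energy_kj(600, 3000)
print(round(ride_through_s(energy, 0.5, 2000), 1))  # → 7.4 seconds
```

A handful of seconds is all that’s needed, since standby generators typically start and accept load within about ten seconds.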
Enclosed cabinets with vertical exhaust ducts (chimneys) help isolate hot and cold airflow.
These are some of my favorites, but they’re just part of what this Data Center has to offer. For a deeper look, check out the interactive videos and detailed case study about the facility. Happy viewing!
Server cabinets typically get no respect when folks try to improve the energy efficiency of their Data Centers. Why would they? Cabinets don’t consume power. They don’t even have moving parts. They’re the second-string of Data Center physical infrastructure, used only so hardware, power strips and patch fields don’t have to sit in a heap on the hosting area floor.
If you’re treating the cabinets in your Data Center like nothing more than shelving units, though, you’re overlooking a useful tool. Choosing the right server cabinets and being strategic about how components are installed within them can optimize airflow, eliminate hot spots and even lower power consumption, because the Data Center’s cooling system doesn’t have to work as hard.
Consider their role in dissipating heat produced by high-performance hardware.