I have been involved in a lot of Data Center projects over the years and during the design discussions someone almost invariably observes: “it’s not rocket science. We’re just building a Data Center.”
It turns out there is rocket science in some Data Centers after all.
A handful of server environments now incorporate hydrogen fuel cells, the same technology that helped U.S. spacecraft reach the moon during the Gemini and Apollo missions of the 1960s and is still used in space shuttles today. Data Center industry publications have in recent years reported fuel cells helping power server environments belonging to the First National Bank of Omaha, Fujitsu and Verizon.
Hydrogen fuel cells combine hydrogen and oxygen to create electricity, producing heat and water as byproducts. They typically run on natural gas, which, although not a renewable energy source, emits less carbon, sulfur and nitrogen than other fuels. Probably the best known fuel cell on the market is Bloom Energy’s “Bloom Box,” which was profiled by 60 Minutes in 2010.
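To put rough numbers on that conversion, here is a back-of-the-envelope sketch. The heating value is the textbook figure for hydrogen, and the 50% electrical efficiency is an assumed, illustrative value for a stationary fuel cell, not a spec for the Bloom Box or any product mentioned above:

```python
# Back-of-the-envelope fuel cell output: 2H2 + O2 -> 2H2O + electricity + heat.
# The constants below are approximations for illustration only.
H2_LHV_MJ_PER_KG = 120.0      # lower heating value of hydrogen (approx.)
ELECTRICAL_EFFICIENCY = 0.50  # assumed stack efficiency; rest leaves as heat

def electricity_kwh(h2_kg: float) -> float:
    """Electrical energy produced from h2_kg of hydrogen, in kWh."""
    mj = h2_kg * H2_LHV_MJ_PER_KG * ELECTRICAL_EFFICIENCY
    return mj / 3.6  # 1 kWh = 3.6 MJ

print(f"{electricity_kwh(1.0):.1f} kWh per kg of H2")
```

At these assumed figures, a kilogram of hydrogen yields on the order of 17 kWh of electricity, with the balance available as usable heat.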
So, are we at Cisco using fuel cells in Data Centers? Watch below to see why or why not.
A lot of our employees, especially salespeople, seem to work everywhere except at their desks. Reaching them used to mean making multiple calls to multiple numbers, and leaving messages at each one. And waiting for an important phone call sometimes meant that you were tied to your desk until it came through.
Now, with Single Number Reach (SNR), a feature of Cisco Unified Mobility, I can receive business calls wherever I want to be reached at the moment: at my desk, at home, or on my mobile phone. And if I can’t answer, Cisco Unified Mobility routes all my messages to a single voicemail box. There’s also a Mobility feature that lets me transfer calls from my office phone to my mobile phone, and back again, without anyone on the other end knowing I’ve changed phones. This helps when I pick up an important call at my desk but need to take care of something that takes me away from the desk phone. Sometimes I’ve got to get in the car, and I can use my Bluetooth headset to finish the conversation.
My current SNR profile is configured to route calls to my mobile during normal working hours and to push them to voicemail on weekends. I even have an access control list (ACL) that allows my manager’s calls to pass through to my mobile number at any hour of any day. He respects normal work hours, but we know emergencies happen from time to time, and it is important to be accessible.
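The routing policy just described can be sketched as a small decision function. This is an illustrative model of the logic only; the phone numbers, working hours, and function names are invented and do not reflect Cisco Unified Mobility’s actual configuration interface:

```python
from datetime import datetime

# Hypothetical ACL: callers who ring through to mobile at any time.
ALWAYS_ALLOW = {"+1-555-0100"}  # e.g. my manager's number (invented)

def route_call(caller: str, when: datetime) -> str:
    """Decide where an inbound business call is delivered."""
    if caller in ALWAYS_ALLOW:
        return "mobile"                  # ACL entries bypass the schedule
    is_weekend = when.weekday() >= 5     # Saturday=5, Sunday=6
    in_hours = 8 <= when.hour < 18       # assumed working hours
    if not is_weekend and in_hours:
        return "mobile"
    return "voicemail"

# A weekday call during working hours rings the mobile;
# the same caller on a Saturday goes to voicemail.
print(route_call("+1-555-0199", datetime(2011, 6, 6, 10)))
print(route_call("+1-555-0199", datetime(2011, 6, 4, 10)))
```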
All of these Cisco Unified Mobility features were made available to 80,000 phones in our company by activating them in our eighteen production Unified Communications server clusters around the world. The truly impressive thing about the Cisco Unified Mobility service is that it scales to companies of any size: because Mobility benefits the individual most, the same benefits apply whether you are an 8-person or an 80,000-person company.
From our deployment activity, we learned valuable lessons for our customers about implementation decisions, feature adoption by users, and the resulting business benefits.
In fact, the vast majority of our sales force in Africa, 7 out of 10 employees, now have access to this flagship communication and collaboration tool at their local Cisco office. This means they can meet face to face in a life-size virtual meeting with colleagues, customers and partners across the globe without the need to travel, as if they were sitting in the same meeting room just across the table from one another. So what are we doing for the remaining 30% of employees on the continent who do not have access to this capability?
We’re formally opening a new Data Center today here at Cisco. In light of that, let’s forgo Data Center Deconstructed’s usual video Q&A and spend some time kicking the site’s proverbial tires.
Located in Allen, Texas, the new Data Center is a tier 3 facility with a 38,000 sq. ft. (3,530 sq. m.) hosting area, powered by redundant 10 MW feeds that provide 5.25 MW of capacity for IT.
An overhead view of Cisco's new tier 3 Data Center in Allen, Texas.
I participated in several of the design meetings for the Data Center and am enthusiastic about a lot of the features that have been incorporated into its design. (No surprise, the facility uses all of the green strategies I discussed in Energy Efficiency Makes Two Kinds of Green and then some.) A few of my favorite features:
The active-active configuration. The Allen Data Center is linked to another tier 3 Data Center in Richardson, Texas, so each facility is a primary Data Center that also serves as a secondary facility for the other. Cisco calls the pair a Metro Virtual Data Center – I call it really hard to knock offline. (We like this model so much that we’re planning to build similar pairs in other theaters.)
The server cabinets. As shown in the image below, the Data Center’s cabinets have exhaust chimneys that allow hot air generated by hardware to flow into a plenum space and avoid mixing with incoming chilled air. This helps the cooling system operate more efficiently. (We used a similar design in our Richardson Data Center, too.)
A rotary UPS. If anything in a Data Center’s standby infrastructure is going to fail, it’s the batteries, so I’m happy to dispense with a static UPS at this site. The rotary UPS contains a large spinning flywheel; in the event of a utility power failure, that kinetic energy supplies several seconds of ride-through power, long enough to transfer the Data Center’s electrical load to standby generators.
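The “several seconds of ride-through” claim follows directly from the flywheel’s kinetic energy, E = ½Iω². The sketch below works that arithmetic with invented flywheel parameters; only the 5.25 MW IT load comes from the facility figures above, and none of these numbers describe the actual rotary UPS installed in Allen:

```python
import math

# Hypothetical flywheel parameters -- illustrative only, not the specs
# of the Allen Data Center's actual rotary UPS.
MOMENT_OF_INERTIA = 500.0   # kg*m^2 (assumed)
SPEED_FULL_RPM = 3600.0     # rotational speed at full charge (assumed)
SPEED_MIN_RPM = 1800.0      # minimum speed for usable output (assumed)
IT_LOAD_WATTS = 5.25e6      # the facility's stated 5.25 MW IT capacity

def kinetic_energy(inertia: float, rpm: float) -> float:
    """E = 1/2 * I * omega^2, with omega converted from rpm to rad/s."""
    omega = rpm * 2 * math.pi / 60
    return 0.5 * inertia * omega ** 2

usable_joules = (kinetic_energy(MOMENT_OF_INERTIA, SPEED_FULL_RPM)
                 - kinetic_energy(MOMENT_OF_INERTIA, SPEED_MIN_RPM))
ride_through_s = usable_joules / IT_LOAD_WATTS

print(f"Usable energy: {usable_joules / 1e6:.1f} MJ")
print(f"Ride-through at full IT load: {ride_through_s:.1f} s")
```

With these assumed numbers the flywheel stores roughly 27 MJ of usable energy, about five seconds at full load, which is on the order a rotary UPS needs to cover before generators pick up.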
Enclosed cabinets with vertical exhaust ducts (chimneys) help isolate hot and cold airflow.
These are some of my favorites, but they’re just part of what this Data Center has to offer. For a deeper look, check out the interactive videos and detailed case study about the facility. Happy viewing!
Picking up on my blog last week about how Cisco’s own collaboration solution, our Integrated Workforce Experience or IWE, is delivering tangible business value, I’d like to talk today about another great example at Cisco. This particular case shows how IWE significantly cut email volume.