After months of anticipation and countless hours spent on the delivery, I’m happy to announce a new addition to Cisco’s family. Our newest Data Center has come into the world in Raleigh, North Carolina. It’s 18,500 sq. ft. (1,719 sq. m.) in size and has 2.88 MW of capacity. The parents are tired but otherwise doing fine.
Interesting news for Data Center industry watchers: growth in Data Center energy usage has apparently slowed.
Researcher John Koomey recently studied the issue at the request of The New York Times and determined that from 2005 to 2010 Data Centers worldwide increased energy usage by a little more than half (56 percent) while those in the United States increased usage by about one third (36 percent).
Early in my days as a Data Center manager I attended a series of talks focused on Data Center energy efficiency. The sessions covered everything from hardware chip design to application performance to physical infrastructure.
Even for a beginner, two things were immediately obvious. First, Data Centers consume more energy than other buildings – much more. Second, with so many different components drawing power there are a lot of opportunities to make a server environment more energy efficient.
One presenter, from a manufacturer of Data Center standby electrical systems, mentioned during his talk that electrical components operate more efficiently at higher loads. The closer they are to maximum capacity, the better they perform.
I thought about this for a while and at the conclusion of the session, asked: “If electrical systems operate more efficiently at higher loads, why do operators of Data Centers with redundant electrical infrastructure split the load evenly between the A and B sides? Why not put the entire load on side A and nothing on side B? Wouldn’t that be more energy efficient?”
To my surprise, the question stumped the presenter. Eventually, one of his co-workers in the audience stood up and said they had conducted experiments with that configuration and found that although it was more energy efficient, when a failure occurred on the A side and the full power load (in his words) “came crashing onto the B side,” the components sometimes failed. The redundant electrical infrastructure could reliably handle a sudden jump from 40 percent loaded to 80 percent, but not from zero to 80 percent.
Oh. Enter my third Data Center lesson for the day: energy efficiency is important, but ensuring availability is much more important.
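The trade-off in that anecdote can be sketched numerically. The efficiency curve, capacity, and load figures below are hypothetical illustrations I made up for the sketch, not measurements from any real UPS:

```python
# Hypothetical model of A/B electrical-side loading. All numbers here
# are illustrative assumptions, not data from real equipment.

def ups_efficiency(load_fraction):
    """Toy efficiency curve: components run more efficiently as they
    approach rated capacity (rises from ~80% at light load to 95%)."""
    return 0.95 - 0.15 * (1.0 - load_fraction) ** 2

def side_loss_kw(load_kw, capacity_kw=1000.0):
    """Conversion losses (kW) on one side carrying load_kw."""
    if load_kw == 0:
        return 0.0  # an unloaded side incurs no conversion loss here
    eff = ups_efficiency(load_kw / capacity_kw)
    return load_kw * (1.0 / eff - 1.0)  # input power minus useful output

# 800 kW total IT load, two sides each rated 1000 kW.
even_split = side_loss_kw(400) + side_loss_kw(400)   # 40% / 40%
single_side = side_loss_kw(800) + side_loss_kw(0)    # 80% / 0%

print(f"Even A/B split losses:  {even_split:.1f} kW")
print(f"All-on-A losses:        {single_side:.1f} kW")
```

Under these made-up numbers the single-side configuration does waste less power, just as the presenter’s co-worker reported; the anecdote’s point is that the availability risk of a zero-to-80-percent step load outweighs that saving.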
Speaking of availability and Data Center power, this week’s question explores the use of rotary UPS systems that employ flywheel technology versus traditional battery UPS systems. See below for discussion of the pros and cons of each.
I realized a few years ago that all Data Center challenges can be solved with the application of sufficient money.
Need more computing capability? Buy new hardware. Struggling with hot spots? Purchase supplemental cooling infrastructure. Don’t have enough physical space? Pay to expand the Data Center or lease additional space.
More performance means greater cost, though. Some energy-saving technologies buck that trend when compared to conventional facilities, but in general, the more capability you want from a Data Center, the more it will cost to build and operate.
How long do you expect your electronic gadgets to work for you? Not how long they will last before simply breaking, but how long they will usefully perform the functions you obtained them for?
With technological advances arriving faster and faster nowadays (and older systems therefore becoming obsolete more and more quickly), plus a growing number of devices that must keep pace with other online systems to remain useful, the useful lifespan of our gadgets seems to be shrinking.
A glance around my home office provides a snapshot of how long much of my electronic paraphernalia has been in operation.