Early in my days as a Data Center manager I attended a series of talks focused on Data Center energy efficiency. The sessions covered everything from hardware chip design to application performance to physical infrastructure.
Even for a beginner, two things were immediately obvious. First, Data Centers consume more energy than other buildings – much more. Second, with so many different components drawing power, there are a lot of opportunities to make a server environment more energy efficient.
One presenter, from a manufacturer of Data Center standby electrical systems, mentioned during his talk that electrical components operate more efficiently at higher loads. The closer they are to maximum capacity, the better they perform.
I thought about this for a while and at the conclusion of the session, asked: “If electrical systems operate more efficiently at higher loads, why do operators of Data Centers with redundant electrical infrastructure split the load evenly between the A and B sides? Why not put the entire load on side A and nothing on side B? Wouldn’t that be more energy efficient?”
To my surprise, the question stumped the presenter. Eventually, one of his co-workers in the audience stood up and said they had conducted experiments with that configuration and found that although it was more energy efficient, when a failure occurred on the A side and the full power load (in his words) “came crashing onto the B side,” the components sometimes failed. The redundant electrical infrastructure could reliably handle a sudden jump from 40 percent loaded to 80 percent, but not from zero to 80 percent.
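The arithmetic behind that anecdote can be sketched out in a few lines. This is purely illustrative – the 80 percent figure mirrors the one in the story, and the helper function is my own invention, not anything from the presenter:

```python
# Illustrative sketch of per-side utilization in a dual-fed A/B design.
# All numbers are expressed as a percent of one side's capacity.

def side_loads(total_load_pct, split_evenly=True):
    """Return (side_a, side_b) utilization for a given total load."""
    if split_evenly:
        return total_load_pct / 2, total_load_pct / 2
    return total_load_pct, 0.0

TOTAL = 80.0  # total IT load, as a percent of one side's capacity

# Even split: each side runs at 40%; if A fails, B jumps from 40% to 80%.
a, b = side_loads(TOTAL, split_evenly=True)
print(f"Even split: A={a:.0f}%, B={b:.0f}%; after A fails, B carries {TOTAL:.0f}%")

# All on A: B sits idle at 0%; if A fails, B jumps from 0% straight to 80%.
a, b = side_loads(TOTAL, split_evenly=False)
print(f"All on A:   A={a:.0f}%, B={b:.0f}%; after A fails, B carries {TOTAL:.0f}%")
```

Either way the surviving side ends up at 80 percent – the difference is the size of the step it has to absorb in an instant, and that step is what the components couldn't reliably handle.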
Oh. Enter my third Data Center lesson for the day: energy efficiency is important, but ensuring availability is much more important.
Speaking of availability and Data Center power, this week’s question explores the use of rotary UPS systems that employ flywheel technology versus traditional battery UPS systems. See below for discussion of the pros and cons of each.
I realized a few years ago that all Data Center challenges can be solved with a sufficient application of money.
Need more computing capability? Buy new hardware. Struggling with hot spots? Purchase supplemental cooling infrastructure. Don’t have enough physical space? Pay to expand the Data Center or lease additional space.
More performance means greater cost, though. Some energy saving technologies buck that trend when compared to conventional facilities, but generally the more capability you want from a Data Center the more it will cost to build and operate.
How long do you expect your electronic gadgets to work for you? Not necessarily how long will devices last before simply breaking, but for what length of time will they usefully perform the functions that you obtained them for?
With technological advances coming faster and faster nowadays – and older systems therefore becoming obsolete more quickly – plus a growing number of devices that have to keep pace with other online systems in order to remain useful, the useful lifespan of our gadgets seems to be shrinking.
A glance around my home office provides a snapshot of how long much of my electronic paraphernalia has been in operation.
Ah, weather – one of life’s multi-purpose tools. Conversation filler (“Quite the weather we’re having.”), alleged indicator of the world’s end, and a source of inspiration for comic book writers to empower heroes and villains alike.
Weather can also be a Data Center’s best friend. Solar energy can be harvested to help generate power, as is happening at Cisco’s Data Center in Allen, Texas. (Look for the 100 kW solar array on the right side of the Data Center’s roof.) Wind energy can be captured as well. Rainwater can even be collected for cooling system usage or to irrigate landscaping.
I must confess, the first time I heard about virtual desktop infrastructure it made me think of a scene from the 1985 movie Brazil. (The movie is old enough that I trust I’m not spoiling anything here. If it’s sitting in your Netflix queue and you don’t want anything revealed, though, skip the next paragraph.)
In the scene Sam Lowry, the movie’s main character, struggles to work at his too-small desk that adjoins a nearby wall. The desk shifts, and begins to retract into said wall, causing Sam to yank mightily on it in hopes of recovering some usable desk space. After a brief tug of war, he discovers the source of the problem.
Fortunately, that’s not how virtual desktop technology truly works.
This week’s Data Center Deconstructed question raises the issue of how to determine the ratio of physical servers to virtual desktop instances. As my meandering thoughts of Brazil indicate, I’m not your go-to guy for such information. Ashok Rajagopalan, a product manager in Cisco’s Server Access Virtualization Technology Group, steps in to address the topic.