Efficient Applications versus Efficient Infrastructure
I recently heard an interesting comment to the effect that “the problem with all this Green data center stuff is that you only get so much by improving the IT infrastructure and the facilities that support it. The real gains come in the application architecture.” From a purist point of view I would agree that this is correct: infrastructure is designed and deployed to support applications and the services they deliver. However (and forgive me if I sound like a former facilities guy here), we need to walk before we can fly, in my opinion.

I’ve done some calculations, along with fellow green-collar tech folks at the consortium meetings and data center charrettes we attend, that suggest you can get as much as a 40% operative efficiency gain without even touching the applications. Expressed as an electrical efficiency improvement, we’re talking about: 1) reducing the total number of power supplies (SMPS); 2) increasing distribution voltage; 3) improving IT asset utilization through virtualization; 4) high-efficiency, close-coupled cooling with air-side economizers; 5) efficient UPS systems; and so on. Emerson does a good job of summing up this methodology in their Energy Logic approach.

Now consider a data center that is roughly 40% efficient from an electrical standpoint (sadly, very typical), with everything including cooling measured in Watts. If you could move that to 80%, the difference equates to roughly $400,000 per year in opex for a 1 MW data center spending $1,000,000 per year on electrical supply. Is this not compelling?

When we look at application architectures, it becomes evident that there are massive gains to be made, probably larger than what comes from focusing on the infrastructure alone. Consider, for example, the information stream generated by a typical web transaction: most estimates say it will hit infrastructure of one form or another somewhere between 100 and 120 times.
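The opex comparison above can be sketched as a quick calculation. This is only an illustration under one reading of the numbers: it assumes “efficiency” means the fraction of the annual electricity spend that does useful IT work, with the remainder lost to power conversion, distribution, and cooling; the $1,000,000 bill is from the post.

```python
# Sketch of the opex arithmetic, assuming "efficiency" is the fraction of
# the annual electricity spend that reaches useful IT work; the rest is
# lost in power conversion, distribution, and cooling.

ANNUAL_POWER_BILL = 1_000_000  # $/year for a ~1 MW data center (from the post)

def wasted_spend(efficiency: float, bill: float = ANNUAL_POWER_BILL) -> float:
    """Dollars per year lost to inefficiency at a given electrical efficiency."""
    return (1.0 - efficiency) * bill

savings = wasted_spend(0.40) - wasted_spend(0.80)
print(f"Annual opex recovered by going 40% -> 80% efficient: ${savings:,.0f}")
# -> Annual opex recovered by going 40% -> 80% efficient: $400,000
```

Under this model, a 40%-efficient facility wastes $600,000 of the bill and an 80%-efficient one wastes $200,000, which is where the $400,000 delta comes from.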
Let’s say, for argument’s sake, that each time the stream traverses this infrastructure it requires 2 Watts on average (per touch point) to handle. Best case, we would be talking about 200 Watts (100 touches x 2 Watts). That doesn’t sound like much until you consider a use case like, say, eBay.

Now, if we can streamline that application delivery model to, say, 20 touches, we are looking at 40 Watts per web transaction (20 touches x 2 Watts). That’s a 160 Watt delta per transaction, or perhaps 320 Watts if we count all the supporting facilities power requirements (primarily cooling).

So in the case of energy-efficient application architectures, the scale can be massive. However, there is no metric or monitoring application that can easily correlate Watts to applications. Since anything we do in a data center requires a business case, focusing only on application architectures today may be philosophically correct, but it is prohibitively difficult to show the value clearly.

Maybe I’m missing something here, but I believe there is a lot of good work we can do in the near term by focusing on infrastructure architectures. That will improve operative efficiency and also set a good foundation for the planning and analysis of more efficient application architectures later. I think taking an “and” rather than an “or” approach to efficiency across applications and infrastructure is our best bet.

Weigh in here with some thoughts; I’m curious whether anybody else is having this “water-cooler debate”. There are some additional good thoughts on this topic on Paul Murphy’s blog on ZDNet.
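The touch-point math can be sketched the same way. The 2 Watts per touch and the touch counts are the post’s illustrative figures, not measurements, and the assumption that facilities overhead (primarily cooling) roughly doubles the IT draw is likewise a rough rule of thumb:

```python
# Sketch of the per-transaction touch-point math. All figures are
# illustrative assumptions, not measured values.

WATTS_PER_TOUCH = 2.0    # assumed average power cost per infrastructure touch
FACILITIES_FACTOR = 2.0  # assume cooling etc. roughly doubles the IT draw

def transaction_watts(touches: int, include_facilities: bool = False) -> float:
    """Approximate Watts consumed handling one web transaction."""
    watts = touches * WATTS_PER_TOUCH
    return watts * FACILITIES_FACTOR if include_facilities else watts

before = transaction_watts(100)  # 200 W for a 100-touch delivery path
after = transaction_watts(20)    # 40 W after streamlining to 20 touches
delta = before - after           # 160 W saved per transaction (IT only)
delta_total = transaction_watts(100, True) - transaction_watts(20, True)  # ~320 W
print(before, after, delta, delta_total)
# -> 200.0 40.0 160.0 320.0
```

Multiply that per-transaction delta by the transaction volume of a site like eBay and the aggregate savings dwarf what infrastructure tuning alone can deliver.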