Cisco Blogs

Efficient Applications versus Efficient Infrastructure

October 6, 2008 - 8 Comments

I recently heard an interesting comment to the effect that "the problem with all this Green data center stuff is that you only get so much by improving the IT infrastructure and the facilities that support it. The real gains come in the application architecture." From a purist point of view I would agree that this is correct: infrastructure is designed and deployed to support applications and the services they deliver. However (and forgive me if I sound like a former facilities guy here), in my opinion we need to walk before we can fly.

I've done some calculations, along with some fellow green-collar tech folks at the consortium meetings and data center charrettes we attend, that suggest you can get as much as a 40% operational efficiency gain without even touching the applications. Expressed as a percentage electrical efficiency improvement, we're talking about 1) reducing the total number of power supplies (SMPS), 2) increasing distribution voltage, 3) improving IT asset utilization through virtualization, 4) high-efficiency, close-coupled cooling with air-side economizers, 5) efficient UPS, and so on. Emerson does a good job of summing up this methodology in their Energy Logic approach.

Now consider a data center that is roughly 40% efficient from an electrical standpoint, including cooling measured in Watts (very typical, sadly). If you could move that to 80%, it equates to a difference of roughly $400,000 in annual opex for a 1 MW data center spending $1,000,000 per year on electricity. Is this not compelling?

When we look at application architectures, it becomes evident that there are massive gains to be made, probably larger than those from focusing on the infrastructure alone. For example, consider the information stream generated as the result of a typical web transaction. Most estimates say it will hit infrastructure of one form or another somewhere between 100 and 120 times.
Let's say, for argument's sake, that each time the stream traverses this infrastructure it requires 2 Watts on average (per touch point) to handle it. In the best case we would be talking about 200 Watts (100 touches x 2 Watts). That doesn't sound like much until you consider a use case like, say, eBay.

Now, if we can take that application delivery model and streamline it to, say, 20 touches, we are looking at 40 Watts per web transaction (20 touches x 2 Watts). That is a 160 Watt delta per transaction, perhaps 320 Watts if we count all the supporting facilities' power requirements (primarily cooling). So in the case of energy-efficient application architectures, the scale of the opportunity can be massive.

However, there is no metric or monitoring application that can easily correlate Watts to applications. Since anything we do in a data center requires a business case, focusing only on application architectures today may be philosophically correct, but it is prohibitively difficult to show the value clearly.

Maybe I'm missing something here, but I believe there is a lot of good work we can do by focusing on infrastructure architectures in the near term. That will improve operational efficiency and will also set a good foundation for the planning and analysis of more efficient application architectures later. I think taking an "and" rather than an "or" approach to efficiency across applications and infrastructure is our best bet.

Weigh in here with some thoughts; I'm curious if anybody else is having this "water-cooler debate".

Some additional good thoughts on this topic are on Paul Murphy's blog on ZDNet.
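The two back-of-envelope calculations above can be sketched in a few lines. This is only an illustration of the post's own figures, not a measurement: the $400,000 figure falls out if the efficiency gain is read as percentage points of the annual electricity spend, and the "with cooling" number assumes facilities overhead roughly doubles the IT load, both assumptions of this sketch.

```python
# Back-of-envelope sketch of the post's two calculations.
# All inputs are the post's illustrative numbers, not measured data.

# 1) Facility side: a 1 MW data center spending $1,000,000/year on
#    electricity, with efficiency read as percentage points of spend.
annual_spend = 1_000_000          # USD per year
efficiency_before = 0.40          # "very typical, sadly"
efficiency_after = 0.80
opex_savings = (efficiency_after - efficiency_before) * annual_spend
print(f"Annual opex savings: ${opex_savings:,.0f}")

# 2) Application side: Watts per web transaction as a function of the
#    number of infrastructure touch points the stream traverses.
watts_per_touch = 2               # assumed average per touch point
before_w = 100 * watts_per_touch  # 200 W today
after_w = 20 * watts_per_touch    # 40 W after streamlining
delta_w = before_w - after_w      # 160 W saved per transaction
with_cooling_w = 2 * delta_w      # 320 W counting facilities overhead
print(f"Per-transaction delta: {delta_w} W ({with_cooling_w} W with cooling)")
```

Multiply that per-transaction delta by the transaction volume of a site like eBay and the application-side scale the post describes becomes clear.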



  1. Great article. This encapsulates many issues we currently face. I manage the corporate data center at a large retail company. I'm curious if and when you see these issues impacting non-technology companies? Perhaps a better question is when do you think non-technology companies will start noticing these issues? There is no doubt in my mind that virtualization, server efficiency, depreciation schedules, and storage management practices affect us. But when I raise any of these issues, I tend to get blank stares from my colleagues. Thanks again,

  2. Good post, thanks for sharing. "Zen and the Art of Data Center Greening" (cool title, btw!) brings up a good point not addressed in this post: how does code written for the multi-core world affect the efficiency of application architectures and the ability to scale across different infrastructure groups? Any thoughts from our Cisco base on this? Drop Zen a note at the URL above. Thanks.

  3. Great points Alan, thanks for sharing.

     I agree with your skepticism about "Green Data Centers", as that naming convention is entirely oxymoronic. It also lends itself to a "who's on first" discussion straight out of the gate. This is why you hear us refer to some of the things we can do here as "energy efficient", which is quantifiable and able to be contrasted with other business priorities like SLAs, quality of service, risk, cost, etc. Check out if interested.

     Again, agreeing with you here that there needs to be an end-to-end consideration around really any big change in a DC. So I think one of the best things a vendor can do is to provide the baseline data a user needs to build that end-to-end analysis. Some of our competitors differ in this approach and throw big marketing dollars at high-profile Green marketing using bad or incomplete math.

     Your very last point, Alan, is a great one, and I love the ethanol comparison. Let me give this a shot: with ethanol there are well-understood elements that react chemically and thermally through a given process. The reaction of these elements is well planned and understood and follows a completely standardized and tightly controlled process. So "if I do this" then "I get that".

     Now, looking at data centers, there is very little standardization past the infrastructure system level (by "system" in this case I mean servers, storage, network, and mission-critical facilities). There is even less standardization in application systems. So lack of standardization in hardware and software is our first obstacle. Second, the hard and soft systems change very regularly, blades and video being examples. Last is the human part of it: keeping up with all the interoperability, modeling, planning, and fire drills from a business that changes its requirements frequently and sometimes radically.

     Cisco is a member of The Green Grid, where we discuss these issues regularly. While I can't speak for that group, I can tell you where we've chosen to invest: in developing the skills internally to develop and analyze the use cases, infrastructure, and application architectures at play in data centers, and how we can take a common theme like energy efficiency and improve upon it without hindering the other elements (risk, cost, time, etc.).

     I would close with this analogy to support Alan's comment: "don't go on the 7-pound diet if it involves cutting off your head" = )

  4. Much rationalization has been accomplished within the infrastructure and app layers. Our experience is that rationalizing at the business service and process level further magnifies your returns.

  5. I may be mistaken, but is there anything more after the last sentence? I mean, "For example looking at an information stream that is generated". So you are disagreeing with "The real gains come in the application architecture"? Am I right?

  6. I am basically in agreement with you. See my comment in

  7. I completely agree with you. I have been writing recently about cloud computing and what effect it has on the data centre, both as an early adopter running 'private' clouds and when, eventually, you will be able to move resources into a service provider 'public' cloud. Interestingly, the components to drive sustainability (context: business sustainability, not 'Green-washing' :-)) and efficiency in the data centre (power, cooling and operations) are there. Utility computing, automation and the underlying infrastructure, albeit vendor-fragmented, are quite mature. The real issue I see is application architecture and best use of virtualisation technologies. If you look at large ERP implementations as an example, it's still the dedicated-box(es) mentality, which is over-scoped and under-utilised.

     Nexus is a game changer in the network space, VMware's DCOS looks interesting, and x86 servers are now commodity items. But until application architectures can leverage point-in-time compute or move to more of a scale-up web-services model, I think enterprise will struggle to get the next 30-40% efficiency savings from data centres after consolidation, virtualisation, and floor layout/air flow optimisation.

  8. "Is this not compelling?" Depends. It's definitely an interesting idea worth talking about, but it would only be compelling if:

     1) There was no change in the application performance and user experience. You don't really go into details about the end result of this optimization. Sure, you save money by "improving IT asset utilization through virtualization," but do the apps suffer? Has that been measured? What do the apps look like in the newly efficient DC? If the user experience deteriorates, it's not compelling.

     2) There's not an accompanying increase in management costs. If you save money on hardware changes alone, but then have to pay more in management costs (either with more hardware/software or in headcount), is it really worth it? What if you have to hire more IT staff, who have to drive into an office, which causes more carbon emissions, etc.?

     So it's awesome to shoot for a Green DC with hardware changes, but what's the final benefit calculation? It's cause and effect: decreasing cooling will impact everything from your savings down to the aluminum worker in a factory. Until we have an end-to-end equation (as we do for ethanol now, for example), I'm highly skeptical of the Green DC. :)

     - Alan