Cisco Blogs

Application Delivery Metrics

November 10, 2008 - 4 Comments

We’ve been working to support a large customer over the last several months in driving up operational efficiency around IT processes. We’re to the point where we’ve helped them benchmark their operational electrical efficiency (using the Green Grid’s PUE and DCiE), and we’ve provided them with a Watts per transaction metric. This customer is a retailer, so in this case the transaction model is fairly straightforward: input from the cash register all the way to back-end storage in one of several regional data centers.

To build the Watts per transaction model, all we did was inventory the infrastructure that supports the creation, transmission and storage of each transaction, and normalize it to provide an average Watts per transaction metric. This allowed us to take the next step, which was to analyze each infrastructure set at each stage to determine what “energy overhead” was there. No surprise: we found servers vastly underutilized, and in branch environments where they didn’t need to be; we found extra switch/exchange points; and we found storage that was geographically “siloed.” So in this case we were able to make a simple set of recommendations covering changes in the IT architecture that reduced the total Watts per transaction by roughly 12% (to be validated, as we are also implementing energy monitoring post-redesign).

So this is a simple case of looking at systems- and architectural-level operational efficiency. And I can tell you this operation is run well and has simplicity and efficiency as pervasive mindsets in IT. Oh, that and they work really well with their facilities department. A Watts per transaction model can be infinitely more complex under a different use case. So, fast forward to what we’re looking at next, which I’m hoping to get some input on: has anyone developed or seen what they believe to be a telling (realistically accurate within 1-3%) metric that would apply to application energy requirements?
For example, if we look at a typical web transaction (say, buying your new Klean Kanteen to get off bottled water), there are estimates that say the information stream generated by this order hits separate transaction points ~120 times. In many cases these transaction points also hit different infrastructure sets (i.e. spinning up multiple servers, VM or physical, to handle part of the transaction; core, access and storage switching; etc.). When we position a streamlined application delivery model with only ~20 touch points, there is of course an energy benefit to that.

What I’m having a hard time determining is a normalized Watts per application model that can provide an indicative figure across the server, storage and appliance sets, and that can be correlated to a larger application delivery architecture. The biggest challenge here is not the power required by chips to process transactions; it’s the very high degree of customization we see across different IT architectures. Like the old quote: “we want to control the wheel, therefore we reinvent it.”

I have some estimates I would call defensible, but I wanted to check with you all to see if anyone has seen any work in this space that is compelling. A second question: would an application delivery model showing energy allocation be of interest if we were to publish? It of course gets me all giddy to think of the implications… Thanks, happy Greenin’
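To make that roll-up concrete, here’s a minimal sketch of the kind of model I’m describing. The device labels, wattages, allocation fractions, and daily transaction volume below are hypothetical placeholders, not the customer’s actual figures:

```python
# Hypothetical inventory along one transaction's path:
# (label, nameplate watts, fraction of draw attributed to this workload)
inventory = [
    ("POS register",         220.0, 1.00),
    ("branch server",        400.0, 0.50),
    ("branch switch/router", 160.0, 0.25),
    ("DC 1U server farm", 160000.0, 0.90),  # e.g. 400 units x 400 W
]

DERATE = 0.30                # assume actual draw is ~70% of nameplate
DAILY_TRANSACTIONS = 500000  # hypothetical total across this path

def watts_per_transaction(inventory, daily_txns, derate=DERATE):
    """Sum the derated, workload-allocated draw of every device on the
    path, then spread it over the daily transaction count."""
    total_watts = sum(w * (1 - derate) * share for _, w, share in inventory)
    return total_watts / daily_txns

print(round(watts_per_transaction(inventory, DAILY_TRANSACTIONS), 3))
```

The only real inputs are an inventory of the devices on the transaction path and a defensible allocation percentage for each; everything after that is arithmetic.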



  1. Hi Sean,

     Good point of clarification: how do you represent a normalized (averaged over time) kW/hr per transaction? What we've done with this large retailer is, in essence, a lot of percentage work across different infrastructure sets. For example, at one of their many branch locations we have an IBM POS (desktop) cash register consuming about 220 Watts per hour (nameplate), which we first normalize by removing 30% of the nameplate = 154 W/hr (we apply this to all infra), and then further normalize by dividing the 154 W/hrs by the average number of total transactions that POS unit performs (720 transactions per day, roughly 1 every 2 minutes). This is our first data set.

     We then trace the transaction path all the way to storage in the data center using the same basic methodology. The next transfer point is the branch back office, which has 2 x 400 W/hr servers and a small all-in-one switch/router, 1 x 160 W/hr. We then jump to the front-end core of their data center, as they don't own the service provider infrastructure. Across the data center we of course have multi-use and highly modular equipment, which makes it more complex, but basically the same formula applies with one new variable: the estimated percentage of chip allocation required to move a single transaction. For example, they have a volume (1U) server farm (400 units) that primarily handles this transaction load, so we took an estimate to say that 90% of the power required for these servers goes to the transactions. We took the same agreed-upon percentage approach for core, access and storage switching, and even went so far as to break out the LUNs in storage.

     What we were left with was a rationalized Watts per transaction (averaged out across 24 hours) number.
     We then went through the same process with new approaches to the network architecture, which showed big gains in a before-and-after scenario. Forgive me for not being more specific with figures, but there is only so much I can share.

     To summarize: if you have a use-case model (in this case a common retail transaction) and can inventory the infrastructure along the transaction's path, you get a general sense of what energy is needed to make that transaction happen across the architecture. This is your benchmark by which you can analyze potential improvements.

     Your point on the temporal is spot on; a Watts/hr per transaction is truly telling if you want to be exact. However, for the sake of this exercise it was enough to average out the Watts over a 24-hour period and divide by total daily transactions.

     Sadly, I don't have more reading on the metric, as I don't believe it exists (at least not at the industry level). I'm hoping that maybe some folks out there have done some work here and we can compare notes on the best way to approach a wattage measurement for the logical elements (apps, transactions, VMs, VLANs/SANs, etc.). I fear consensus on something like this will be difficult at best, as the number of variables here is mind-boggling if one really wants to be exact. If the industry had this, I tend to think IT bill-back for power becomes much more prevalent, and business-unit bill-back in turn.

     Thanks for the comment, Sean!
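     A minimal sketch of the normalization steps described above. The POS figures are the ones given; the data-center wattage and daily volume are hypothetical placeholders, since the thread doesn't give them:

```python
def per_transaction(nameplate_w, txns_per_day, derate=0.30, allocation=1.0):
    """The normalization as described: derate the nameplate draw, apply
    the chip-allocation share, then spread over daily transactions."""
    return nameplate_w * (1 - derate) * allocation / txns_per_day

# Branch POS: 220 W nameplate, 720 transactions/day -> ~0.21 per transaction
pos = per_transaction(220, 720)

# Data-center 1U farm at 90% allocation; the per-unit wattage (400 W) and
# daily transaction volume here are hypothetical.
farm = per_transaction(400 * 400, 2_000_000, allocation=0.90)
print(round(pos, 3), round(farm, 3))
```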

  2. Hi Rob,

     Thank you for sharing this information.

  3. Hi Rob,

     Hope I'm not coming off as a smartass here, but a Watt is a rate, so it is already divided by time (1 Watt = 1 Joule/second). So above, when you're dividing the percentage of nameplate draw by an hour, you're really supposed to be multiplying by an hour. Or, put another way, you're billed in kW*h, not kW/h or even kW. Your numbers end up being correct; the units are just off, and I'd hate to see the work get dismissed by a customer for what is otherwise a small error.

     Back to more practical matters, I think virtualization will be the catalyst for more accurate measurements of power draw on shared infrastructure, since all the CPU time is usually recorded. Shouldn't be that much of a leap to use all that information to figure out the relative use of power within a virtual switch, router, server, or widget.

     You're doing some good work here -- power is a huge operating cost, but all the details are hidden (for the moment). I'll be keeping an eye out for more research on the topic.

     Sean
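     The unit fix in code form, reusing the derated 154 W POS figure and 720 transactions/day from the reply above (a sketch, not billing-grade math):

```python
# A Watt is already a rate (joules per second), so energy is power
# MULTIPLIED by time, never divided by it.
avg_draw_w = 154.0     # derated POS draw from the reply above
hours = 24.0
txns_per_day = 720

energy_wh_per_day = avg_draw_w * hours         # 3696 Wh/day, what the bill reflects
wh_per_txn = energy_wh_per_day / txns_per_day  # ~5.13 Wh per transaction
print(round(wh_per_txn, 2))
```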

  4. Maybe I'm just being pedantic, but are you measuring "watt-hours per transaction" or "watts per transaction"? If the latter, can you explain what the final number really represents? Other than being able to compare the W/T number at two points in time, I'm struggling to see how it can be used. Do you have some pointers to more reading on this metric?

     Thanks,
     Sean