I recently heard an interesting comment to the effect that "the problem with all this Green data center stuff is that you only get so much by improving the IT infrastructure and the facilities that support it. The real gains come in the application architecture." From a purist point of view I would agree this is correct: infrastructure is designed and deployed to support applications and the services they deliver. However (and forgive me if I sound like a former facilities guy here), in my opinion we need to walk before we can fly.

I've done some calculations, along with fellow green-collar tech folks at the consortia and data center charrettes we attend, that suggest you can get as much as a 40% operational efficiency gain without even touching the applications. Expressed as an electrical efficiency improvement, we're talking about: 1) reducing the total number of power supplies (SMPS), 2) increasing distribution voltage, 3) improving IT asset utilization through virtualization, 4) high-efficiency, close-coupled cooling with air-side economizers, 5) efficient UPS, and so on. Emerson does a good job of summing up this methodology in their Energy Logic approach.

Now consider a data center that is roughly 40% efficient from an electrical standpoint (including cooling, measured in Watts), which is sadly very typical. If you could move that to 80%, it equates to a $400,000 difference in opex spend on a 1 MW data center costing $1,000,000 per year for electrical supply. Is this not compelling?

When we look at application architectures, it becomes evident there are massive gains to be made, probably larger than those from focusing on the infrastructure alone. For example, consider the information stream generated as the result of a typical web transaction. Most estimates say it will hit infrastructure of one form or another somewhere between 100 and 120 times.
Let's say, for argument's sake, that each time it traverses this infrastructure it requires 2 Watts on average (per touch point) to handle that stream. We would be talking, best-case scenario, 200 Watts (100 touches x 2 Watts). Doesn't sound like much until you consider a use case like, say, eBay. Now if we can take that application delivery model and streamline it to, say, 20 touches, we are looking at 40 Watts per web transaction (20 touches x 2 Watts). That's a 160 Watt delta per transaction, perhaps 320 Watts if we count all the supporting facilities power requirements (primarily cooling).

So in the case of energy-efficient application architectures, the scale can be massive. However, there is no metric or monitoring application that can easily correlate Watts to applications. Since anything we do in a data center requires a business case, focusing only on application architectures today may be philosophically correct, but it is prohibitively difficult to show the value clearly.

Maybe I'm missing something here, but I believe there is a lot of good work we can do by focusing on infrastructure architectures in the near term. That work will improve operational efficiency and will also set a good foundation for the planning and analysis of more efficient application architectures later. I think taking an "and" rather than an "or" approach to efficiency across applications and infrastructure is our best bet.

Weigh in here with some thoughts; I'm curious if anybody else is having this "water-cooler debate". There are some additional good thoughts on this topic on Paul Murphy's blog on ZDNet.
A few months ago, one of our switching competitors started telling customers that our Catalyst switch was going to be put out to pasture, which, of course, was not even close to reality. Well, now it seems that the new rumor is that the Cisco MDS has its days numbered. Again, this may just be wishful thinking on the part of some folks, but the reality is that Cisco is fully committed to the MDS family. The MDS continues to be a successful and critical part of our Data Center 3.0 strategy, and we will continue to invest in the platform. End of story.
So, I am watching Captain Ron (hey, we all have our guilty pleasures) and it reminded me of a conversation I had with a customer back in Vegas at VMworld. The customer was giving me some friendly grief around the recent NetworkWorld lab test of the Cisco Nexus 7000 and the high availability features of the platform. At this point, you are probably wondering where Captain Ron figures into this…
Microsoft and Cisco have released the public announcement on their joint branch IT solution, Windows Server on WAAS (WoW). As many readers have seen, this solution combines Microsoft Windows Server 2008 (Server Core branch version) with Cisco WAAS WAN optimization, providing a highly flexible, integrated solution for delivering local plus centralized branch IT services.

Part of the announcement is a 20-minute video broadcast, which includes testimonials from Microsoft and Cisco execs, leading customers, global system integrator and service provider partners, and an industry analyst/expert. Lots of interesting thoughts and commentary, several based on hands-on experience with WoW…
Just read a nice write-up here on how the Nexus 1000v will change, or won't change, administrative job roles in a virtualized environment. Kudos to David Davis on a nice blog post. Need to see if I have any Nexus 1000v T-shirts or schwag made yet… maybe a shirt with one guy asking, "Dude, where's your switch?" and another guy looking at the server saying, "It's in there!" I dunno, any better ideas?

One minor correction for David: we embedded NX-OS into the ESX Hypervisor. So we have NX-OS on the Nexus 7000, Nexus 5000, and the MDS 9500, 92xx, and 91xx SAN switches, and now embedded into the ESX Hypervisor with the Nexus 1000v. That probably makes NX-OS one of the most diversely implemented Internetwork Operating Systems ever, and the only Internetwork Operating System that connects LAN, SAN, L3, IPv6, and Virtual Machines. Kinda cool…

From an admin role-change perspective, one design point we had was to meet in the middle. Let the network admin define a series of policies via port profiles. Then let the server admin choose which policy applies to which workload. Then ensure that policy is mobile and consistent as a VM moves from one physical server to another, across racks, across rows, and even across data centers. This lets each admin continue to do what they do today, just do it more effectively, with consistent management tools and infrastructure.

dg
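To make the meet-in-the-middle idea concrete, a Nexus 1000V port profile might look something like the sketch below. The profile name and VLAN number here are hypothetical, purely for illustration; the network admin defines the profile on the switch, and it then shows up in vCenter as a port group the server admin can assign to VMs:

```
! Defined once by the network admin on the Nexus 1000v VSM
port-profile type vethernet WebServers   ! hypothetical profile name
  vmware port-group                      ! exposes the profile to vCenter as a port group
  switchport mode access
  switchport access vlan 110             ! hypothetical VLAN
  no shutdown
  state enabled
```

Because the policy lives in the profile rather than on a physical port, it follows the VM wherever VMotion takes it, which is what keeps the two admin roles from stepping on each other.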