While WAN optimization has long been available to purchase and deploy from network equipment vendors such as Cisco, Riverbed and others, leading service providers around the world are now offering their customers the benefits of WAN optimization technology as a managed service.

Growth in managed services overall has been strong over the last few years. Nemertes Research data shows that in 2006 only 27% of its surveyed organizations used or planned to use branch managed services; today, 63% of Nemertes' surveyed organizations use or plan to use branch managed services in at least some locations. Another interesting data point: managed services are outpacing IT industry growth, 18% for managed services versus 8% for the IT industry as a whole (source: Ovum). While WAN optimization is relatively early in this cycle, expect adoption of this managed service to grow, especially among small to mid-size businesses, though some analysts, such as NetForecast, see interest among global enterprises as well.
We recently turned up FCoE (Fibre Channel over Ethernet, on the Nexus 5000) in one of our production data centers. Here is an interview with Sidney discussing the results and what we learned during the process.
I recently had the opportunity to sit down with Doug and chat about the process Cisco goes through to forecast and size the capacity of our data centers.
We’ve been working to support a large customer over the last several months in driving up operational efficiency around IT processes. We’re to the point where we’ve helped them benchmark their operational electrical efficiency (using the Green Grid’s PUE and DCiE metrics) and we’ve provided them with a Watts-per-transaction metric. This customer is a retailer, so in this case the transaction model is fairly straightforward: input from the cash register all the way to the back-end storage in one of several regional data centers.

To build the Watts-per-transaction model, all we did was inventory the infrastructure that supports the creation, transmission and storage of each transaction, then normalize it to produce an average Watts-per-transaction figure. That allowed us to take the next step, which was to analyze the infrastructure set at each stage to determine what “energy overhead” was there. No surprise: we found servers vastly underutilized, and deployed in branch environments when they didn’t need to be; we also found extra switch/exchange points and storage that was geographically “siloed.” So in this case we were able to make a simple set of recommendations covering changes in the IT architecture that reduced total Watts per transaction by roughly 12% (to be validated, as we are also implementing energy monitoring post-redesign).

So this is a simple case of looking at operational efficiency at the systems and architectural level. And I can tell you this operation is run well and has simplicity and efficiency as pervasive mindsets in IT. Oh, and they work really well with their facilities department. A Watts-per-transaction model can be infinitely more complex under a different use case.

So, fast forward to what we’re looking at next, which I’m hoping to get some input on. Has anyone developed or seen what they believe to be a telling metric (realistically accurate within 1-3%) that would apply to application energy requirements?
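To make the arithmetic behind that model concrete, here is a minimal sketch of the inventory-and-normalize step. All tier names, device counts and wattages below are illustrative assumptions, not the customer's actual inventory:

```python
# Hypothetical sketch of a Watts-per-transaction calculation: inventory the
# power draw of each infrastructure tier on the transaction path, attribute
# a share of each tier's capacity to that path, then normalize by
# transaction volume. All figures are made up for illustration.

# (tier, device count, average watts per device, share of capacity
#  attributed to the transaction path)
inventory = [
    ("register/POS",    500,  60, 1.00),
    ("branch switch",   120, 150, 0.50),
    ("WAN edge router",  10, 400, 0.30),
    ("app server",       40, 350, 0.60),
    ("storage array",     4, 900, 0.40),
]

transactions_per_hour = 25_000  # assumed aggregate volume

# Total watts attributable to the transaction path
total_watts = sum(count * watts * share for _, count, watts, share in inventory)

# Average energy per transaction, in watt-hours
wh_per_txn = total_watts / transactions_per_hour

print(f"Attributed load: {total_watts:.0f} W")             # 50040 W
print(f"Average energy per transaction: {wh_per_txn:.2f} Wh")  # 2.00 Wh
```

With a baseline like this in hand, the 12% reduction mentioned above is simply the same calculation rerun against the redesigned inventory.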
For example, if we look at a typical web transaction (say, buying a new Klean Kanteen to get off bottled water), there are estimates that the information stream generated by the order hits separate transaction points ~120 times. In many cases these transaction points also hit different infrastructure sets (i.e., spinning up multiple servers, VM or physical, to handle part of the transaction; core, access and storage switching; etc.). When we position a streamlined application delivery model with only ~20 touch points, there is of course an energy benefit.

What I’m having a hard time determining is a normalized Watts-per-application model that can provide an indicative figure across the server, storage and appliance sets and can be correlated to a larger application delivery architecture. The biggest challenge here is not the power required by chips to process transactions; it’s the very high degree of customization we see across different IT architectures, like the old quote: “we want to control the wheel, therefore we reinvent it.”

I have some estimates I would call defensible, but wanted to check with you all to see if anyone has seen compelling work in this space. A second question: would an application delivery model showing energy allocation be of interest if we were to publish it? It of course gets me all giddy to think of the implications…Thanks, happy Greenin’
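The ~120-versus-~20 touch-point comparison can be sketched as a back-of-the-envelope model. The joules-per-touch figures and the touch-point mix below are purely hypothetical assumptions, chosen only to show the shape of the calculation:

```python
# Hypothetical comparison of per-transaction energy for a ~120-touch-point
# delivery path versus a streamlined ~20-touch-point path. The energy cost
# per touch, by infrastructure class, is an illustrative assumption, not a
# measurement.

energy_per_touch = {   # joules per touch (assumed)
    "server": 2.5,
    "switch": 0.4,
    "storage": 1.2,
}

def path_energy(touches):
    """Sum energy over (class, touch count) pairs for one transaction."""
    return sum(energy_per_touch[cls] * n for cls, n in touches)

legacy = [("server", 60), ("switch", 40), ("storage", 20)]       # ~120 touches
streamlined = [("server", 10), ("switch", 6), ("storage", 4)]    # ~20 touches

e_legacy = path_energy(legacy)
e_new = path_energy(streamlined)
saving = 1 - e_new / e_legacy

print(f"Legacy path: {e_legacy:.1f} J/txn, streamlined: {e_new:.1f} J/txn")
print(f"Estimated energy reduction: {saving:.0%}")
```

The hard part, as noted above, is not this arithmetic but choosing per-touch energy figures that survive the customization across different IT architectures.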
One of our goals for this blog is to bring you some different perspectives. As you might have noticed, our first foray is with a couple of new folks from Cisco IT: Doug Alger and Sidney Morgan. Both Doug and Sidney live with the challenges of the evolving data center on a regular basis. Their initial blog posts let each explain what he does in his own words. This should evolve into a weekly feature; if you want follow-ups or would like us to address specific topics, let us know via the comments.

At this point, I would also be remiss if I did not point out our Cisco-on-Cisco website (cisco.com/go/ciscoit). It is a great site that really digs into how we are applying information technology here at Cisco. You can read about IT’s perspectives on technology and business trends, see how we are deploying technologies, and review our best practices: all good info that will help you make more informed decisions about your own organization.