In recent conversations, a couple of customers have asked me why we made the commitment to invest in developing the Cisco Nexus 1000V. Basically, they were wondering why Cisco and VMware would spend the time and resources to create a new product that essentially competes with their existing offerings. Both Cisco and VMware are heavily committed to the vision of the virtualized data center, but we have both understood, for a couple of years now, that certain practical issues must be addressed before that vision can be realized. The percentage of production x86 workloads that are virtualized in the typical enterprise environment is generally reported to be in the mid-teens. At the same time, customers express a desire to virtualize more of their workloads, and analysts generally expect the number of virtualized workloads to increase significantly over the next couple of years, with the caveat that some of those aforementioned problems must be addressed first. Customers typically report problems in three major areas: security and policy enforcement, transparency for management and troubleshooting purposes, and organizational challenges. A recent survey conducted by Network Instruments at Interop reinforced this feedback: 55% of respondents said they were encountering problems deploying virtualization. Of that group, 27% identified a lack of visibility to troubleshoot problems, and 21% expressed concerns over enforcing security policy.
One of the goals of Data Center 3.0 is to shift the approach for building data center infrastructure and deploy it in a more targeted and granular fashion so that budget is spent more efficiently. One of the more interesting places to do this is data center cooling. Taking a more granular approach can certainly help you eke more life out of your data center if you find yourself in a situation where it seems you are running out of cooling capacity. In this podcast, Doug Alger discusses some of the things Cisco IT is doing and considering to map cooling strategies to our different tiers of service. Something else we see along these lines is customers who provision cooling based on the maximum equipment power rating indicated "on the plate." This often leads to significantly over-provisioned cooling and sets up a cascade of other problems: 1) wasted budget, 2) the perception that the data center is closer to its cooling capacity than it actually is, and 3) reduced cooling efficiency, because the cooling units are over-provisioned and not running at their designed load. To help customers plan more effectively, we have a couple of free tools on cisco.com (registration, however, is required) that help you understand actual power consumption. The Data Center Assurance Program (DCAP) Best Practices Tool gives you the tested power draw of the best practices designs discussed in the tool and the design guides. Below is an excerpt. To get more detail on your specific configuration, you can use the Cisco Power Calculator to plug in your configuration and get more detailed information (below is a section of the full report). In both cases, the typical power draw is significantly less than the "plate rating" indicated by the installed power supplies. As a reminder, while cooling strategies can be designed around these typical usage numbers, electrical service still needs to be provisioned based on the plate rating.
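To make the over-provisioning effect concrete, here is a minimal back-of-the-envelope sketch. The wattage figures are made-up illustrative numbers (not output from the DCAP tool or the Cisco Power Calculator), and the `cooling_btu_hr` helper is hypothetical; the only assumed facts are the standard watts-to-BTU/hr conversion and a generic design headroom factor.

```python
# Illustrative sketch: compare cooling sized from the nameplate ("plate") rating
# against cooling sized from typical measured power draw. Numbers are made up.

BTU_PER_WATT_HR = 3.412  # 1 W of IT load dissipates ~3.412 BTU/hr of heat


def cooling_btu_hr(watts: float, headroom: float = 1.2) -> float:
    """Cooling load in BTU/hr for a given power draw, with a design headroom factor."""
    return watts * BTU_PER_WATT_HR * headroom


# Hypothetical chassis: 6,000 W plate rating, but 3,500 W typical measured draw
plate_rating_w = 6000.0
typical_draw_w = 3500.0

plate_cooling = cooling_btu_hr(plate_rating_w)
typical_cooling = cooling_btu_hr(typical_draw_w)
over_provision_pct = (plate_cooling / typical_cooling - 1) * 100

print(f"Cooling sized from plate rating: {plate_cooling:,.0f} BTU/hr")
print(f"Cooling sized from typical draw: {typical_cooling:,.0f} BTU/hr")
print(f"Over-provisioning if plate rating is used: {over_provision_pct:.0f}%")
```

With these sample numbers, sizing from the plate rating provisions roughly 70% more cooling than the equipment typically needs, which illustrates the budget and efficiency cascade described above.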
Sometimes it’s more fun sitting on the panel than watching it from the audience. It certainly provided a unique opportunity to watch a group of experienced IT vendor technologists and marketeers all claim to be unique, and THE SOLUTION, for the 100+ audience members who filled the mid-sized room. Are different WAN optimization vendors’ offerings really different from one another? Can they uniquely solve customers’ growing IT and cost challenges in this still-challenged economic environment and budget year(s)? Or was this session at Interop just “more of the same thing?”
John Manville, VP of IT for the Network and Data Center Services organization, will be on IPTV tomorrow (May 27) at 11am PT discussing the production implementation of the Cisco Unified Computing System in our data centers. Click here for the event.
As I have often noted in my customer briefings, getting a handle on power and cooling issues in the data center depends on the proverbial three-legged stool: energy-efficient facilities, energy-efficient products, and getting educated on designing, implementing, and operating energy-efficient data centers. Of the three, I ultimately think the last one has the greatest impact. As operations teams become more sophisticated in their energy efficiency strategies and get access to better tools to measure and manage energy consumption, companies will start to get ahead of their power and cooling challenges. It is for that reason that organizations such as The Green Grid (of which Cisco is a member) were created. If you have not checked out their website, I highly encourage you to do so--there are a wealth of resources there. Anyway, one of the metrics The Green Grid advocates is Power Usage Effectiveness (PUE), along with its reciprocal, Data Center Efficiency (DCE), as tools to help customers estimate the energy efficiency of their data centers. I highly encourage you to listen to Doug Alger discuss our experiences in conducting a PUE audit at one of our data centers and some of the caveats he notes on the results of the audit. Anyone else running PUE audits or considering them? Would love to hear any thoughts or feedback you might have.
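For readers unfamiliar with the metrics: PUE is total facility power divided by IT equipment power (lower is better, with 1.0 as the ideal), and DCE is simply its reciprocal. A minimal sketch, using made-up power figures rather than anything from our actual audit:

```python
# Minimal sketch of the Green Grid metrics. The kW figures below are
# illustrative only, not real audit data.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw


def dce(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center Efficiency: the reciprocal of PUE, often quoted as a percentage."""
    return it_equipment_kw / total_facility_kw


# Hypothetical data center: 2,000 kW at the utility meter, 1,250 kW reaching IT gear
print(f"PUE: {pue(2000, 1250):.2f}")  # lower is better; 1.0 is the ideal
print(f"DCE: {dce(2000, 1250):.1%}")  # higher is better
```

In this example the facility spends 750 kW on overhead (cooling, power distribution, lighting), giving a PUE of 1.6; driving that overhead down is exactly what an audit like Doug's is meant to inform.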