I had a good chat with a customer the other day about our Cisco UCS solution. They are very intrigued and are bringing it in-house to evaluate. As I was wrapping up, someone asked me an interesting question: they see UCS as a great platform for their VMware environment, but what about the rest of their apps? The vast majority of their applications are not virtualized yet. Great question, and luckily we have an answer for that scenario too. It's called Cisco UCS. While there has been a great deal of focus on UCS as the über-platform for server virtualization, the reality is that it is a great platform for your regular workloads as well. Cisco UCS Manager allows you to do bare metal provisioning of each server blade, so you can still take advantage of Cisco UCS for all your workloads, not just your virtualized ones. It does not make a difference whether you are loading a hypervisor and multiple guests on the blade or a single OS and application. In either case, you can take advantage of the UCS stateless computing model, the extended memory architecture, killer performance, and simplified management, cabling, and infrastructure. And as you virtualize more and more of your applications, you can do it in place, on your existing UCS blade servers. Nothing could be simpler.
In the next in our series of mini-interviews with Nexus 1000V customers, we have thoughts from Olivier Parcollet, IT Architect at SETAO. Those of you who went to VMworld Europe might have seen Olivier join Ed Bugnion during his keynote. Thank you again, Olivier, for your time.

What was your overall impression of the Cisco Nexus 1000V distributed virtual switch?

Cisco Nexus 1000V fills the gaps that existed in virtual infrastructures and allows full control of both the physical and the virtual aspects of machines. The Cisco Nexus 1000V allows greater granularity of virtual machines on the network, an overall view of administration of the network, and allows network administrators to take control of the virtual machines. Finally, even though it is still in a beta version, I was impressed by the flawless operation of the Cisco Nexus 1000V.
Our own VP of Data Center Solutions marketing, Doug Gourlay, participated in a panel at the Future in Review conference on 5/21, titled “Today’s Networks Need to Embrace Automation”. Moderated by Infoblox’s Greg Ness, and including panelists Richard Kagan from Infoblox, Mark Thiele from VMware, and Erik Giesa from F5 Networks, the panel looked at the technical, business, and political implications of a move from static to dynamic network infrastructure. The 35-minute discussion is fascinating and well worth the viewing time.
In recent conversations, a couple of customers have asked me why we made the commitment to invest in developing the Cisco Nexus 1000V. Basically, they were wondering why Cisco and VMware would spend the time and resources to create a new product that essentially competes with their existing offerings.

Both Cisco and VMware are heavily committed to the vision of the virtualized data center, but at the same time we have both understood, for a couple of years now, that we needed to address certain practical issues to see the realization of that vision. The percentage of production, virtualized x86 workloads in the typical enterprise environment is generally reported to be in the mid-teens. At the same time, customers express a desire to virtualize more of their workloads, and analysts generally expect the number of virtualized workloads to increase significantly in the next couple of years. The caveat is that we must be able to address some of those aforementioned problems. Typically, customers report problems in three major areas: security and policy enforcement, transparency for management and troubleshooting purposes, and organizational challenges. A recent survey conducted by Network Instruments at Interop reinforced this feedback: 55% of respondents said they were encountering problems deploying virtualization. Of that group, 27% identified problems from a lack of visibility to troubleshoot issues and 21% expressed concerns over enforcing security policy.
One of the goals of Data Center 3.0 is to shift the approach for building data center infrastructure and deploy it in a more targeted and granular fashion, making sure budget is spent more efficiently. One of the more interesting places to do this is data center cooling. Taking a more granular approach can certainly help you eke more life out of your data center if you find yourself running out of cooling capacity. In this podcast, Doug Alger discusses some of the things Cisco IT is doing and considering to map cooling strategies to our different tiers of service.

Something else we see along these lines is customers who provision cooling based on the maximum equipment power rating indicated “on the plate.” This often leads to significantly over-provisioned cooling and sets up a cascade of other problems: 1) wasted budget, 2) the perception that the data center is closer to its cooling capacity than it actually is, and 3) because the cooling units are over-provisioned and not running at their designed load, their cooling efficiency suffers. To help customers plan more effectively, we have a couple of free tools on cisco.com (registration, however, is required) that help you understand actual power consumption. The Data Center Assurance Program (DCAP) Best Practices Tool gives you the tested power draw of the best practices designs discussed in the tool and the design guides. To get more detail on your specific configuration, you can use the Cisco Power Calculator to plug in your exact setup and get a detailed report. In both cases, the typical power draw is significantly less than the “plate rating” indicated by the installed power supplies. As a reminder, while cooling strategies can be designed around these typical usage numbers, electrical service still needs to be provisioned based on the plate rating.
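To make the over-provisioning effect concrete, here is a minimal sketch of the arithmetic involved. The wattage figures are invented for illustration only; they are not Cisco Power Calculator output, and real numbers should come from the tools mentioned above.

```python
# Illustration of cooling over-provisioning when capacity is sized to
# the nameplate ("plate") rating instead of measured typical draw.
# All wattage figures below are hypothetical examples.

def cooling_overprovision(plate_watts, typical_watts):
    """Return the excess cooling capacity in watts, and as a percentage
    of the typical draw, when cooling is sized to the plate rating."""
    excess = plate_watts - typical_watts
    pct = 100.0 * excess / typical_watts
    return excess, pct

# Example rack: 4 chassis, each with a 6000 W plate rating but a
# measured typical draw of 3500 W (made-up numbers).
plate = 4 * 6000      # 24000 W provisioned from the plate rating
typical = 4 * 3500    # 14000 W actually drawn in practice

excess_w, excess_pct = cooling_overprovision(plate, typical)
print(f"Cooling over-provisioned by {excess_w} W ({excess_pct:.0f}%)")
# → Cooling over-provisioned by 10000 W (71%)
```

With these illustrative numbers, sizing to the plate rating reserves roughly 70% more cooling than the equipment actually needs, which is exactly the wasted budget and false capacity perception described above.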