Data Center Virtualization R Us
Allan Leinwand had a provocative blog post on gigaom.com the other day about the imminent arrival of a blade server for the Nexus. Doug ably explained why we would not do such a thing, so I am not going to revisit that, but I do want to explore the question Allan posed at the end of his entry: who do you think can provide those resources more effectively: a blade server manufacturer using virtualization with networking added to the system, or a data networking manufacturer adding blade servers and virtualization?
Not wanting to let Allan have all the fun, let me take a run at answering this question of who is better positioned to support data center virtualization and, by extension, server virtualization. If you think about it, data center virtualization is about pulling down walls and blowing up silos: it's about giving an application the most appropriate resources at a given moment based on the priorities and needs of the business. Underlying this is the idea of making data center infrastructure appear as homogeneous pools of resources: you can supply processor cycles or storage to your application without being overly concerned about the specifics of the underlying physical asset.

Cisco's DNA is rooted in the concept of delivering ubiquitous access to data center resources. With an AGS+ in my data center I could access a collection of disparate hosts without concerning myself with how those particular systems connected to the network. A few years later, our SNA integration strategy (DLSw, anyone?) opened up the value companies had locked up in their mainframes to anyone with an IP stack. Fast forward to today, and one of the benefits of a unified fabric is providing ubiquitous SAN access to any server. See the common theme here? Expect that trend to continue as we execute on Data Center 3.0.

Contrast this with the blade environment, which is, by nature, proprietary and closed. Whatever innovation is delivered, no matter how cool, is walled inside a vendor-specific, form-factor-specific silo. Multiple blade vendors? Too bad, you are locked in. Application requirements dictate rack servers, tower servers, or mainframes? Sorry, can't help you. You end up in an architectural cattle chute, or you end up trying to manage disparate virtualization schemes across your data center. Neither seems all that appealing to me.
Finally, when it comes to virtualization experience, the leading blade server vendors today are not delivering server virtualization; VMware, Xen, and Microsoft are. We, on the other hand, are delivering network resource virtualization across the portfolio: VDCs in the Nexus, VLANs across the Catalyst family, VSANs in the MDS family, L4-7 service virtualization in products like ACE and the Firewall Services Module, VBS switch virtualization in our next-gen blade switches; heck, even the new ASR is running KVM.

Granted, I am a little biased these days (although I did start out life as a VAX and System V sysadmin), but getting back to Allan's original question: while we are not quite there yet, I think Cisco has the shorter distance to travel here... but I am guessing there are some of you out there who might have a different viewpoint? 😉
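To make the VDC point a little more concrete, here is a rough sketch of what carving a Nexus 7000 into a virtual device context looks like from the CLI. This is illustrative only: the VDC name and interface range are made up, and exact command syntax varies by NX-OS release, so check the configuration guide for your platform.

```
! From the default VDC on a Nexus 7000 (hypothetical example)
! Create a new VDC and hand it a block of physical ports
vdc Prod-A id 2
  allocate interface Ethernet2/1-8
exit

! Jump into the new context; it behaves like a separate switch
! with its own configuration, processes, and administrators
switchto vdc Prod-A
```

The design point being made in the post is that each VDC presents as an independent switch to its operators, which is the same "pool of resources behind a uniform interface" idea applied to the network itself.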