Day 2 – EMC's Chad Sakac and Dave Graham, VMware's Guy Brunsdon, and Cisco's Chris Hoff and Omar Sultan share some of the major events at VMworld 2009
So, since we initially demoed an inter-data center VMotion solution at Cisco Live!, we have been working diligently with VMware to develop and refine the solution. Inter-data center workload mobility has a lot of moving parts. Essentially, you need to be able to address three areas:
- Mobility at layer 2
- Mobility of the data, since there is seldom value in moving the workload if it loses access to the data it needs
- Mobility at layer 3 and of services
The work with VMware to date has focused on formalizing solution requirements, establishing a solution roadmap, and developing a reference architecture. Work has progressed to the point that we can jointly publish a reference architecture for the first phase of the solution, which addresses both layer 2 mobility and data mobility.
The joint solution essentially allows you to cluster data centers that are up to 200km apart and move virtual machines between them as if they were part of the same vSphere cluster. The solution lends itself to workload mobility among data centers, as well as simplifying consolidation, migration, and maintenance. Paired with Storage VMotion or an active/active replication storage model, the Cisco/VMware solution helps customers implement a significantly improved disaster avoidance strategy. Looking at the jointly validated architecture below, one of the cooler things to point out is that, for Cisco customers at least, the solution builds upon the gear they probably already have in place.
For more details, Cisco and VMware have jointly published a white paper that details the solution criteria, the validated design, and the testing results. For those of you who are here at VMworld 2009, check out session TA3105 this Wednesday at 4:30 or simply drop by the booth (#2118) for a demo.
Interesting analogy I saw today in a story from Byte & Switch blogger Frank Berry. He compared the rather spartan initial SUV models (can you say “early Ford Bronco” or “Cash for Clunkers”?) to the recent unified networking (in Cisco-speak, “unified fabric”) offerings: the convergence of traditional data and storage traffic onto a single low-latency, high-performance 10Gb Ethernet pipe. I’m not sure if today’s unified networking offerings are quite as stripped down as a 1980s Bronco, and I’m sure the energy efficiency is much better. In fact, they may be much closer to an early 2000s Toyota Prius…
Pretty much any major trade show is an exercise in barely controlled chaos, and VMworld 2009 is no exception, but all the pieces are falling into place and I am looking forward to a good show next week. Last year, with the announcement of the Cisco Nexus 1000V, we focused on what was possible. This year, with both the Nexus 1000V and Cisco UCS shipping, we will be talking more about what is doable. The common theme across our demos, sessions, and labs is practical solutions you can take home today. So, in terms of the must-see list:
- Register for the Nexus 1000V Self-Paced Lab and get hands-on experience with basic setup and more advanced topics like troubleshooting and security features.
- We have a number of demos in the booth covering UCS, Unified Fabric, Nexus 1000V, Accelerating VMware View, and one that I will be writing more about next week: inter-data center VMotion.
- You should also check out the big honking UCS deployment: 16 racks, 512 blades, humming along. I’ll be writing more about that next week too; pretty impressive stuff.
In addition, we have a strong collection of speaking sessions.
Since I am apparently feeling a bit nostalgic about VMworld and all the frenetic activity we had about this time last year, getting ready for the announcement of the Cisco Nexus 1000V, I caught up with some of the original players who brought our first softswitch to market.
Saravan is a Director of Engineering within the Server Access and Virtualization Business Unit at Cisco and has been leading the Nexus 1000V engineering organization and product strategy from its inception. In addition to Nexus 1000V, Saravan is currently focused on Cisco’s Data Center, Virtualization and Cloud networking solutions.
Michael is a Distinguished Engineer within the Server Access and Virtualization Business Unit at Cisco and was one of the inventors of the original Nexus 1000V concept. His current focus is on Cisco’s efforts related to data center, server virtualization, and cloud computing.
Paul Fazzone is a Senior Manager, Product Management in Cisco’s Server Access and Virtualization Business Unit and one of the original developers of the Nexus 1000V concept. Paul currently manages all of Cisco’s data center access layer software strategy across the Nexus portfolio.
The interview provides some interesting insight into how we moved from customer “asks” to a shipping product:
OS: What was the initial driver behind the Nexus 1000V?

SR: We noticed that the edge of the network was moving from a traditional access layer switch to blade switches with the introduction of blade servers, and, with the introduction of virtualization, it was moving to the virtual switches in the virtualized servers. To provide rich end-to-end networking solutions, we wanted to develop a presence in the new “edge” of the network and hence started working on Swordfish (later renamed the Nexus 1000V).

MS: We originally ran into this problem when discussing security solutions with customers. With current virtualization solutions, traffic can flow between virtual machines without ever touching the physical network. With the network access layer blending into the server, we realized we wouldn’t be able to offer a truly pervasive security solution without having a presence within the hypervisor.

PF: We noticed in 2005/2006 that customers were starting to embrace server virtualization in small pockets for non-production applications. The server teams complained about having to get the network team to trunk VLANs to the ESX hosts. The network teams complained about a lack of visibility and management when troubleshooting a VM that couldn’t be accessed. The security teams were raising red flags because the virtual network infrastructure couldn’t be secured like the physical one. We saw these three items really impacting customers’ ability to virtualize large portions of their server workloads, and we thought a more intelligent and feature-rich software switch implementation could address the problem.
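For readers less familiar with the pain point PF describes, “trunking VLANs to the ESX hosts” meant the network team hand-configuring every server-facing switch port as an 802.1Q trunk carrying each VLAN a VM might need. A rough sketch of what that looked like in Cisco IOS syntax (the interface and VLAN IDs here are purely illustrative, not from any real deployment):

```
interface GigabitEthernet1/0/10
 description Uplink to ESX host vmnic0 (example port)
 switchport mode trunk
 ! VLAN IDs below are illustrative; every new VM network meant
 ! revisiting this allowed list on every uplink port
 switchport trunk allowed vlan 100,200,300
```

Multiply that by every uplink on every host, with no visibility into the vSwitch behind the port, and the friction among the server, network, and security teams becomes easy to see.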
OS: How long did we spend developing the switch?