So VMworld Las Vegas is now in the rear-view mirror, and VMworld Copenhagen looms in the distance. Were you there? Did you get a chance to check out our VXI demo? If your calendar was packed and you missed the VXI breakout session on Monday, I’d encourage you to check out the online replay of #SPO3989 (Cisco VXI: Optimized Infrastructure for Scaling View Desktops) once it’s available. We also had some important announcements speaking to our joint innovation with VMware, captured here.
During the show, we also carved out some time to record this roundtable (under the Cisco Daily Blogger Techminute) with Dave Kinsman (WWT) and Chris Gebhart (NetApp), sharing perspectives on VXI, FlexPod, and VMworld 2011 – check it out!
I’m also excited to share these solution guides, which NetApp published this week, covering the scalability of the FlexPod solution in support of VMware View desktops. When you get a chance, be sure to download the NetApp and VMware View 5,000 Seat Performance Report, the 50,000 Seat VMware View Deployment Report, and the NetApp and VMware View Solution Guide.
So how is it that Cisco and its partners can drive this kind of scalable performance for View desktops? There are several key answers to this, but an important part of it lies in the patented differentiation Cisco is delivering with UCS, using a purpose-built-for-virtualization architecture, perfectly suited to scalable high-performance virtual desktops. So, on this note, I want to focus on one such proof point which we talked about extensively this week at VMworld – enabling QoS for virtual desktops.
QoS for Virtual Desktops, or Wild Wild West?
As user workspaces are increasingly consolidated within the virtualized data center infrastructure, IT admins now have to think about governance of the virtual machines (VMs) these desktops reside on, amidst other virtualized workloads. We’ve been saying for a while that desktop workloads have unique needs that ultimately impact end users’ perception of performance, and that’s why so many projects stay mired in Proof-of-Concept land: end users have been less than impressed with the virtualized version of their desktop. So let’s look at an important example of what Cisco and VMware have done to improve server-network I/O performance for virtual desktops.
Let me introduce you to the Cisco Virtual Interface Card (VIC). This mezzanine card, providing up to 8x10GbE ports and designed for use with Cisco UCS B-Series Blade Servers, can enable higher performance and scalability for your virtual desktops, supporting up to 256 PCIe interfaces (dynamically configurable as either vNICs or vHBAs). It supports Cisco VN-Link technology, which provides network visibility to virtual desktops, enabling persistent enforcement of network and security policy with full vMotion support, and offers your IT administrators a familiar and consistent network operations model across both physical and virtual environments. The VIC also supports Cisco Virtual Machine Fabric Extender (VM-FEX) technology, which extends the fabric interconnect ports directly to virtual machines. This “hypervisor bypass” lets virtual desktops access the adapters directly, reducing I/O bottlenecks and improving memory performance. It also simplifies your virtual infrastructure by eliminating the overhead of the hypervisor’s embedded software switch, while providing tight integration between UCS Manager and VMware vCenter Server. If you want a great walk-through of UCS networking, check out these great videos created by Cisco’s Brad Hedlund.
How does all of this increase virtual desktop performance? With the Cisco VIC, you can now implement advanced QoS for virtual desktops and the virtual adapters (vNICs and vHBAs) they’re connected to, leveraging a number of key capabilities. One of these is the ability to assign classes of service that guarantee a minimum amount of bandwidth under congestion, as well as a maximum burst bandwidth. For example, you can take a Virtual Desktop Port Group and assign it a “platinum level” Class of Service (COS) offering 30% minimum bandwidth (under congestion) on a 10GbE link between the server and the fabric interconnect.
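To make the arithmetic behind that example concrete, here’s a minimal back-of-the-envelope sketch of how percentage shares translate into minimum-bandwidth guarantees under congestion. This models the math only, not UCS Manager’s actual scheduler; the class names and share values (other than the platinum 30% from the example above) are purely illustrative.

```python
# Sketch: class-of-service shares -> minimum-bandwidth guarantees under
# congestion. Illustrative arithmetic only, not UCS Manager's scheduler.

LINK_GBPS = 10.0  # 10GbE link between the server and the fabric interconnect

# Hypothetical share (%) assigned to each traffic class; only the 30%
# platinum figure comes from the example in the text above.
classes = {
    "platinum (virtual desktops)": 30,
    "gold (app servers)": 25,
    "silver (vMotion)": 25,
    "bronze (best effort)": 20,
}

def guaranteed_gbps(share_pct, link_gbps=LINK_GBPS):
    """Minimum bandwidth a class is guaranteed when the link is congested."""
    return link_gbps * share_pct / 100.0

for name, pct in classes.items():
    print(f"{name}: {guaranteed_gbps(pct):.1f} Gbps minimum under congestion")
```

So the platinum class’s 30% share works out to a 3 Gbps floor on that 10GbE link; when the link isn’t congested, any class is free to burst above its minimum.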
If you’re not sure why any of this is important (i.e., “why can’t I just let my virtual desktops and other VMs have at it, Wild Wild West style, with no governance?”), think about this: ultimately, the loudest bandwidth hogs will win out over the quieter ones. On a consolidated infrastructure, that can mean virtual desktops sitting amidst web or mission-critical app servers, and it’s not going to be pleasant if any one of these constituencies starves out bandwidth from the others. Now throw in vMotion traffic and the fur really starts to fly! Implementing governance through QoS for the different VMs on your infrastructure helps increase overall performance and ensures your links are saturated – but in a good way. :) Combine this with the SPAN capability of Cisco UCS, which allows you to monitor traffic from an individual virtual desktop, or from vNICs, vHBAs, server, storage, or uplink ports, and you get much greater visibility into performance and its impact on end users.
In a prior blog post, Brad makes a great point about COS marking of VM packets inside the UCS fabric, and its extensibility to the upstream physical network – i.e., the benefits go beyond the compute infrastructure, as the Cisco VIC’s COS service levels can be aligned with the levels your network engineers already use, for example for realtime media traffic. This is increasingly important as media-rich communications converge with virtual desktops. Now the virtual desktop packets that are COS-marked by the VIC are picked up by the extended network infrastructure, which prioritizes them according to the assigned level (ex: COS-5, realtime media, x% minimum bandwidth, etc.).
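If you’re curious where that COS value actually lives on the wire, here’s a small sketch of the 802.1Q tag control information (TCI) field that carries it: 3 bits of priority (PCP, the COS value), 1 DEI bit, and a 12-bit VLAN ID. This just demonstrates the bit layout; it isn’t VIC firmware, and the VLAN ID used is a made-up example.

```python
# Sketch: the 802.1Q tag control information (TCI) field that carries the
# CoS value on the wire: PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits).
# An adapter marks frames with a PCP value; upstream switches read it back
# to pick the matching priority treatment. Values here are illustrative.

def build_tci(pcp: int, dei: int, vlan_id: int) -> int:
    """Pack PCP/DEI/VLAN ID into the 16-bit 802.1Q TCI field."""
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 4095
    return (pcp << 13) | (dei << 12) | vlan_id

def read_pcp(tci: int) -> int:
    """Recover the CoS (PCP) value an upstream switch would act on."""
    return (tci >> 13) & 0x7

# CoS 5 (realtime media) on a hypothetical VLAN 100
tci = build_tci(pcp=5, dei=0, vlan_id=100)
print(f"TCI = 0x{tci:04x}, CoS read back = {read_pcp(tci)}")  # TCI = 0xa064, CoS read back = 5
```

The point of the sketch: the COS value is just three bits in the frame header, which is exactly why a marking applied by the VIC can be honored consistently by every 802.1Q-aware switch upstream.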
So what did you learn at VMworld Las Vegas? What would you like to see us cover in Copenhagen? There are more exciting announcements on the horizon, so stay tuned!