ESG points out that virtual network overlays are important to building out multi-tenant environments like private and hybrid clouds, as well as to overcoming the scalability issues of environments that have traditionally been based on VLANs. As ESG notes, and as Cisco mentioned in its ONE announcement, programmability of the virtual networks is what really separates them from classic overlays based on MPLS or GRE tunnels. The Nexus 1000V will deliver this programmability through SDN APIs, such as its OpenStack integration, on top of the Nexus 1000V Virtual Supervisor Module.
Forrester Research has issued an interesting new report that provides a great deal of market research and insight into the challenges of the data center network supporting large-scale virtualization. The report offers a representative view of the types of obstacles organizations are facing and where they are making new investments, along with some recommended best practices. As usual, the application services infrastructure is one of the biggest challenges, i.e., how to replicate the Layer 4-7 and security services that mission-critical applications require in a highly virtualized or hybrid cloud environment. While servers and networks have largely been virtualized, relying on physical firewalls or application delivery controllers can undermine or limit the benefits of virtualization.
Forrester starts by pointing out what benefits customers are looking for and where they see the greatest growth in virtualization going forward. Over the next four years, Forrester sees 500% growth in total virtual x86 workloads hosted in private cloud IaaS (Infrastructure as a Service), where virtual servers are isolated between tenants, compared to 170% growth in private cloud pools in organizations’ own data centers. However, Forrester points out that overlooking virtual services can “negate private and public cloud investments.” Of their respondents, 33% indicated that they have difficulty integrating public services with internal virtual infrastructures, with 24% specifically citing “frustration with capability, agility and flexibility of traditional application delivery controllers (ADC)” (see the table below).
We’ve talked about this before, but given some of the recent visibility from Microsoft, it is worth mentioning again: our Nexus 1000V offering is integrated with Windows Server 2012 and Hyper-V.
At Microsoft Tech Ed 2012 in Orlando a few days ago, this integration work was demonstrated in the Day 1 keynote. To view the Nexus 1000V in action on Windows Server 2012, go to this link, then scroll down to and select the Tech Ed Day 1 Keynote; the Nexus demo pops up around the 24-minute mark. The Nexus 1000V solution helps deliver highly secure, multitenant services by adding virtualization intelligence to Windows Server 2012 and your data center network.
After our Open Network Environment (Cisco ONE) announcement at Cisco Live!, where we unveiled our strategy for network programmability, Jim Duffy at NetworkWorld published a very interesting article that asks a key question: “What are the killer apps for software-defined networks?” While SDN technology is very exciting and holds a great deal of promise, the answer to that question will ultimately determine how quickly it is adopted and by whom. The consensus is that network virtualization, or virtual network overlays, is one of the early killer apps that software-defined networks can enable (when coupled with other technologies), which is exactly why Cisco made virtual overlays one of the three solution pillars of its ONE announcement. As I mentioned in my TechwiseTV video on virtual overlays, the primary use case for SDN/OpenFlow research in universities is also campus network slicing: creating virtual network partitions so that test and production environments can share a physical network. As noted in Duffy’s article, virtual overlays can be built with or without OpenFlow.
In the aftermath of a major launch, after reading the press and analyst coverage of the news, I always ask what we could have made clearer, what could have been highlighted better, or how we could have made the complexity of some of the details easier to understand. One point that probably could have been clarified is just how “open” the Open Network Environment really is (what’s in a name anyway?). Specifically, regarding our Nexus 1000V virtual overlay framework, there were some comments and questions about how open and interoperable this overlay framework is, especially compared to other vendors touting programmable overlays. One financial analyst firm even stated that our overlay networks had some great advantages but only worked with Cisco switches.
Last week at Cisco Live, Cisco unveiled the Cisco ONE strategy. I won’t go into detail on Cisco ONE in this blog post; there has been plenty of blog and analyst coverage of it elsewhere. One piece of the announcement I would like to talk about is the Nexus 1000V and its move to running on open source hypervisors, along with OpenStack Quantum integration.
Nexus 1000V on KVM With OpenStack: The Cisco Live Demo
At Cisco Live, we demonstrated the Nexus 1000V on KVM with integration into OpenStack. The demo included both the Nexus 1000V Virtual Supervisor Module (VSM) and the Virtual Ethernet Module (VEM). The VSM is a virtual machine running Cisco NX-OS software; for the demo, it ran on a Nexus 1010 physical appliance. The VEM ran on the Linux host itself, which was running Fedora 16. The OpenStack release we demoed was Essex, running Nova, Glance, Keystone, Horizon, and Quantum. We also wrote a Nexus 1000V Quantum plugin that handles the interaction between Quantum and the Nexus 1000V VSM; this is done via a REST API on the VSM.
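To make that interaction a bit more concrete, here is a minimal sketch of the kind of REST client a Quantum plugin could use to talk to the VSM. The URL paths, payload fields, and class name are illustrative assumptions, not the actual plugin code or the VSM’s published API.

```python
# Minimal sketch only: endpoint paths and payload fields are assumptions,
# standing in for the real VSM REST interface.
import requests


class VSMRestClient(object):
    """Thin wrapper around a hypothetical Nexus 1000V VSM REST interface."""

    def __init__(self, host, username, password):
        self.base_url = "https://%s/api" % host   # assumed base path
        self.auth = (username, password)

    def _post(self, path, payload):
        resp = requests.post(self.base_url + path, json=payload,
                             auth=self.auth, verify=False)
        resp.raise_for_status()
        return resp.json()

    def map_network_to_profile(self, tenant_id, network_id, profile_name):
        """Associate a provider network with a port-profile on the VSM."""
        return self._post("/networks", {"tenant": tenant_id,
                                        "network": network_id,
                                        "port-profile": profile_name})

    def attach_vif(self, vif_id, profile_name):
        """Bind a VM's virtual interface (veth) to a port-profile."""
        return self._post("/ports", {"vif": vif_id,
                                     "port-profile": profile_name})
```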
What we demonstrated was the ability for providers to create networks using the standard “nova-manage” CLI in OpenStack. These networks were then mapped to port-profiles on the Nexus 1000V VSM. When a tenant powered up a VM, the VM was placed on the provider network and ultimately had its VIF attached to the port-profile associated with that network. The network administrator, through the VSM, can now see the virtual interfaces attached to veth ports and apply policies to them. We demoed ACLs on the virtual ports to show a Nexus 1000V feature in use with OpenStack. What the demo ultimately showed was the Nexus 1000V operational model, with its separation of the network administrator and server administrator roles, working in an OpenStack deployment.
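In plugin terms, that flow boils down to two steps: remember which port-profile a provider network maps to, and bind each new VIF to that profile. Here is a rough sketch under assumed method names; the real plugin implements the full Quantum plugin interface, which is considerably richer than this.

```python
# Sketch only: method names and the in-memory mapping are illustrative
# assumptions, not the shipping plugin's interface.


class Nexus1000VPluginSketch(object):
    """Maps provider networks to port-profiles and binds VIFs to them."""

    def __init__(self, vsm_client):
        self.vsm = vsm_client          # e.g. the REST client sketched above
        self.network_profiles = {}     # network_id -> port-profile name

    def create_network(self, tenant_id, network_id, profile_name):
        # Provider step: a network created via "nova-manage" is associated
        # with a port-profile on the VSM.
        self.vsm.map_network_to_profile(tenant_id, network_id, profile_name)
        self.network_profiles[network_id] = profile_name

    def plug_interface(self, network_id, vif_id):
        # Tenant step: when a VM boots on that network, its VIF is attached
        # to the network's port-profile, where ACLs and other policy apply.
        profile = self.network_profiles[network_id]
        self.vsm.attach_vif(vif_id, profile)
```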
Where To Go From Here
One thing we are planning to do with our Quantum plugin is to expose the port-profile concept as an extension to the standard Quantum API. This lets profiles be managed by our Quantum plugin while also being exposed to Quantum users through the extension API. One immediate benefit is that a GUI such as Horizon can surface port-profile information in its UI, allowing tenants to select port-profiles to map to virtual interfaces when powering up virtual machines. Effectively, providers could create port-profiles, make them available for their tenants to select at VM power-up, and then control policy on the virtual interfaces on their networks.
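From a consumer’s point of view, that extension might be used roughly as sketched below by a dashboard such as Horizon. The URL path, token header, response shape, and function names are assumptions made for illustration; they are not the actual extension definition.

```python
# Hypothetical consumer of a port-profile extension; everything here
# (paths, headers, response shape) is an assumption for illustration.
import requests


def list_port_profiles(quantum_url, tenant_id, token):
    """Fetch the port-profiles a provider has made visible to a tenant."""
    resp = requests.get(
        "%s/tenants/%s/port-profiles" % (quantum_url, tenant_id),
        headers={"X-Auth-Token": token})
    resp.raise_for_status()
    # Assumed response shape: {"port_profiles": [{"id": ..., "name": ...}]}
    return resp.json()["port_profiles"]


def choose_profile(profiles, preferred_name):
    """Pick a profile by name, falling back to the first one offered."""
    for profile in profiles:
        if profile["name"] == preferred_name:
            return profile
    return profiles[0] if profiles else None
```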
The End Result
The result of integrating the Nexus 1000V with open source hypervisors is the continued evolution of advanced virtual machine networking on these platforms. OpenStack Quantum integration brings the separation of network and server administrator roles into the OpenStack deployment model. Both of these are ultimately about providing more control, visibility, and programmability for customers. I think this is something customers will be excited about, just as we are excited about delivering it to them.