Cisco Live is here, and the news and awards around the Nexus 1000V cloud networking and services platform just keep rolling in. Last week we announced the Citrix NetScaler 1000V virtual application delivery controller, which will be sold by Cisco. The Microsoft Hyper-V version of the Nexus 1000V was officially released this month, and just like its VMware vSphere companion, there is a free version users can download and deploy. On the heels of its release, the Nexus 1000V for Microsoft Hyper-V won Best of Show in the virtualization category at Microsoft’s TechEd conference.
Awards just kept piling up for the broader portfolio as Nexus 1000V InterCloud, our secure hybrid cloud connectivity solution, won a Best of Interop award in Tokyo this month. And just to show the marketing team is pulling its weight, a Nexus 1000V InterCloud video won a prestigious Silver Communicator award in the Online Video/B2B category.
There’s lots of new stuff to talk about at Cisco Live as well. If you recall, in February I discussed enhancements Cisco was making in the Nexus 1000V portfolio to eliminate VXLAN’s requirement for IP Multicast. Those enhancements are now shipping in the new Nexus 1000V release 2.2 (full code string 4.2(1)SV2(2.1)). This new version also supports virtual switching for up to 128 hosts and 4,096 virtual ports for greater scalability.
The Enhanced VXLAN shown at Cisco Live London in February converts the IP Multicast traffic into multiple known unicast packets using head-end replication. While that may not sound like a major difference, it removes the requirement for an IP Multicast-enabled network core, and the replication overhead turns out to be quite minimal. Because of the increased number of available VXLANs (up to 16 million), there are typically many more segments, each with fewer endpoints, so head-end replication amounts to only a few copies per frame. Also, with VM density per server increasing, dozens of VMs on a particular VXLAN may end up on only a couple of servers, further reducing the number of replicated copies sent to the VXLAN termination points.
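To make the head-end replication idea concrete, here is a minimal Python sketch of the concept (hypothetical code, not the Nexus 1000V implementation): instead of sending one packet to an IP multicast group, the sending VTEP makes a unicast copy of a broadcast/unknown-unicast frame for each known remote VTEP on that segment.

```python
# Conceptual sketch of head-end replication for VXLAN BUM traffic.
# Hypothetical code; this is not the Nexus 1000V implementation.
import socket

VXLAN_PORT = 4789  # IANA-assigned VXLAN UDP port

def vxlan_encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Build a minimal 8-byte VXLAN header (flags + 24-bit VNI) around the L2 frame."""
    header = bytes([0x08, 0, 0, 0]) + vni.to_bytes(3, "big") + bytes([0])
    return header + inner_frame

def send_bum_frame(vni: int, inner_frame: bytes, remote_vteps: list[str]) -> None:
    """Head-end replication: one unicast copy per remote VTEP on this segment,
    instead of a single packet to an IP multicast group."""
    packet = vxlan_encapsulate(vni, inner_frame)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for vtep_ip in remote_vteps:
            sock.sendto(packet, (vtep_ip, VXLAN_PORT))

# With only a few VTEPs per segment, the replication cost stays small.
send_bum_frame(5001, b"\x00" * 64, ["10.1.1.10", "10.1.2.10"])
```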
This week in Orlando we are demonstrating another mechanism to scale VXLAN. At the Cisco Live Data Center booth, we are showing a Border Gateway Protocol (BGP) control plane running across two separate Nexus 1000V Virtual Supervisor Module (VSM) high-availability pairs to scale out Enhanced VXLAN even further (a conceptual sketch of the approach follows the draft list below). This Enhanced VXLAN solution, which needs no IP multicast in the physical network, now leverages the same control-plane techniques that have helped scale the Internet. As we have done in the past, we are contributing the specifications for this VXLAN enhancement to the IETF so that the VXLAN community can have a common design for interoperating without multicast. The specific submissions are:
- EVPN over MPLS core – http://tools.ietf.org/html/draft-raggarwa-sajassi-l2vpn-evpn-04
- Network Virtualization Overlay (NVO – VXLAN/NVGRE) over EVPN – http://tools.ietf.org/html/draft-sajassi-nvo3-evpn-overlay-01
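To illustrate what a BGP-based control plane buys you, here is a small conceptual Python sketch (hypothetical code, not the VSM implementation or a real BGP speaker): rather than flood-and-learn over multicast, each VTEP advertises its locally learned MAC-to-VTEP bindings to its peers, which install them directly into their forwarding tables, in the spirit of EVPN route distribution.

```python
# Conceptual sketch of control-plane MAC learning (EVPN-style), not flood-and-learn.
# Hypothetical classes; this is not the Nexus 1000V / VSM implementation.
from dataclasses import dataclass, field

@dataclass
class MacRoute:
    vni: int      # VXLAN segment ID
    mac: str      # MAC address of the VM behind the advertising VTEP
    vtep_ip: str  # tunnel endpoint that can reach this MAC

@dataclass
class VtepSpeaker:
    vtep_ip: str
    peers: list["VtepSpeaker"] = field(default_factory=list)
    # Forwarding table: (vni, mac) -> remote VTEP IP
    table: dict[tuple[int, str], str] = field(default_factory=dict)

    def learn_local(self, vni: int, mac: str) -> None:
        """A VM came up locally: advertise its MAC binding to all peers."""
        route = MacRoute(vni, mac, self.vtep_ip)
        for peer in self.peers:
            peer.receive(route)

    def receive(self, route: MacRoute) -> None:
        """Install the advertised binding, so no flooding is needed to find this MAC."""
        self.table[(route.vni, route.mac)] = route.vtep_ip

# Two VTEPs peering with each other
a, b = VtepSpeaker("10.1.1.10"), VtepSpeaker("10.1.2.10")
a.peers.append(b)
b.peers.append(a)
a.learn_local(5001, "00:50:56:aa:bb:cc")
assert b.table[(5001, "00:50:56:aa:bb:cc")] == "10.1.1.10"
```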
In addition, we are showing several other demos and near-term Nexus 1000V product innovations at the show, including:
- Nexus 1000V on vSphere integrated with UCS Director (a.k.a. Cloupia)
- Nexus 1000V on Hyper-V integrated with Microsoft System Center Virtual Machine Manager
- Nexus 1000V on KVM hypervisor integrated with OpenStack Grizzly
- Nexus 1000V InterCloud: Secure hybrid cloud
- Nexus 1000V with vPath chaining Virtual Security Gateway, Imperva WAF, and Citrix NetScaler 1000V
UCS Director has always orchestrated compute, storage, and the physical network. The newest demo we are showing is UCS Director automating the creation of a tenant on classic and Enhanced VXLAN with Nexus 1000V, defining port profiles for the applications, deploying the VMs onto two different pods (VXLANs), and showing the communication between the VMs. Hence, UCS Director can now orchestrate physical AND virtual infrastructures. Coordination of these steps is handled between UCS Director, the Nexus 1000V Virtual Supervisor Module, and vCenter (see diagram).
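As a rough illustration of what such a workflow automates, here is a hypothetical Python sketch of the orchestration steps; the client classes and method names are invented for illustration and are not the actual UCS Director, VSM, or vCenter APIs.

```python
# Hypothetical orchestration sketch of the demo workflow; the client classes and
# method names are illustrative only, not actual UCS Director / VSM / vCenter APIs.

class StubClient:
    """Stand-in for a management endpoint: just records and prints the calls made."""
    def __init__(self, name):
        self.name = name
        self.calls = []

    def __getattr__(self, op):
        def call(*args, **kwargs):
            self.calls.append((op, args, kwargs))
            print(f"{self.name}: {op} {kwargs or args}")
        return call

def provision_tenant_segment(ucsd, vsm, vcenter, tenant, vni):
    """Automate tenant creation on an Enhanced VXLAN segment with Nexus 1000V."""
    ucsd.create_tenant(tenant)                                    # tenant in UCS Director
    vsm.create_vxlan_segment(name=f"{tenant}-seg", vni=vni)       # VXLAN segment on the VSM
    vsm.create_port_profile(name=f"{tenant}-app",                 # port profile for the apps
                            segment=f"{tenant}-seg")
    for pod in ("pod-1", "pod-2"):                                # VMs on two pods, same segment
        vcenter.deploy_vm(name=f"{tenant}-{pod}", host_pool=pod,
                          port_profile=f"{tenant}-app")
    ucsd.run_connectivity_check(tenant)                           # verify VM-to-VM reachability

provision_tenant_segment(StubClient("ucsd"), StubClient("vsm"), StubClient("vcenter"),
                         tenant="acme", vni=6001)
```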
And here’s the screen capture of a sample UCS Director workflow for provisioning a VXLAN segment on a Nexus 1000V network:
It should be an exciting show, and an exciting summer. As always, make sure to keep up with Nexus 1000V innovations and information through our Nexus 1000V community page (http://cisco.com/go/1000vcommunity) where we continue to host great webinars on all the above innovations and more.
Where is the Layer 3 VXLAN interface for each segment, and is that part of the config automated/orchestrated by UCS Director as well?
Thanks,
Jason
Jason,
I hope I understand your question, but VXLAN doesn’t care where the Layer 3 interface is in the physical network. You can think of it as moving the Layer 3 interface to the VXLAN tunnel termination points, which reside in the virtual switch. The physical network is unaware of the VXLAN overlay. And the setup of the VXLAN and the coordination with the edge switches where the VXLAN tunnels are terminated are automated through UCS Director. Hope that helps. – GK
Okay, let me clarify. I wasn’t talking about VTEP IP.
Where is the user default gateway (SVI) within the VXLAN you are describing, and is that being orchestrated via UCS Director?
The reason I’m clarifying is this statement: “The newest demo we are showing is UCS Director automating the creation of a tenant … deploying the VMs onto two different pods (VXLANs) and showing the communication between the VMs.”
Communication between the VMs means there is Layer 3 somewhere, because they are on different VXLAN segments. Curious if you are creating the tenant SVI with UCS Director or configuring the core switch manually (in this example)?
Thanks,
Jason
Jason,
Sorry for the delayed response, but Cisco Live is crazy this year. And I think I probably did confuse things. In the demo the VMs are on the same VXLAN, not two different VXLANs. When I said the communication was between two different Pods (VXLANs), I should have said two different VLANs (but the same VXLAN). Being on the same VXLAN obviates the need to do anything in the physical network to facilitate communication over a Layer 3 hop. As such, everything is done from UCS Director as described.
In UCS Director, you can potentially add an L3 gateway virtual device as part of the workflow for northbound connectivity of your workload in the VXLAN segment.
Hi,
We work a lot with OpenStack and would love to test / pilot the “Nexus 1000V on KVM hypervisor integrated with OpenStack Grizzly”!
Thanks!
Alvaro Pereira
Alvaro, thanks for your interest in our Nexus 1000V on KVM. Please work through your account team, and they’ll get you in touch with the Nexus 1000V team as we get closer to beta.