
At Cisco Live London 2012, we announced that the Nexus 1000V distributed virtual switch (DVS) architecture will scale to support 10K+ ports across hundreds of servers. This is a multi-fold increase over our current support of 2K ports and 64 servers. What is driving the need to scale? Two reasons: more VMs and broader VM mobility.

The number of VMs is growing by leaps and bounds in data centers and cloud computing environments, which in turn is driving the need to scale virtual switch ports. Depending on who you ask, we have already reached or are about to reach the tipping point where 50% of enterprise workloads are virtualized. In most IT environments today, you get a VM by default for computing needs; running an app on a bare-metal physical server requires special approval. Needless to say, Moore's Law continues to deliver denser multi-core CPUs with extended memory architectures, enabling many more virtual machines to be instantiated on a single physical server. We have seen UCS customers deploy 10-30 VMs per server for production workloads, and 50+ (in some cases 100+) VMs per server for non-production workloads and virtual desktops. Increased adoption of public cloud computing, along with growing private cloud deployments in enterprises, is also rapidly raising the VM count. In addition, customers often assign multiple vNICs per VM, e.g., one vNIC for data traffic, another for management, a third for backup, and so on. Together, these factors are driving up demand for virtual Ethernet (vEth) ports on the Nexus 1000V DVS.
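
As a rough back-of-envelope illustration of how quickly this adds up (the server count and vNICs-per-VM figures below are assumptions for the sake of the example, not numbers from any particular deployment):

```python
# Back-of-envelope estimate of vEth port demand in one DVS domain.
# Only the VMs-per-server density range comes from the post above; the
# server count and vNICs-per-VM values are illustrative assumptions.
servers = 200          # hypothetical server count for one DVS domain
vms_per_server = 20    # mid-range of the 10-30 VMs/server production figure
vnics_per_vm = 3       # e.g. data + management + backup vNICs per VM

veth_ports = servers * vms_per_server * vnics_per_vm
print(veth_ports)      # -> 12000, already well past a 2K-port limit
```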

Broader VM mobility over larger domains is another important consideration in virtualized environments. There are two vectors for VM mobility: broader network diameter and broader server diameter. The first vector, broader network diameter, is being addressed with networking technologies such as Virtual Extensible LAN (VXLAN), Overlay Transport Virtualization (OTV), and FabricPath. These enable VMs within a LAN segment to move across racks, across pods, and even across data centers, beyond traditional L2 boundaries. OTV is already supported on the Nexus 7K, FabricPath on the Nexus 7K and 5K, and VXLAN on the Nexus 1000V (Release 1.5.1). The second vector, broader server diameter, is becoming important as VM density drives bigger LAN segments across more servers, along with the frequent need to move VMs during software and hardware upgrades. In short, customers want the flexibility to move VMs to any data center server where compute capacity is available.
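
For readers curious how VXLAN broadens the network diameter, here is a minimal Python sketch of the 8-byte VXLAN header from the VXLAN specification; the VNI value is an arbitrary example, and this is an illustration rather than any vendor's implementation:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a 24-bit segment ID (VNI)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                               # 'I' flag: a valid VNI is present
    return struct.pack("!II", flags << 24, vni << 8)

# A 24-bit VNI allows ~16M logical segments versus 4094 traditional VLANs,
# and the MAC-in-UDP encapsulation lets a segment stretch across L3 boundaries.
print(len(vxlan_header(5000)), 2 ** 24)        # -> 8 16777216
```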

Recall that a Nexus 1000V virtual switch mirrors a modular physical switch with redundant supervisor cards and multiple fabric or linecards. In the Nexus 1000V, the supervisors are called Virtual Supervisor Modules (VSMs) and the linecards are called Virtual Ethernet Modules (VEMs). VSMs generally run on a separate appliance, the Nexus 1010-X, which is managed by network admins and runs NX-OS. The VEMs run in the server hypervisor. In this way, a single Nexus 1000V runs across dozens or hundreds of servers, and collectively the VEMs provide vEth ports, up to the platform limit, across this server domain.
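
To make the modular-switch analogy concrete, here is a small, purely illustrative Python model of one logical DVS, its VEMs, and the aggregate vEth port count; the class names, methods, and limits are assumptions for the sketch, not the actual NX-OS implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VEM:
    """One Virtual Ethernet Module: the 'linecard' running in a hypervisor host."""
    host: str
    veth_ports: int = 0

@dataclass
class Nexus1000V:
    """The logical DVS whose VEMs span many servers (names and limits illustrative)."""
    max_ports: int = 10_000                        # announced scale target
    vems: List[VEM] = field(default_factory=list)

    def total_ports(self) -> int:
        return sum(v.veth_ports for v in self.vems)

    def add_vem(self, host: str, ports: int) -> None:
        if self.total_ports() + ports > self.max_ports:
            raise RuntimeError("would exceed the DVS port limit")
        self.vems.append(VEM(host, ports))

dvs = Nexus1000V()
for i in range(200):                               # hundreds of servers, one VEM each
    dvs.add_vem(f"esx-host-{i}", ports=40)
print(dvs.total_ports())                           # -> 8000 vEth ports in one logical switch
```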

[Figure: Nexus 1000V architecture]

Hence our announcement at Cisco Live London 2012 to scale the Nexus 1000V architecture to support 10K+ ports. It enables the Nexus 1000V to be presented as a single DVS to a VM management tool, such as VMware vCenter Server, with 10K+ virtual Ethernet ports spanning hundreds of servers. This will provide centralized management of port profiles (for 10K+ vEth ports) and enable VM live migration (e.g., vMotion) across hundreds of servers. It is indeed an exciting time in virtual machine networking, as the quest to innovate and to scale continues on the Nexus 1000V platform. Stay tuned for more details.

Special thanks to Han Yang and Prashant Gandhi for their contributions to this post.


2 Comments.


  1. Very interesting. Will there be a physical device (router, switch) from Cisco apart from N1kv with the ability to terminate a VXLAN?


    • Today the only termination point for a VXLAN is the Nexus 1000V. The technology is new, and we focused on the primary use case for VXLAN, which is cloud and scalable data centers, so the Nexus 1000V as the initial implementation makes sense. As additional use cases emerge and the technology matures, it makes sense to implement it on a broader set of network devices.

