
As server virtualization continues its takeover, increasing attention is being paid to how we connect all those virtual machines as they zoom around the data center. Because server virtualization breaks the one application/one server model, new tools are necessary to facilitate operations and management. Additionally, the fact that workloads are now mobile introduces new challenges.

Over the years, we have released a number of industry firsts for virtual machine networking, including the Nexus 1000V virtual switch for VMware vSphere, OTV to support inter-DC workload mobility, and FabricPath to better support VM-networking in the data center.

There seems to be a lot of confusion out there regarding access layer technologies and standards, so, for this post, I wanted to dig into VM networking and where the related IEEE standards are going. Specifically, I am going to look at our old friend 802.1Q and two emerging standards: 802.1Qbg Edge Virtual Bridging and 802.1Qbh Bridge Port Extension.

There are two basic approaches to VM networking in the access layer. A Virtual Ethernet Bridge (VEB) works as an extension of the hypervisor to provide bridging functionality to the attached VMs. The VEB can be implemented either in software or in hardware (for example, a NIC with embedded functionality). The level of sophistication in the VEB can vary. The Nexus 1000V is an example of a fully realized software-based 802.1Q switch, but some VEB implementations offer only a subset of typical switch functionality: limited feature and 802.1 protocol support, minimal management and monitoring tools, and even an inability to directly learn MAC addresses. One of the chief benefits of the software VEB approach is that it allows simple, flexible, and ubiquitous deployment, since there are no hardware dependencies. Every shipping hypervisor today has some sort of VEB support.
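
To make the VEB's role concrete, here is a minimal sketch (in Python, with a simplified frame model made up for illustration) of what a software VEB fundamentally does: learn source MAC addresses and switch frames locally between attached vNICs and the physical uplink. Real implementations such as the Nexus 1000V add VLANs, policy, and much more.

    class Frame:
        """Simplified Ethernet frame: just source and destination MACs."""
        def __init__(self, src_mac, dst_mac, payload=b""):
            self.src_mac = src_mac
            self.dst_mac = dst_mac
            self.payload = payload

    class SoftwareVEB:
        """Minimal MAC-learning bridge living in the hypervisor."""
        def __init__(self, vm_ports, uplink="uplink"):
            self.uplink = uplink
            self.ports = set(vm_ports) | {uplink}   # vNIC ports plus the physical uplink
            self.mac_table = {}                     # MAC address -> port

        def receive(self, in_port, frame):
            """Return the list of ports a frame arriving on in_port goes out of."""
            self.mac_table[frame.src_mac] = in_port                    # learn the source
            out_port = self.mac_table.get(frame.dst_mac)
            if out_port is None:                                       # unknown unicast
                return sorted(p for p in self.ports if p != in_port)   # flood
            return [] if out_port == in_port else [out_port]           # local switching

With two vNICs attached, a frame from vm1 toward an unknown MAC floods to vm2 and the uplink; once both MACs are learned, vm1-to-vm2 traffic is switched entirely inside the host without ever touching the physical network.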

VM networking can also be handled by an external switch, where the hypervisor, the VEB, or the adapter directs traffic to an external physical switch for forwarding decisions, policy enforcement, multicast replication, and so on. The aforementioned 802.1Qbg and 802.1Qbh standards are concerned with this switch-based approach. As you can see, the approaches are not mutually exclusive--you can use both a VEB and an external switch.
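
Under the external-switch approach, the edge component gives up local switching entirely. Continuing the toy model above (again, an illustrative sketch, not any particular product's behavior), it reduces to a pass-through:

    class PassThroughEdge:
        """Edge component (VEB or adapter) that defers all forwarding decisions upstream."""
        UPLINK = "uplink"

        def __init__(self, vm_macs):
            # The edge still knows which MAC belongs to which local vNIC,
            # but it never switches VM-to-VM traffic itself.
            self.vm_macs = dict(vm_macs)          # MAC address -> vNIC port

        def receive(self, in_port, frame):
            if in_port != self.UPLINK:
                return [self.UPLINK]              # every VM frame goes to the physical switch
            # Frames coming back down from the switch are handed to the owning vNIC.
            port = self.vm_macs.get(frame.dst_mac)
            return [port] if port is not None else []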

IEEE 802.1Qbg Edge Virtual Bridging

802.1Qbg comes in two flavors--tagless and tag-based. With the tagless approach, the VEB forwards all traffic to the external switch, which then applies policy and hairpins the traffic back to the server if appropriate--this is called reflective relay. There is no local switching on the VEB with this approach. The chief benefit of this approach is that it leverages the external switch for switching features and management capabilities. The chief downside is increased link utilization and processing cycles from the hairpinned traffic. There is also additional management and operational overhead, since you now have both the VEB and the upstream switch to manage and coordinate.
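
Here is a rough sketch of what reflective relay changes on the external switch, using the same toy frame/port model as above: a standard 802.1Q bridge never forwards a frame out of the port it arrived on, but a port configured for reflective relay relaxes that rule so VM-to-VM traffic from the same server can be hairpinned back down the same link after policy is applied.

    class ExternalSwitch:
        """MAC-learning switch with optional reflective relay on selected ports."""
        FLOOD = object()                             # sentinel meaning "flood the frame"

        def __init__(self, reflective_ports=()):
            self.mac_table = {}                      # MAC address -> switch port
            self.reflective_ports = set(reflective_ports)

        def receive(self, in_port, frame):
            self.mac_table[frame.src_mac] = in_port
            out_port = self.mac_table.get(frame.dst_mac)
            if out_port is None:
                return self.FLOOD
            if out_port == in_port:
                # Source and destination sit behind the same server uplink.
                # Only a reflective-relay port may hairpin; a normal port drops.
                return in_port if in_port in self.reflective_ports else None
            return out_port

Once the switch has learned that both VMs live behind, say, port eth1, their traffic is sent back out eth1 only if that port is enabled for reflective relay; otherwise the frames are simply dropped.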

Reflective relay can be challenged by certain types of traffic, such as multicast traffic, transparent services such as firewalls and load balancers that depend on promiscuous ports to work properly, and use cases where the upstream bridge cannot determine the MAC address of the end station. To address this, 802.1Qbg specifies optional support for a technology called Multi-Channel, which defines a new use for an existing tag (the S-Tag) to explicitly specify the source and destination of VM traffic. This addresses some of these problems, although it is still inefficient with multicast traffic. Multi-Channel support will likely require new interfaces and switches to implement.
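
The S-Tag itself is the 802.1ad service tag (EtherType 0x88A8), so marking a frame with a channel is just a matter of inserting that tag after the source MAC and carrying the channel number in the VID field. A small sketch (the channel number and addresses are made up for illustration):

    import struct

    STAG_ETHERTYPE = 0x88A8        # 802.1ad service tag

    def add_s_tag(frame: bytes, channel_id: int, priority: int = 0) -> bytes:
        """Insert an S-Tag after the source MAC; the channel rides in the 12-bit VID."""
        if not 0 < channel_id < 4095:
            raise ValueError("channel_id must be a valid 12-bit VID")
        tci = (priority << 13) | channel_id            # PCP(3) | DEI(1) | VID(12)
        tag = struct.pack("!HH", STAG_ETHERTYPE, tci)
        return frame[:12] + tag + frame[12:]           # 6-byte dst MAC + 6-byte src MAC, then tag

    # Example: tag a minimal IPv4 frame for logical channel 101.
    untagged = bytes.fromhex("ffffffffffff" "0050569a0001") + b"\x08\x00" + b"payload"
    tagged = add_s_tag(untagged, channel_id=101)
    assert tagged[12:14] == b"\x88\xa8"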

Currently, pre-standard versions of tagless 802.1Qbg are available.

IEEE 802.1Qbh Bridge Port Extension

802.1Qbh also depends on a tagging scheme and specifies a new device called a port extender. The port extender connects to the upstream switch and, from a management and operations perspective, becomes an extension of that switch, creating a single 802.1Q-compliant switch. With an 802.1Qbh-capable NIC in the server, each VM is connected to a virtual port on the switch/extender environment with access to the full features of the upstream switch. While 802.1Qbh will require new hardware, it results in a simpler, more flexible, and more scalable architecture for VM networking, since it preserves the full functionality of the existing switching environment without introducing new or additional management requirements.
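
Conceptually, the split of responsibilities looks like the sketch below, reusing the Frame model from earlier (field names like source_vport are made up for illustration; this is not the actual 802.1Qbh tag format). The port extender does no lookups of its own: it labels upstream traffic with the virtual port it arrived on and fans downstream traffic out to whatever virtual port the controlling bridge names, while all learning, policy, and replication stay in the upstream switch.

    class PortExtender:
        """Dumb fan-out device: no MAC learning, no policy of its own."""
        def __init__(self, downstream_ports):
            self.downstream_ports = set(downstream_ports)    # e.g. one virtual port per VM

        def to_controlling_bridge(self, in_port, frame):
            # Upstream: just record which virtual port the frame arrived on.
            return {"source_vport": in_port, "frame": frame}

        def from_controlling_bridge(self, tagged):
            # Downstream: deliver to the virtual port the controlling bridge chose.
            vport = tagged["dest_vport"]
            return vport if vport in self.downstream_ports else None

    class ControllingBridge:
        """The upstream switch: all learning and forwarding decisions live here."""
        def __init__(self):
            self.mac_table = {}                              # MAC address -> virtual port

        def forward(self, tagged):
            frame = tagged["frame"]
            self.mac_table[frame.src_mac] = tagged["source_vport"]
            dest_vport = self.mac_table.get(frame.dst_mac)
            if dest_vport is None:
                return None                                  # a real bridge would flood here
            return {"dest_vport": dest_vport, "frame": frame}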

Currently, pre-standard 802.1Qbh, termed VN-Tag, is available in a number of products.

What Cisco Has To Offer

Nexus 1000V
As noted earlier, the Nexus 1000V is a fully compliant 802.1Q switch designed for VMware vSphere. Since it is implemented purely in software, it will run on pretty much anything that supports vSphere. Because it is a real 802.1Q switch, it avoids many of the challenges of tagless 802.1Qbg. If you truly want a standards-compliant option right now, it's your only choice.

Cisco UCS Virtual Interface Card
The VIC offers port extension functionality for the Cisco UCS. The VIC allows VMs to bypass the hypervisor altogether and connect to virtual ports on the upstream switch. This approach provides a consistent switching environment across virtual and physical connections. The VIC also supports VMware’s VMDirectPath, which allows PCIe devices to be mapped directly to a VM so VM I/O can bypass the hypervisor layer and be sent directly to the PCIe device, improving throughput and lowering latency while still supporting vMotion.

Cisco Nexus Switches
Cisco Nexus switches, along with the Nexus 2000 Fabric Extender, provide a pre-standard version of the port extender and controlling bridge functionality described in 802.1Qbh.
