
Cisco Virtual Machine Fabric Extender (VM-FEX) and Cisco VIC

What is VM-FEX? VM-FEX is the consolidation of the virtual switch and physical switch into a single management point.

It sounds funny to say, but it amazes me how many people still use standard VMware vSwitches. In the enterprise there are just too many things that can be missed on standard vSwitches, and we need consistency. The need for consistency is obvious when port group names must match identically across hosts or vMotion will fail. The last time I went through the VMware vSphere: Install, Configure, Manage class, we were working on the standard vSwitch configuration, which uses some interesting port group failover-order settings, including overrides. So I zipped through my worksheet and waited for the instructor to ask for answers. After a few other students, I spoke up and proceeded to explain my complex but accurate vSwitch configuration.
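
As an aside, that port-group-name requirement is easy to audit programmatically. Below is a minimal sketch using the pyVmomi library that lists the standard vSwitch port groups on every host, so mismatches stand out before a vMotion fails; the vCenter address and credentials are placeholders.

    # Minimal sketch: audit standard vSwitch port group names across hosts.
    # Requires pyVmomi; the vCenter address and credentials are placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator", pwd="secret")
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    # Collect the standard vSwitch port group names on each host
    names_per_host = {h.name: sorted(pg.spec.name
                                     for pg in h.config.network.portgroup)
                      for h in view.view}

    # Any host whose list differs from the first is a vMotion failure waiting
    reference = next(iter(names_per_host.values()), [])
    for host_name, names in names_per_host.items():
        if names != reference:
            print(host_name, "differs:", names)

    Disconnect(si)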

You remember this diagram from class, right?

And the override settings?

Wow, what a mess we had with all those separate NIC cards in the server. At the time, our data center was using boxes with twelve 1 Gb links per box, plus 2 power cords and 4 FC cables. Blahhh! After the sixth box it just got plain messy.

In that class someone asked: why would I deal with all that mess when I could just put everything on a physical 10 Gb link and separate it logically?

Now, at the time I thought, “This guy is crazy.” Crazy like a FEX! That guy worked for Cisco. Ok, bad puns aside.

Enter Cisco VIC and VM-FEX

Cisco UCS M81KR Virtual Interface Card

VM-FEX and the Cisco VIC can help solve these issues by bypassing the virtual switch within the hypervisor and providing each virtual machine with its own virtual port on the physical network switch. Virtual machine I/O is sent directly to the upstream physical network switch. Yes, the VMware DVS and the Cisco Nexus 1000V do relieve some of these issues, but not all of them.

Now, with a conventional virtual switch, if two virtual machines attached to the same virtual switch need to communicate with each other, the virtual switch performs the Layer 2 switching function directly, without sending the traffic to the physical switch. Traffic between virtual machines on different physical hosts still needs to go through a physical switch.

This caused the inevitable “I need these ports trunked” conversation with the network guys. It meant the ports needed a wide-open configuration to support all the overrides and VLANs running on the same physical NICs: every VLAN had to be trunked to every interface on every VMware ESX host. At the time, my reaction was “so what?” I now realize the impact of larger broadcast domains and the benefits of separating those domains using VLANs.

With VM-FEX, the virtual machine’s identity and positioning information is known to the physical switch, so the network configuration can be precise and specific to the port in question. With the correct VLAN assignment we can remove the need for wide-open trunked physical ports, since every virtual machine interface has its own specific interface on the physical switch.
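
To make the contrast concrete, here is a toy sketch (the VM-to-VLAN mapping is invented for illustration): in the old model every host uplink has to trunk the union of all VLANs any VM might use, while in the VM-FEX model each VM interface gets exactly its own VLAN on its own virtual port.

    # Toy illustration with invented data: VLANs required per uplink,
    # old trunk-everything model vs. VM-FEX per-virtual-port model.
    vms = {
        "web01": {"host": "esx1", "vlan": 10},
        "db01":  {"host": "esx1", "vlan": 20},
        "app01": {"host": "esx2", "vlan": 30},
    }

    # Old model: any VM can vMotion to any host, so every host uplink
    # must trunk the union of every VLAN in the cluster.
    all_vlans = sorted({vm["vlan"] for vm in vms.values()})
    print("Trunk on every host uplink:", all_vlans)   # [10, 20, 30]

    # VM-FEX model: each VM interface is its own virtual port on the
    # physical switch, carrying only that VM's VLAN.
    for name, vm in vms.items():
        print(name, "-> access VLAN", vm["vlan"], "on its own virtual port")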

VM-FEX provides server administrators a deployment method that ensures consistency across their ESX or KVM hosts. It also gives network administrators peace of mind and best-practice convergence for host-based virtualized network implementations.

VM-FEX: Single Virtual-Physical Access Layer

  • Collapse virtual and physical switching into a single access layer
  • VIC is a Virtual Line Card to the UCS Fabric Interconnect
  • Fabric Interconnect maintains all management & configuration
  • Virtual and Physical traffic treated the same

VM-FEX Basics

  • Fabric Extender for VMs
  • Hypervisor vSwitch removed
  • Each VM assigned a PCIe device (see the sketch after this list)
  • Each VM gets a virtual port on the physical switch
  • Collapses virtual and physical switching layers
  • Dramatically reduces network management points by eliminating the per-host vSwitch
  • Virtual and Physical traffic treated the same
  • Host CPU cycles relieved from VM switching
  • I/O Throughput improvements
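
The “PCIe device per VM” point is easiest to see from the hypervisor host itself. The Cisco VIC presents its dynamic vNICs as PCIe functions; on a generic Linux/KVM host the analogous, standards-based view is SR-IOV virtual functions, which this minimal sketch enumerates (it assumes the usual sysfs layout and an SR-IOV-capable adapter):

    # Minimal sketch: list SR-IOV-capable PCI devices on a Linux host and
    # how many virtual functions each can expose. Assumes the standard
    # sysfs layout; run directly on the hypervisor host.
    import os

    PCI_ROOT = "/sys/bus/pci/devices"

    for dev in sorted(os.listdir(PCI_ROOT)):
        total_vfs = os.path.join(PCI_ROOT, dev, "sriov_totalvfs")
        if os.path.isfile(total_vfs):
            with open(total_vfs) as f:
                print(dev, "can expose", f.read().strip(), "virtual functions")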

Two key points to highlight:

  • PCIe Pass-Through or VMDirectPath mode helps increase application performance and consolidation ratios, with 38 percent greater network throughput
  • VM-FEX is supported on Red Hat KVM and VMware ESX hypervisors. Live migration and vMotion are supported (see the KVM pass-through sketch after this list)
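
For the KVM side, the generic way to hand a PCIe function straight to a guest is libvirt hostdev pass-through. The sketch below is exactly that, generic libvirt rather than the Cisco-specific VM-FEX port-profile workflow, and the guest name and PCI address are placeholders:

    # Minimal sketch: attach a PCI function to a running KVM guest via
    # libvirt hostdev pass-through. Generic libvirt, not the Cisco-specific
    # VM-FEX port-profile workflow; guest name and PCI address are placeholders.
    import libvirt

    HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest01")   # hypothetical guest name
    dom.attachDevice(HOSTDEV_XML)        # the guest now owns the PCI function
    conn.close()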

If you want to know more and are going to Cisco Live, don’t hesitate to stop by the Cisco Data Center booth! And follow the hashtag #cldc11 :)



5 Comments.


  1. Hi David,

    do you know if vMotion works together with VMDirectPath, today? To my knowledge, ESX 4.x does not support that combination. There is a whitepaper [1] about VM-FEX stating that this works in ESX 5.0 (“VMware has introduced the support”, past tense) and even explains the details. However, vSphere 5 is yet to be released, and there seems to be no information regarding the VMDirectPath+vMotion topic on the VMware website.

    Regards
    Damian

    [1] http://www.cisco.com/en/US/prod/collateral/modules/ps10277/ps10331/white_paper_c11-618838.html


  2. David Antkowiak

    VM-FEX supports vMotion from ESX 4.0 U1 onwards.

    VMDirectPath (hypervisor bypass) supports vMotion with ESX 5.0.


  3. Hi

    Does VM-FEX help in overcoming the limitation of

    “VMware ESX not supporting virtual machines seeing a Fiber Tape Library”

    by directly assigning a virtual HBA to a VM?

    Thanks
    siddiqu.T


  4. David Antkowiak

    Siddiqu,

    Thanks for the comment. VM-FEX supports VMDirectPath, or Pass-Through Mode, which can be used to present a tape library (if your manufacturer supports it). See VMware KB Article 1010789 (http://kb.vmware.com/kb/1010789) for VMDirectPath device attachment information.

    As for Fiber tape, VM-FEX is not a fabric technology, nor is it integrated with NPIV, if that is what you were thinking. Additionally, NPIV will not support tape libraries attached to VMs at this time.

    KB Article 1016407, “Configuring tape drives and media changers on ESX/ESXi 4.x,” addresses support for connectivity types: http://kb.vmware.com/kb/1016407

    Hope that helps you plan your tape library integration.
    Thanks!
    David


  5. As a network guy, I am really happy to see that the industry has finally realized that software switches on the hypervisor can’t do the job that physical switches do today, at wire speed and with full-blown security features. If x86 could do the Layer 2/3 switching, you can be sure that vendors like Cisco wouldn’t waste their money on ASICs/FPGAs and RISC-based processors.
