
Could the Converged Network Adapter be the most important element in a Unified Fabric?

Many vendors are touting the benefits of IEEE DCB, FCoE, CEE, DCE, Unified Fabrics, and many other marketing monikers for equipment consolidation. Each component of the technology is interesting, but perhaps none more so than the Converged Network Adapter itself. Why so? Let's take a look.

 

1) Servers need access to network and storage. In a legacy environment, access to these resources is provided by separate network interface cards (NICs) and host bus adapters (HBAs). I will refer to these as fabric connections. The more fabric connections required, the more adapter ports are required. This applies to both bare-metal and virtualized servers. Adapter ports come as multiple single-port cards or as multi-port adapters, which means you need to buy a server with the appropriate PCI configuration, both in the number of slots and in slot bandwidth. Now you have built yourself a server that resembles a physical octopus (or worse).

 

Each adapter has firmware to be tested and certified. These are generally protocol specific (Ethernet, Fibre Channel, InfiniBand, etc.) and are usually managed by separate teams.

 

Ports need to be connected to have any value. That means cabling, patch equipment, and fabric switching equipment, each of which requires more support, power, and management (which in turn consumes more ports on the management network). And do not forget the two SFP transceivers required for every fiber cable run. A rough tally is sketched below.
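
To make the octopus concrete, here is a quick back-of-the-envelope tally in Python. The per-server counts are illustrative assumptions (4 Ethernet ports and 2 Fibre Channel ports per server, fiber cabling throughout), not figures from the post:

```python
# Back-of-the-envelope tally of fabric plumbing for a legacy server row.
# Per-server port counts are illustrative assumptions.

SERVERS = 40                # servers in the row
NIC_PORTS_PER_SERVER = 4    # Ethernet (data, backup, management, vMotion)
HBA_PORTS_PER_SERVER = 2    # Fibre Channel (dual fabric)

ports = SERVERS * (NIC_PORTS_PER_SERVER + HBA_PORTS_PER_SERVER)
cables = ports              # one cable per adapter port
sfps = 2 * cables           # a transceiver at each end of every fiber run
switch_ports = cables       # every cable lands on a fabric switch port

print(f"adapter ports: {ports}")         # 240
print(f"cables:        {cables}")        # 240
print(f"SFPs:          {sfps}")          # 480
print(f"switch ports:  {switch_ports}")  # 240
```

And none of that yet counts the extra management-network ports each switch consumes.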

 

Why haven't I mentioned bandwidth yet? I have rarely met a customer who told me they were overrunning the bandwidth of the adapter. I know, someone is going to say, "Yeah, but what about high-performance computing and InfiniBand?" There are corner cases for everything. However, low-latency 10GE is addressing a large number of these HPC applications.

 

2)   Is the physical octopus caused by virtualization? 

 

Not really, but virtualization certainly contributes to the issue. This is recognized by the PCI-SIG and its initiatives around Single Root I/O Virtualization (SR-IOV) and Multi-Root I/O Virtualization (MR-IOV).

 

The PCI-SIG recognized that building out the server physical octopus does not scale: servers that are too large due to slot requirements, hypervisors forcing separate switches for traffic isolation (for no good reason), happy cabling vendors, happy switch-port vendors, and a power and real-estate bill that is through the roof. The SR-IOV specification provides for physical and virtual functions on the adapter; each virtual function presents itself as a full adapter to the loaded OS.
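
For a feel of what SR-IOV looks like in practice, here is a minimal sketch against the standard Linux kernel sysfs interface. It assumes a generic SR-IOV-capable adapter enumerated as eth0 on a Linux host, run as root; the interface name is an assumption:

```python
# Minimal SR-IOV sketch for a Linux host (run as root).
# Assumes an SR-IOV-capable adapter enumerated as eth0.
from pathlib import Path

dev = Path("/sys/class/net/eth0/device")

# How many virtual functions the silicon supports.
total = int((dev / "sriov_totalvfs").read_text())
print(f"adapter supports up to {total} virtual functions")

# Carve out 8 virtual functions; each appears to the OS as its own
# adapter. (If VFs are already enabled, write "0" first to reset.)
(dev / "sriov_numvfs").write_text("8")

# Each VF is a PCI function in its own right, linked as virtfn0, virtfn1, ...
for vf in sorted(dev.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)  # e.g. virtfn0 -> 0000:0b:10.0
```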

 

3) OK, since this is a Cisco blog post, there must be an answer, right?

 

Cisco's answer is multi-fold. First, our Cisco UCS M81KR VIC (Virtual Interface Card) supports IEEE DCB and SR-IOV as part of our Unified Computing System. That means we support Ethernet, iSCSI, NFS, and FCoE over the same set of physical adapters. We virtualize the adapter by leveraging the SR-IOV specification, and we identify and secure each virtual adapter by leveraging our VN-Link technology.

 

The Cisco UCS M81KR VIC allows us to build wire-once access for all storage in the Unified Computing System (UCS). The days of building a server with separate adapters for Ethernet and FC are over. It reduces equipment and cabling by over 60% and lowers power by more than 20% relative to traditional deployments.
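
As a sanity check on that cabling number, the arithmetic for a single server works out as follows, again with illustrative assumptions (six legacy adapter ports collapsing onto two converged FCoE ports):

```python
# Illustrative legacy-vs-converged cabling comparison for one server.
# Port counts are assumptions for the sketch, not Cisco's measured data.

legacy_cables = 4 + 2     # 4 Ethernet ports + 2 Fibre Channel ports
converged_cables = 2      # 2 unified-fabric (FCoE) ports, dual fabric

reduction = 1 - converged_cables / legacy_cables
print(f"cabling reduction: {reduction:.0%}")  # 67%, in line with "over 60%"
```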

 

Having up to 128 virtual adapters per physical adapter affords us the flexibility to meet the most demanding server and virtualization environments, and we can even accelerate virtualization and lower CPU utilization. This means you do not have to buy a new I/O module for every four ports.
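
Conceptually, the card presents a pool of virtual adapters that management software carves up per server. A toy model of that 128-adapter limit might look like the following; the class and names are hypothetical illustrations, not the UCS API:

```python
# Toy model of carving virtual adapters out of one physical card.
# The class and names are hypothetical, not the UCS API.

class VirtualInterfaceCard:
    MAX_VIRTUAL_ADAPTERS = 128

    def __init__(self):
        self.vnics = []   # virtual Ethernet adapters
        self.vhbas = []   # virtual FCoE storage adapters

    def add(self, kind: str, name: str) -> None:
        if len(self.vnics) + len(self.vhbas) >= self.MAX_VIRTUAL_ADAPTERS:
            raise RuntimeError("card fully carved up: 128 virtual adapters")
        (self.vnics if kind == "vnic" else self.vhbas).append(name)

card = VirtualInterfaceCard()
for name in ("eth-data-0", "eth-vmotion-0"):
    card.add("vnic", name)
for name in ("fc-fabric-a", "fc-fabric-b"):
    card.add("vhba", name)
print(f"{len(card.vnics)} vNICs, {len(card.vhbas)} vHBAs defined")
```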

 

This is done by dedicating one of those virtual adapters to the guest, which lets us accelerate performance by offloading the address-translation functions currently handled by the CPU. Now you know one of the key benefits of the VN-Link technologies proposed by VMware and Cisco to the IEEE for standardization.
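
The post's context is VMware, but the same "hand a virtual function straight to the guest" idea is visible on a Linux/KVM host. Here is a hedged sketch using the Python libvirt bindings, where the domain name guest01 and the VF's PCI address are assumptions carried over from the earlier SR-IOV example:

```python
# Sketch: dedicate an SR-IOV virtual function to a guest on Linux/KVM,
# illustrating the hypervisor-bypass idea discussed above. The domain
# name and PCI address are assumptions for the example.
import libvirt

VF_HOSTDEV = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x10' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest01")
dom.attachDevice(VF_HOSTDEV)  # the guest now drives the VF directly,
                              # bypassing the hypervisor's soft switch
conn.close()
```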

 

These virtual adapters can be either Ethernet or FCoE ports, and their delivery to the physical server is completely stateless. Hence the IEEE DCB support along with SR-IOV, all delivered by UCS Manager.
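
Stateless means the adapter carries no burned-in identity the server depends on; MAC and WWN values are stamped onto the virtual adapters from central pools, so a server's personality can move between blades. A toy illustration, with made-up pool contents and helper names:

```python
# Toy illustration of stateless adapter identity: MACs and WWPNs come
# from central pools, not from the physical card. Values are made up.
from itertools import count

mac_pool = (f"00:25:b5:00:00:{i:02x}" for i in count(1))
wwpn_pool = (f"20:00:00:25:b5:00:00:{i:02x}" for i in count(1))

def stamp_vnic(name: str) -> dict:
    return {"name": name, "type": "ethernet", "mac": next(mac_pool)}

def stamp_vhba(name: str) -> dict:
    return {"name": name, "type": "fcoe", "wwpn": next(wwpn_pool)}

profile = [stamp_vnic("eth0"), stamp_vnic("eth1"), stamp_vhba("fc0")]
for adapter in profile:
    print(adapter)
```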

 

4) Parting shot:

 

Any major technology transition, however revered the previous generation, leaves you looking back and, at times, snickering at what came before. Which leaves us with the following question:

 

Are you the kind of data center architect with the boom box on your shoulder, or the iPod in your pocket (or media streaming from the cloud via mobile broadband)?


9 Comments.


  1. Thanks for the info.


  2. Great write up Frank!


  3. Frank - Would it be possible to run data externally to the servers via PCIe to a top-of-rack switch/appliance that incorporates your adapters and supports both SR-IOV and, eventually, MR-IOV? - Doug

     Doug, I have seen Gartner and a few startup organizations pushing PCIe. The Cisco Virtual Interface Card is exclusive to the UCS system, so no extension at this time. The merits of MR-IOV are being looked at as well.


  4. Agreed! I/O virtualization greatly simplifies systems and increases overall manageability and flexibility. We expect to see more of this in the market.

     An important addition to having a Unified Fabric is *managing* that fabric as part of a Real Time Infrastructure (see Gartner RTI). A nice exposé on managed fabrics is at http://fountnhead.blogspot.com/2009/06/rti-fabrics-not-just-networking-play.html. Virtualizing the I/O and network is as important as virtualizing the software!


  5. I will install a data center. Please help me with a step-by-step design and install. Thanks.


  6. Great overview of the VIC and its value prop. Thanks for sharing.


  7. Hi, I completely agree with you. This is a good overview. It is true that servers need access to network and storage. I have read your blog, and it is quite interesting. I have been reading along and thought I would leave a comment. I don't know what to say except that I am very much impressed.


  8. Thank you for the post, Frank. Great information.


  9. Hi, I completely agree with you. This is a good overview. It is true that all servers need access to the network. I have read your blog, and it is quite interesting.
