Could the Converged Network Adapter be the most important element in a Unified Fabric?
Many vendors are touting the benefits of IEEE DCB, FCoE, CEE, DCE, Unified Fabrics, and many other marketing monikers for equipment consolidation. Each component of the technology is interesting, but perhaps none more so than the Converged Network Adapter itself. Why so? Let’s take a look.
1) Servers need access to network and storage. In a legacy environment, access to these resources is done with separate network interface cards (NICs) and host bus adapters (HBAs). I will refer to these as fabric connections. The more fabric connections required, the more adapter ports are required. This applies to both bare-metal and virtualized servers. Adapter ports come on multiple single-port cards and on multi-port adapters, which means you need to buy a server with the appropriate PCI configuration, both in the number of slots and in slot bandwidth. Now you have built yourself a server that resembles a physical octopus (or greater).
Each adapter has firmware to be tested and certified. These are generally protocol specific (Ethernet, Fibre Channel, InfiniBand, etc.) and are usually managed by separate teams.
Ports need to be connected to have any value. That means cabling, patch equipment, and fabric switching equipment, each of which requires more support, power, and management (which in turn requires more ports for the management network). And do not forget the pair of SFP transceivers required for every fiber cable connection.
Why haven’t I mentioned bandwidth yet? I have rarely met a customer who told me they were overrunning the bandwidth of the adapter. I know, someone is going to say, “Yes, but what about high-performance computing and InfiniBand?” There are corner cases for everything. However, low-latency 10 Gigabit Ethernet is addressing a large number of these HPC applications.
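To make the “physical octopus” concrete, here is a back-of-the-envelope tally of the components a rack of legacy servers drags along. All of the per-server port counts below are illustrative assumptions, not vendor-published figures, and the helper function is hypothetical.

```python
# Back-of-the-envelope component tally; every count here is an assumption
# chosen for illustration, not a published benchmark.

def fabric_components(servers, nic_ports, hba_ports, fiber=True):
    """Count adapter ports, cables, switch ports, and SFPs for a deployment."""
    ports = servers * (nic_ports + hba_ports)
    cables = ports                      # one cable per adapter port
    switch_ports = ports                # one fabric switch port per cable
    sfps = 2 * cables if fiber else 0   # an SFP at each end of a fiber run
    return {"ports": ports, "cables": cables,
            "switch_ports": switch_ports, "sfps": sfps}

# A modest rack: 16 servers, each with 4 NIC ports and 2 HBA ports.
legacy = fabric_components(servers=16, nic_ports=4, hba_ports=2)
# Converged case: two CNA ports per server carry Ethernet and FCoE together.
converged = fabric_components(servers=16, nic_ports=2, hba_ports=0)

print(legacy)     # 96 ports, 96 cables, 96 switch ports, 192 SFPs
print(converged)  # 32 ports, 32 cables, 32 switch ports, 64 SFPs
```

Even with these modest assumptions, consolidating onto converged adapters drops every category of component by two thirds; the count only grows worse as fabric connections per server increase.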
2) Is the physical octopus caused by virtualization?
Not really, but virtualization certainly contributes to the issue. This has been recognized by PCI-SIG in its initiatives around Single Root I/O Virtualization (SR-IOV) and Multi-Root I/O Virtualization (MR-IOV).
PCI-SIG recognized that building out the server physical octopus does not scale: servers that are too large due to slot requirements, hypervisors forcing separate switches for traffic isolation for no good reason, happy cabling vendors, happy switch-port vendors, and a power and real-estate bill that is through the roof. The SR-IOV specification provides for physical and virtual functions on the adapter. Each virtual function presents itself to the loaded OS as a complete adapter.
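The physical-function/virtual-function split can be sketched as a toy model. The class names, fields, and MAC scheme below are hypothetical illustrations of the concept; real SR-IOV is configured through PCI configuration space by the OS and hypervisor, not through a Python API.

```python
# Toy model of SR-IOV: one physical function (PF) carving out virtual
# functions (VFs), each of which a guest OS sees as a complete adapter.
# All names and values are illustrative, not a real driver interface.
from dataclasses import dataclass, field

@dataclass
class VirtualFunction:
    """What a guest OS sees: an adapter with its own PCI identity and MAC."""
    pci_address: str
    mac: str

@dataclass
class PhysicalFunction:
    """The real adapter: it owns the physical port and carves out VFs."""
    pci_address: str
    max_vfs: int
    vfs: list = field(default_factory=list)

    def enable_vfs(self, count):
        if count > self.max_vfs:
            raise ValueError(f"adapter supports at most {self.max_vfs} VFs")
        self.vfs = [
            VirtualFunction(
                pci_address=f"{self.pci_address}-vf{i}",
                mac=f"02:00:00:00:00:{i:02x}",  # locally administered MAC
            )
            for i in range(count)
        ]
        return self.vfs

pf = PhysicalFunction(pci_address="0000:0b:00.0", max_vfs=128)
vfs = pf.enable_vfs(8)       # each VF can be handed to a different guest
print(len(vfs))              # 8
print(vfs[0].pci_address)    # 0000:0b:00.0-vf0
```

The point of the model: the guest never touches the physical function. It is handed a virtual function that looks and behaves like a dedicated adapter, while the single physical port and cable are shared underneath.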
3) OK, since this is a Cisco blog post, there must be an answer?
Cisco’s answer is multifold. First, our Cisco UCS M81KR VIC (Virtual Interface Card) supports IEEE DCB and SR-IOV as part of our Unified Computing System. That means we support Ethernet, iSCSI, NFS, and FCoE over the same set of physical adapters. We virtualize the adapter by leveraging the SR-IOV specification, and we identify and secure each virtual adapter by leveraging our VN-Link technology.
The Cisco UCS M81KR VIC allows us to build wire-once access for all storage in the Unified Computing System (UCS). The days of building a server with separate adapters for Ethernet and Fibre Channel are over. It reduces equipment and cabling by over 60% and lowers power by more than 20% relative to traditional deployments.
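As a quick sanity check on the shape of that cabling claim, consider assumed (not Cisco-published) per-server counts: six legacy cables collapsing onto two converged links.

```python
# Illustrative check of the cabling-reduction claim. The per-server counts
# are assumptions for the sketch, not measured or published figures.
legacy_cables_per_server = 6      # assumption: 4 NIC + 2 HBA ports
converged_cables_per_server = 2   # assumption: one dual-port CNA

reduction = 1 - converged_cables_per_server / legacy_cables_per_server
print(f"cable reduction: {reduction:.0%}")  # cable reduction: 67%
```

Under these assumptions a two-thirds reduction falls out directly, so a 60%-plus figure is plausible for servers that previously carried separate Ethernet and Fibre Channel adapters.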
Having up to 128 virtual adapters per physical adapter gives us the flexibility to meet the most demanding server and virtualization environments, and we can even accelerate virtualization and lower CPU utilization. This means you do not have to buy a new I/O module for every four ports.
This is done by dedicating one of those virtual adapters to the guest, which accelerates the address-translation functions currently handled by the CPU. Now you know one of the key benefits of the VN-Link technologies proposed by VMware and Cisco to the IEEE for standardization.
These virtual adapters can present either Ethernet or FCoE ports and are completely stateless as delivered to the physical server: IEEE DCB support along with SR-IOV, all delivered through UCS management.
4) Parting shot:
Any major transition of technologies, however revered the previous generation, leaves you looking back and snickering at times. Which leaves us with the following question:
Are you the kind of data center architect with the boom box on your shoulder or the iPod in your pocket (or media streaming from the cloud via mobile broadband)?