
Outside of that large, black, monolithic machine in the middle of the datacenter known as the mainframe, there aren’t many servers that require as many network and storage connections as the backup server.  It’s not sexy, it’s not computing Pi, it generally doesn’t run a hypervisor, and it’s bought with one goal in mind: moving data.  Not just some data, but a lot of it.  These machines often move all of the data in your datacenter off of disk and onto tape, either real or virtual.  In many datacenters, backup servers are the only non-x86 platforms left, thanks to their ability to hold high numbers of HBAs for SAN connectivity and NICs for LAN connectivity.  They’re the tractors of the datacenter.

I was talking with a client of mine the other day about moving their backup servers from their existing RISC platform to x86, and Cisco UCS was on the table.  The current server had a combination of 14 HBAs and NICs in it for reaching various LAN and SAN networks, to move terabytes worth of data from various sources into their virtual tape library infrastructure.  They were using 8Gb Fibre Channel HBAs and 10GbE NICs.  She was looking for a UCS server with similar capacity in terms of slots and expansion.

I looked over the various models of C-Series rack mount servers and didn’t see one with that number of slots, but then I realized I could do better.

Get rid of the HBAs and NICs.

If you look at a backup server, it can be backing up from many different sources and use different protocols to move that data.  For example, it could be reading data off the LAN and then writing it to the SAN.  It could read it off the SAN and write it back to the SAN.  It’s even possible to read from the LAN and write to the LAN.  All during the same backup window.  So if you had a backup server with 6 HBAs and 8 NICs, and a backup job that only requires reading and writing to/from the SAN, all 8 of those NICs are wasted.  Likewise, if you’re performing a backup job that only requires LAN access, all of those 8Gb FC HBAs are wasted as well.  The current model has a ton of wasted bandwidth in it.

So what if we just replaced all those HBAs and NICs with Converged Network Adapters (CNAs) that allow the server to communicate via FCoE?  Now we just need to provide enough overall bandwidth to the server, and every adapter can be put to work.  Let’s use the above example of 6 x 8Gb FC HBAs and 8 x 10GbE NICs.

If we look back at J Metz’s blog regarding 8Gb FC vs. 10GbE FCoE, we see that a 10GbE connection can carry 50% more traffic than an 8Gb FC HBA due to a more efficient encoding method.  The above server is currently spec’d for 4800MB/s of SAN bandwidth (6 x 800MB/s).  To get that same bandwidth with 10GbE CNAs, you’d only need 4 of them, since each CNA can move 1200MB/s.  So we’ve already cut out 2 adapters and are down to a total of 12 instead of 14.
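If you want to check that math yourself, here’s a quick back-of-the-napkin sketch.  The ~800MB/s and ~1200MB/s usable rates are the rough rule-of-thumb numbers from above, not vendor specs:

```python
import math

# Rough usable data rates (rule-of-thumb assumptions, not vendor specs):
# 8Gb FC uses 8b/10b encoding  -> roughly 800 MB/s per HBA port
# 10GbE FCoE uses 64b/66b      -> roughly 1200 MB/s per CNA port
FC_8G_MBPS = 800
CNA_10GE_MBPS = 1200

hba_count = 6
san_bandwidth = hba_count * FC_8G_MBPS            # 4800 MB/s of SAN bandwidth today

# How many 10GbE CNAs does it take to match that SAN bandwidth?
cnas_needed = math.ceil(san_bandwidth / CNA_10GE_MBPS)

print(f"Existing SAN bandwidth: {san_bandwidth} MB/s from {hba_count} HBAs")
print(f"CNAs needed to match it: {cnas_needed}")  # -> 4
```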

While a 10Gb CNA doesn’t provide any more LAN bandwidth than the 10GbE NICs did, there is one advantage: flexibility.  Any and all of the CNAs can carry both LAN and SAN traffic.  Let’s say we propose the UCS C460, which has 10 PCIe slots.

 

If the server is doing a job where it reads from the SAN and writes to the SAN, instead of being limited to 4800MB/s of bandwidth, the server could leverage all of the CNAs to read and write from the SAN.  You could now use 10 x 1200MB/s, or 12,000MB/s, for that SAN-only job.  Likewise, for a LAN-to-LAN only job you could leverage 10 NICs’ worth of bandwidth instead of 8 as before.
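To put some numbers behind that flexibility argument, here’s a small sketch (my own illustration, not a sizing tool) comparing how much bandwidth each job type can actually use with the dedicated adapters versus 10 CNAs in the C460:

```python
# Usable bandwidth per job type: dedicated adapters vs. 10 CNAs in a C460.
# Same rough per-port assumptions as above: ~800 MB/s per 8Gb FC HBA and
# ~1200 MB/s per 10GbE port (NIC or CNA).
FC_HBA_MBPS, PORT_10GE_MBPS = 800, 1200
SAN_HBAS, LAN_NICS, CNA_PORTS = 6, 8, 10

dedicated_caps = {
    # With dedicated adapters, each job type is capped by the adapters it can use.
    "SAN-to-SAN": SAN_HBAS * FC_HBA_MBPS,      # 4800 MB/s
    "LAN-to-LAN": LAN_NICS * PORT_10GE_MBPS,   # 9600 MB/s
}

# With converged adapters, every port can carry either traffic type.
converged_cap = CNA_PORTS * PORT_10GE_MBPS     # 12000 MB/s for either job type

for job, cap in dedicated_caps.items():
    print(f"{job}: dedicated {cap} MB/s -> converged {converged_cap} MB/s")
```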

<added on Sept 11, 2012 to the chagrin of my editors>

I was explaining this post to a client today, and it pretty much boiled down to a simpler example with easier-to-understand numbers.  If a 10GbE NIC can push 10Gb of data and an 8Gb FC HBA can only push 6.8Gb of data, then if you replaced the 8Gb FC HBA with a CNA and rate limited the CNA to allow only 6.8Gb of FCoE traffic, you’d get the leftover 3.2Gb of LAN bandwidth for free.  Who doesn’t like free?  Also, leftovers aren’t that bad either.
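Or, as per-adapter arithmetic (again using the assumed 6.8Gb effective data rate of an 8Gb FC HBA):

```python
# Per-adapter view: swap an 8Gb FC HBA for a 10GbE CNA, rate limit the FCoE
# class to the HBA's effective data rate, and whatever is left over on the
# pipe is available for LAN traffic. (Assumed rates, see above.)
CNA_LINE_RATE_GBPS = 10.0   # 10GbE CNA
FC_EFFECTIVE_GBPS = 6.8     # 8Gb FC after 8b/10b encoding overhead

fcoe_rate_limit = FC_EFFECTIVE_GBPS
leftover_lan = CNA_LINE_RATE_GBPS - fcoe_rate_limit

print(f"FCoE class rate limit: {fcoe_rate_limit} Gb/s")
print(f"'Free' LAN bandwidth:  {leftover_lan:.1f} Gb/s")   # -> 3.2 Gb/s
```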

<Editors: I’m so sorry, I’ll have the interns get you bagels>

 

Do you even need that much bandwidth going into and out of the server?  Probably not, so my client could reduce the overall number of CNAs in the server from 10 down to maybe 8.  How would they know how far to reduce it?  Measure the bandwidth usage on the existing server.
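One quick-and-dirty way to get those numbers is to sample the per-interface byte counters on the existing server over a full backup window.  This is only a sketch, assuming the psutil library is available and that a 10-second sample is good enough; your backup software or OS tools may already report the same thing:

```python
import time
import psutil  # assumed available; any per-interface byte counter works the same way

INTERVAL_S = 10   # sample every 10 seconds
SAMPLES = 360     # placeholder: ~1 hour; in practice, cover the whole backup window

def snapshot():
    # bytes_sent / bytes_recv per interface since boot
    return {nic: (io.bytes_sent, io.bytes_recv)
            for nic, io in psutil.net_io_counters(pernic=True).items()}

peaks = {}
before = snapshot()
for _ in range(SAMPLES):
    time.sleep(INTERVAL_S)
    after = snapshot()
    for nic, (tx, rx) in after.items():
        if nic not in before:
            continue
        rate = ((tx - before[nic][0]) + (rx - before[nic][1])) / INTERVAL_S
        peaks[nic] = max(peaks.get(nic, 0.0), rate)
    before = after

for nic, rate in sorted(peaks.items()):
    print(f"{nic}: peak {rate / 1e6:.0f} MB/s")
```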

But what if you needed to connect to all these existing FC networks?

That, my friends, is where you leverage the Nexus 5000 as your access layer.  Standardize the server on CNAs and unified connectivity to the access layer, and then let the access layer connect to the various LANs and SANs.  With 48 or 96 onboard ports of FC/FCoE/Ethernet, not counting expansion via Nexus 2000 Fabric Extenders, you’ll have all the ports you need to connect to your existing FC SANs, along with the flexibility to take advantage of the dynamic backup loads your infrastructure requires.  As another option, for those of you who leverage the Nexus 7000 for “end of row” topologies, you could connect the server to the Nexus 7000 and then, for storage connectivity, connect to your existing MDS switches via FCoE using the 8-port 10GbE FCoE module installed in the MDS, providing solid multi-hop connectivity.


7 Comments.


  1. So, like ever since the marketing machines spun up for iSCSI and FCoE we come to that one single question: does it cope?


    • Thanks for taking the time to reply. I agree that both iSCSI and FCoE had quite a bit of hype around them; they’re following Gartner’s “hype cycle” and are now in the “Slope of Enlightenment.” However, we’ve seen iSCSI products for quite a while, and in the past year we’ve seen quite a few storage array vendors release FCoE-capable products. Giving the server the ability to access FC/iSCSI/NFS and general LAN traffic all from a single adapter provides flexibility that we don’t have in segregated HBA/NIC installations.


  2. Why wouldn’t you directly connect the backup server to the MDS?


    • Carlos, thanks for responding. This is a question that comes up frequently. While the MDS supports FCoE connectivity via the 8-port FCoE module, it does not support switching of your non-FCoE LAN traffic, such as web or other TCP/IP traffic. So while your server would have all CNAs, it would be dedicating some of them to storage, namely those connected directly to the MDS. You’ll get much better utilization out of connecting to a platform that can carry both FCoE and LAN traffic, such as a Nexus 2000 (FEX), 5000 or 7000.


  3. I can also see this on the blade servers. Using a B200 M3, you can get 40G of bandwidth to EACH fabric for a server using the mLOM (VIC 1240 with expander card) and/or the VIC 1280. Both of these are CNAs.
    This way I can carve up traffic any way I want without the extra cables, ports and power usage of a rack mount. Until now, you almost had to use a rack mount for these types of bandwidths, as blades couldn’t push the traffic. Not true anymore! Do you agree?


    • Scott, this is an idea that many people have been toying around with. The original reason for rack mount, or more often floor-standing, backup servers was that you needed a large number of PCI slots to accommodate the connectivity and bandwidth. However, given the performance of the VICs, you could very well use a blade for backups. One place you could use this blade-based strategy is with pod or block based architectures, where you have a self-contained UCS + storage configuration. One could dedicate a single blade to back up all of the VMs and physical machines within the UCS domain. This would also keep almost all the traffic within the UCS domain itself, so the only storage traffic leaving the block would be traffic destined for a tape drive or virtual tape library.

      Do you see clients being open to this rackmount to blade consolidation?

