

Data Center and Cloud

Last night I posted on FCoE vs. iSCSI. This morning I picked up another debate, FCoE vs. InfiniBand, by commenting on Jerome Wendt’s “Is FCoE a diabolical plot?“ I posted the following on Jerome’s blog as a reply, but I want to make sure that Cisco’s blog readers have an opportunity to learn about the topic and form an opinion as well.

I do not think that FCoE is a way to lock customers into FC. If anything, it is the other way around. InfiniBand has always been faster than Ethernet and FC, and for a while it will continue to be (the same was true for HIPPI and FDDI… what happened to them?). InfiniBand has always touted the I/O consolidation value proposition, but that comes at a cost that makes it prohibitive and unrealistic. And I am not only talking about the pure hardware costs, but all of the implementation costs associated with it.

I think the point of FCoE is that it can ultimately represent the easiest transition ‘out’ of FC, assuming that is what you want. And I am saying that because, despite what you and I may believe in terms of protocol superiority (see also my post on FCoE vs. iSCSI in response to Marc Farley), I still talk to customers who have no intention whatsoever of moving away from FC, hence Cisco continues to have a solid roadmap for the MDS.

While I do believe that there will be a role for iSCSI and FCoE in the Unified Fabric, I do not believe there is a role for InfiniBand there. InfiniBand today is required not for its bandwidth, but for its latency; its application is ultra-low-latency high-performance computing. InfiniBand is not easy, and people know this. Even in high-performance computing environments, customers try to use Ethernet as much as they can, and they only revert to IB when the business case for low latency justifies it.

From a Cisco perspective, we do not really push one technology over another (we offer a solution in every one of those camps), but I think it is important to keep the customer’s perspective in mind, and if customers ask for a simple, integrated, transitional solution, I think we are being honest if we propose FCoE.
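To make the I/O consolidation point a little more concrete: FCoE simply carries an unmodified Fibre Channel frame inside an Ethernet frame on a lossless 10GbE fabric, which is why existing FC zoning, management, and tooling carry over while the cabling and adapters get consolidated. Below is a minimal Python sketch of that encapsulation; it is a simplified illustration of the FC-BB-5 framing, and the MAC addresses, SOF/EOF codes, and dummy payload are placeholders rather than values taken from a real trace.

import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def fcoe_frame(fc_frame: bytes,
               dst_mac: bytes = b"\x0e\xfc\x00\x00\x00\x01",  # placeholder FPMA-style MAC
               src_mac: bytes = b"\x0e\xfc\x00\x00\x00\x02",  # placeholder MAC
               sof: int = 0x36, eof: int = 0x41) -> bytes:    # illustrative SOF/EOF codes
    # Wrap a complete FC frame (header + payload + CRC) in an Ethernet frame.
    eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + bytes([sof])   # version + reserved bits, then start-of-frame
    fcoe_trl = bytes([eof]) + bytes(3)    # end-of-frame, then reserved padding
    return eth_hdr + fcoe_hdr + fc_frame + fcoe_trl

# A dummy 36-byte FC frame stands in for a real SCSI command frame.
print(len(fcoe_frame(bytes(36))), "bytes on the wire (before the Ethernet FCS)")

The point of the sketch is not the byte layout itself but the fact that nothing about the FC frame changes: the same frame a server would put on a dedicated FC link rides over the converged Ethernet fabric instead.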


3 Comments.


  1. Why is FCoE taken seriously? There are fundamental shortcomings in the FCoE protocol, shortcomings that can’t be solved. See also http://www.ietf.org/mail-archive/web/ips/current/msg02325.html


  2. Hey Dante, great post. What follows is MY perspective (not my employer’s) on the FCoE vs. IB argument.

     a.) Historically, both IB and FC have been more difficult to manage than competing IP-based solutions (though I’d argue that IQNs are as much of a pain as node addresses and WWNs). With the advent of truly GUI-driven switching solutions, a LOT of the legwork required has been reduced both for FC and for IB. As a matter of fact, a true novice to both protocols can get a solution set up and running within an hour or two (cabling and hardware setup included).

     b.) FCoE does much to assuage some of the performance concerns associated with current-generation GigE-based iSCSI by bringing 10GbE into the equation and quashing the latency issues that were terrible with GigE. However, it STILL pales in comparison to IB from a latency and bandwidth perspective. Coupled with the added cost (and the hardware changes required), I see more of a case for using converged switches like the Xsigo or QLogic IB-to-FC solutions for companies that are currently using IB for node-to-node clustering. If they’re refreshing their NOC, then FCoE makes some level of sense, but you’re still going to have to provide legacy support (albeit limited to 8 total FC ports for legacy attach). Then there’s the added overhead of managing 3 separate fabrics: FCoE from host to Nexus, FC to legacy MDS units, and 10GbE to (hopefully) a Catalyst 6500 series frame for IP. Ouch! Couple that with the absolutely abysmal power requirements per CNA (24 W for first gen) and the incremental cost: if you buy now, you’re going to want to move to Gen 2 pretty quickly.

     c.) Regarding the bandwidth vs. latency argument: sure, IB can work both ways. IPoIB proves that inherently, and companies like Xsigo and QLogic have proven that you can have your cake and eat it too when it comes to fabric conversion. SDR/DDR/QDR InfiniBand has its place in HPC, to be sure, but even moving it out from there, you can realize lower CapEx/OpEx from using HCAs and these converged routers than from overhauling to FCoE and CNAs.

     Anyhow, those are my thoughts. ;)

     Cheers,
     Dave Graham


  3. Well, I wonder what the guys from http://www.mellanox.com would say about that. This article sounds a little bit like a quote from some guy or other who said something like: “personal computers will never be used in the common household.” ;-)

