

Data Center and Cloud

One more anti-FCoE post, this time from Greg Ferro, who seems to be a supporter of storage over IP. I am not yet sure I understand why people who should support FCoE (i.e. the networking community) are taking sides against it, while the strongest supporters seem to be coming from the storage community. I do not even understand why we should take any sides at all, but I think it is important to clarify the facts and let readers make the final call. There is one piece of the post that is important to correct, and it is the one that lists the "reasons not to use FCoE." Let's go line by line:

* FCP endpoints are inherently costlier than simple NICs - the cost argument (initiators are more expensive)

This is absolutely true, but it is not the right comparison. You should be comparing FCoE cards with TOE (TCP Offload Engine) cards or iSCSI HBAs. If you do that comparison, you immediately realize that the cost advantage is on the FCoE side: iSCSI adapters are priced anywhere between $1,000 and $2,000, and they are mostly 1GE today. With FCoE you can also opt for a software-only implementation (a much lighter stack than iSCSI/TCP/IP). If you do that, the cost drops further, because you can run the FCoE software stack on a regular 10GE NIC ($799 list price).

* The credit mechanism is highly unstable for larger networks (check switch vendors' planning docs for the network diameter limits) - the scaling argument

The credit mechanism is a link-level flow control mechanism and has nothing to do with the size of the network. Fibre Channel networks are small because the Domain ID field in FC frames is an 8-bit field, which means you cannot have an FC SAN with more than 255 Domain IDs (i.e. switches). If you use VSANs, you can multiply that by the number of VSANs (up to 4000). I do not think this is a "real" problem: I have never heard a single storage administrator complain about the size of their FC SAN; the problems are usually elsewhere. Besides all that, FCoE does not use credits at all, since Ethernet is the transport.

* The assumption of low losses due to errors might radically change when moving from 1 to 10 Gb/s - the scaling argument

There is no "assumption" of "low losses." FCoE runs on top of a lossless Data Center Ethernet network, and that is the real differentiator. Data Center Ethernet (as pointed out by Cisco and Dell in previous posts) will benefit any kind of storage traffic over Ethernet, be it FCoE, iSCSI or NAS (CIFS/NFS).

* Ethernet has no credit mechanism and any mechanism with a similar effect increases the end point cost.

Ethernet has had the standard PAUSE mechanism defined for a long time, and you have been paying for it with every single NIC or switch you have purchased over the past few years. We have simply never found the right application for it.
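To make this concrete, here is a minimal sketch (in Python, purely illustrative and not from the original post) of what a standard IEEE 802.3x PAUSE frame looks like on the wire; every compliant MAC already knows how to generate and honor it:

```python
import struct

PAUSE_DEST_MAC = bytes.fromhex("0180c2000001")  # reserved multicast address for MAC Control frames
MAC_CONTROL_ETHERTYPE = 0x8808                  # EtherType for MAC Control
PAUSE_OPCODE = 0x0001                           # opcode of the classic 802.3x PAUSE operation

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Build a minimal 802.3x PAUSE frame (FCS omitted).

    pause_quanta is a 16-bit value; one quantum is 512 bit times, so at
    10 Gb/s a quantum lasts 51.2 ns and the maximum pause (0xFFFF quanta)
    is roughly 3.4 ms.
    """
    header = PAUSE_DEST_MAC + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE)
    payload = struct.pack("!HH", PAUSE_OPCODE, pause_quanta)
    # Pad to the 60-byte minimum Ethernet frame size (FCS excluded).
    return (header + payload).ljust(60, b"\x00")

# Hypothetical source MAC, used only for the example.
frame = build_pause_frame(bytes.fromhex("02abcdef0001"), pause_quanta=0xFFFF)
print(len(frame), frame.hex())
```

PFC, described next, keeps the same MAC Control framing but carries a per-priority enable vector and eight timers instead of a single one, so the pause can apply to one traffic class at a time.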
802.1Qbb expands the use of PAUSE by defining PFC (Priority-based Flow Control), which allows network nodes to selectively decide which types of traffic should be lossless and which ones should not. On a more theoretical note, if all of Ethernet became lossless by deploying PAUSE (I am far from suggesting this is the right thing to do), the cost of end points (and switch ports) would be lower, because in a lossless environment buffers can be sized more easily than in a best-effort network, where you have to size buffers according to the maximum burst you would like to absorb (a back-of-the-envelope sketch of this appears a little further below).

* Building a transport layer in the protocol stack has always been the preferred choice of the networking community - the community argument

Once again, very true, but it is not the real point. This is not about the networking community in isolation, but about the broader data center community, which includes the storage administrators who, for better or worse, do their job with Fibre Channel today and would like to find the smoothest and simplest way to leverage Ethernet without putting their jobs at risk.

* The "performance penalty" of a complete protocol stack has always been overstated (and overrated). Advances in protocol stack implementation and finer tuning of the congestion control mechanisms make conventional TCP/IP perform well even at 10 Gb/s and over.

True, but I think it is difficult to argue that more protocols are better than fewer protocols in terms of performance. Always remember that "perfection is achieved not when there is nothing left to add, but when there is nothing left to take away."

* Moreover, the multicore processors that have become dominant on the computing scene have enough compute cycles available to make any "offloading" possible as a mere code restructuring exercise (see the stack reports from Intel, IBM etc.)

All true, but once again, not necessary. The issue is not TCP, but the fact that iSCSI is a different beast than FC and the storage community does not necessarily like it (if they did like it, do you really think they would all be so "blind" as to miss the iSCSI opportunity? They must be smarter than that!)

* Building on a complete stack makes available a wealth of operational and management mechanisms built over the years by the networking community (routing, provisioning, security, service location etc.) - the community argument

This is a very good argument and I agree with it, but once again, it is a benefit that has not resonated well with the storage community so far. We need to keep in mind that storage folks have a job to do and have been doing it just fine so far. iSCSI is better, no argument there, but it is different and not necessarily simple to understand and use for someone who is already familiar with FC. Also, it is one more thing to deal with, and people are not just going to rip and replace FC because iSCSI is better. FCoE gives them a migration path and over time might even make iSCSI easier to adopt.

* Higher level storage access over an IP network is widely available and having both block and file served over the same connection with the same support and management structure is compelling - the community argument

Very true, but let's not forget that the large majority of storage traffic is local, where Ethernet represents a unifying transport as much as IP does.
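Coming back to the buffer-sizing remark above, here is the kind of back-of-the-envelope arithmetic behind it. This is a sketch with made-up but plausible figures (a 100 m 10GE link, jumbo frames, a 1 microsecond sender reaction time); none of these numbers come from the original post or from any vendor document:

```python
LINE_RATE_BPS = 10e9          # assumed 10GE link
MTU_BITS = 9216 * 8           # assumed jumbo frame size
CABLE_M = 100                 # assumed cable length
PROPAGATION_M_PER_S = 2e8     # roughly 2/3 of the speed of light in fiber or copper
SENDER_REACTION_S = 1e-6      # assumed time for the sending MAC to react to PAUSE

def lossless_headroom_bytes() -> float:
    """Data still arriving after the receiver decides to send PAUSE:
    round-trip propagation, the sender's reaction time, plus one
    maximum-size frame already in flight in each direction."""
    in_flight_s = 2 * CABLE_M / PROPAGATION_M_PER_S + SENDER_REACTION_S
    return (in_flight_s * LINE_RATE_BPS + 2 * MTU_BITS) / 8

def best_effort_buffer_bytes(burst_frames: int) -> float:
    """Best-effort sizing: hold the largest burst you want to absorb without drops."""
    return burst_frames * MTU_BITS / 8

print(f"lossless headroom : ~{lossless_headroom_bytes() / 1024:.0f} KB per port")
print(f"best-effort buffer: ~{best_effort_buffer_bytes(500) / 1024:.0f} KB for a 500-frame burst")
```

The exact formula a switch vendor uses will differ, but the asymmetry is the point: the lossless side scales with the link's round-trip delay (roughly 20 KB in this example), while the best-effort side scales with whatever burst you decide you must absorb.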
Data Center Ethernet is the key enabler for a unified fabric, which translates into benefits for all of the storage transports.

* Highly efficient networks are easy to build over IP with optimal (shortest path) routing, while Layer 2 networks use bridging and are limited by the logical tree structure that bridges must follow. The effort to combine routers and bridges (rbridges) promises to change that, but it will take some time to finalize (and we don't know exactly how it will operate). Until then the scale of Layer 2 networks is going to be seriously limited - the scaling argument

There are two things to consider here: (1) Data Center Ethernet includes the ability to run alternative topology selection protocols that enhance the scalability of Layer 2 domains by ultimately removing the Spanning Tree Protocol (STP), and (2) to benefit from the FCoE value proposition all you need is an access layer switch (I would suggest the Cisco Nexus 5000 ;-) ). The rest of the infrastructure (Ethernet and FC) remains unchanged.

Almost every Fortune 1,000 company has a large installed base of FC storage arrays and SANs. FCoE allows new servers to utilize that existing infrastructure with fewer cables, adapters, etc., without incurring any performance penalty. It is not a realistic scenario for people to rip out all of these storage arrays and replace them with iSCSI targets, or to go through and upgrade each array to make it iSCSI-enabled (load new firmware, plug more Ethernet cards into the array, expand the Ethernet network to accommodate all the new array ports, and so on).

Once again, this is about building a Unified Fabric over Ethernet and allowing for the smoothest, most realistic transition possible. iSCSI and FCoE should not be positioned as alternatives, because they address and solve different problems.
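One simple way to see why they solve different problems is to look at where each sits in the stack. The sketch below is purely illustrative (it is not from the original post); it just lists the encapsulation layers a block I/O request travels through in each case:

```python
# Layering comparison (illustrative): FCoE carries the Fibre Channel frame
# unchanged inside an Ethernet frame (EtherType 0x8906), so the existing FC
# model is preserved, while iSCSI rebuilds block I/O as SCSI over TCP/IP.

FCOE_STACK = [
    "Ethernet (EtherType 0x8906, mapped to a lossless priority via PFC)",
    "FCoE encapsulation (start-of-frame / end-of-frame markers)",
    "Fibre Channel frame, carried unchanged (same FC header and payload)",
    "SCSI command set (FCP)",
]

ISCSI_STACK = [
    "Ethernet (EtherType 0x0800)",
    "IP",
    "TCP (well-known port 3260)",
    "iSCSI PDU",
    "SCSI command set",
]

def show(name, layers):
    """Print a stack with one level of indentation per encapsulation layer."""
    print(name)
    for depth, layer in enumerate(layers):
        print("  " * depth + "- " + layer)

show("FCoE:", FCOE_STACK)
show("iSCSI:", ISCSI_STACK)
```

Because the FC frame travels untouched, the storage team keeps the model it already operates; because iSCSI rides on TCP/IP, it keeps the routing and management toolbox of the IP world. That is exactly why the two are complementary rather than alternatives.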


3 Comments.


  1. Dante,

     With regard to your point about alternative topology selections for Ethernet, that looks really interesting. It will be much more interesting when (a) Cisco have decided which horse they're backing (IETF or TRILL) and/or (b) one of the two is standardised!

     Your point that "… the large majority of storage traffic is local …" is valid, but "local" != "L2 adjacent". It's perfectly common to have storage >= 1 L3 hop away, and such a topology gives substantial increases in flexibility.

     Having said this, I do see a position for FCoE. I'm much more excited about the other prospects opened up for IPoE traffic by DCE, though!

     Rgds,
     Niall.


  2. I am not anti-FCoE, I am anti-Fibrechannel. After Cisco led the market by being one of the first companies to release an iSCSI-to-Fibrechannel router (the SN5420, from memory) in 2000 or so, the takeup of iSCSI has been slow. I do feel that Fibrechannel has delivered far too little for the price we have paid. I have posted a response at http://etherealmind.com.


  3. FCoE was invented to keep the whole FC ecosystem going. There's absolutely no other sense in it. Apart from what Greg pointed to (and he's 200% correct), there's another small thing virtually nobody talks about. Nearly all of the modern NIC ASICs have TCP-optimized data transfer engines. So working at the frame level (where Ethernet frames live) will give much worse results (both latency and throughput) compared to TCP. My company sells a Windows AoE (ATA-over-Ethernet) initiator and we've already hit what the FCoE guys will see when FCoE reaches the production phase…

     Regards,
     Anton Kolomyeytsev
     CEO, Rocket Division Software

