Cisco Blogs

Registering a Sense of FCoE Perspective: Sniffling, Scaling and Storage

January 1, 2012 - 6 Comments

When The Register published a conversation with Brocade on December 8 about the success of their 16Gb Fibre Channel vs. Cisco’s FCoE solutions, you just knew there would be several elements that would raise an eyebrow or two. Maybe three.

Personally, I found the comparison between Brocade’s 16Gb solutions and Cisco’s FCoE solutions to be something of a red herring. That is, there are different reasons why a customer would want to use one tool in the toolbox versus another, but they were saying “our new jackhammer is better than their entire toolbox.”

Nevertheless, some people felt that the article was an unbalanced promotion of Brocade’s new toys. I was invited by The Register author Chris Mellor to write a response article, which I did, and then waited for it to be printed.

And waited. Chris told me, as I mentioned on The Nekkid Tech podcast, that the editors of El Reg had balked at giving a vendor such a bully pulpit unchallenged. (While I can understand this logic, I can’t help but wonder if the same litmus test was applied to the original piece, but oh well. Such is life.)

Eventually, though, three weeks later The Register quietly published a version of my response article. Given the timing (it was in the middle of the end-of-year holiday week) and the terseness of some of the points I was trying to make, I’m not sure the conversation got a thorough vetting. After all, false assertions are short, clean and simple. Clearing them up generally takes more energy and effort.


For the record, by and large I like the piece that Mellor wrote. After all, not only did he promote me to being a “boss” (has a nice ring to it, doesn’t it Mr. Chambers? 🙂 ) but he left the amusing impression that I was pouting and petulant with my nose bent out of joint. I mean, come on, admit it: you got that same visual too when you read “FCoE ahead of 16GBit/s FC, sniffs boss,” didn’t you? (I just can’t picture myself stomping my foot in a temper-tantrum without breaking into fits of laughter.)

In any case, there were a couple of elements in the article that deserve clarification and correction. In the process of simplifying my arguments, Mellor inadvertently over-simplified a few of the points I was trying to make (namely about FC and FCoE purchase trends). At the very least, the sources of my information were omitted completely, which could leave the impression of a she said/he-stomps-his-foot-in-protest-and-says-“nuh-uh” debate.

To that end, I’m enclosing the full article I submitted to The Register. You will notice some of the quotes again, obviously, but hopefully it will elevate the conversation a bit beyond where it left off.

After perusing Brocade’s conversation with The Register on 16Gb Fibre Channel, we at Cisco found it interesting and amusing that Brocade was spending a great deal of time waxing prosaic about our success with Fibre Channel over Ethernet (FCoE). We found several curiosities in Brocade’s assertions, not the least of which were the apples-to-oranges comparisons.

Now, when people get overly enthusiastic, they also tend to get a little sloppy with the facts. While Brocade should be congratulated on a successful product launch (after all, a lot of time, effort, and engineering brilliance goes into the development of milestone technologies – especially in storage), it seems that their enthusiasm far exceeds their sense of perspective.

It is odd that, in order to support the claim that Cisco has made a mistake with FCoE, Brocade’s own numbers are used as evidence. When you look at broader sets of numbers, the image sharpens into focus. According to Crehan Research’s latest Market Share and Forecast for Q3 2011, Brocade has sold just under 60,000 16Gb FC switch ports. In the same period, Cisco sold almost 275,000 FCoE ports: more than four times as many. The trend matches the bigger picture as well. For the Q3 CY2011 total SAN switching market (FC + FCoE), Dell’Oro reports Cisco’s quarter-over-quarter share is up 6.3% while Brocade’s is down 6.2% – in the very quarter Brocade released 16Gb, and against a declining total SAN revenue base of which Brocade claims 16Gb represents 18%.

Perhaps Cisco is being held to a different standard, though. Analyst Aaron Rakers of Stifel Nicolaus asserts that Cisco is up to two years behind Brocade, but it’s not clear: 18-24 months behind for what, exactly?

Evidently he’s referring to 16Gb networks. However, at the moment we are still several months away from creating an end-to-end 16Gb Fibre Channel network, from servers to storage. Why? Because there are no 16Gb Fibre Channel storage devices currently available.

On the other hand, customers can deploy FCoE right now, end-to-end if they wish, from top-of-rack (TOR) switches to Director-Class core FCoE switches to FCoE-based storage arrays from a variety of vendors. If you want it, you can have it right now. Or, conversely, FCoE can be inserted at the access/edge, in “zero-hop” pods, or in a wide variety of other configurations.

Ultimately, having something available right now versus next year does not sound like lagging behind; in fact, 16Gb FC has a lot of catching up to do with FCoE in terms of use cases, feature sets, and deployment scenarios.

Each of these technologies is a tool in the data center toolbox. FCoE – which implies consolidated networking traffic – appeals to a broad customer need. FCoE offers flexibility in deployment topologies, promoting agility in controlling bandwidth and reducing inefficiencies within the data center. Considering that 16Gb FC is only about 33% faster than 10Gb FCoE in effective data rate, this is not even about speed; most storage customers do not push more than 4Gb of throughput anyway. In TheInfoPro’s Storage Wave 16 Preview, with 151 enterprise respondents, 24% indicated they were going to wait until after 2013 to move to 16Gb, and another 62% indicated they weren’t thinking about transitioning to 16Gb at all.
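(As an aside: the “33% faster” figure reflects effective data rates rather than nominal link speeds. 16GFC delivers roughly 1,600 MB/s per direction, while 10Gb Ethernet carrying FCoE delivers roughly 1,200 MB/s. A quick back-of-envelope check, using those commonly published round numbers and ignoring frame-level overhead:)

```python
# Back-of-envelope check of the "33% faster" claim.
# Assumed figures: published per-direction effective data rates,
# protocol framing overhead ignored.
GFC16_MBPS = 1600   # 16GFC: 14.025 Gbaud with 64b/66b encoding
FCOE10_MBPS = 1200  # 10GbE FCoE: ~10 Gbit/s payload rate

speedup = GFC16_MBPS / FCOE10_MBPS - 1
print(f"16Gb FC is about {speedup:.0%} faster than 10Gb FCoE")
# prints "16Gb FC is about 33% faster than 10Gb FCoE"
```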

If you take these messages from the marketplace, Cisco isn’t falling behind. Brocade is pushing technology that is far ahead of the majority of customers’ needs.

Most storage customers indicate that “speed” is not as important to them as flexibility. Moreover, FCoE is more of a ‘bolt-on’ technology for existing storage environments, whereas Brocade’s 16Gb Fibre Channel requires a rip-and-replace with new chassis; customers appreciate being able to prolong their investments whilst seamlessly accommodating future growth.

Finally, this is about much more than the numbers; it’s about overall vision and strategy. To that end, Rakers seems to miss the big picture. He is mixing and matching his technology strategies. He claims that Cisco’s “approach conflicts with the company’s model collapsing the aggregation and application layers,” confusing storage with Ethernet LAN and server products, which have radically different design and deployment paradigms.

On the contrary, Cisco’s approach to storage has always been to provide choice – without forcing customers to choose. Cisco approaches converged networks from the position that storage customers should not have to sacrifice design, architecture, or network visibility merely by changing the underlying wire. In fact, it is because Cisco takes a standards-based convergence model that customers can retain traditional storage network designs whilst simultaneously pursuing Ethernet best practices.

While it is true that Cisco does not currently offer 16Gb Fibre Channel products, by any measure – deployment options, product choices, port count, market trend, etc. – it would seem Cisco’s “bet” is a sound choice for customers.

There you have it. Ultimately, the number of FCoE ports Cisco sells versus the number of 16Gb ports Brocade sells says very little about whether or not a technology is best for your situation. What matters is that the solution you implement fits the problems you are trying to solve, not that it makes one company’s or the other’s port-count numbers rise.

In all seriousness, though, my objection is that by stating that Cisco made a “wrong bet” with FCoE, The Register was implying that customers would be following suit. Given that I’ve been talking to dozens of companies who have deployed FCoE, at both the access layer and multi-hop, and who are very happy with their deployments, this assertion could not (and should not) go unchallenged.

The reality is very simple: many customers want to capitalize on investments they have already made. They find that, for the most part, they don’t need higher speeds; they need the ability to use the speed they have in a better way. That means more efficiency, more flexibility, and more agility over the long term to meet their bandwidth needs without reinventing the wheel.

That, I think, was the point that was truly missing from The Register article. So, there you go. Fixed that for you. 🙂



  1. “Cisco sold almost 275,000 FCoE ports”

    True statement in every sense. Let’s peel the onion back just a bit.

    If you dig a little deeper into the Dell’Oro report, you’ll read that the 275k+ port count includes ALL ports ‘capable’ of FCoE. What does this mean? Every Nexus 5k switch Cisco sells is being counted as FCoE, even if it’s being used as a top-of-rack 10G switch – which, based on my experience, is about 99% of the deployments out there.

    ‘Creative writing’ all around this topic eh?

  2. Looking at the value of 16Gb FC, it has three applications:

    16Gb host connectivity. May be useful for very high-end systems which today use multiple pairs of 8Gb FC: either large DSS/DW databases, or OLTP systems where the IOPS exceed the capacity of 8Gb FC (though this latter use case is unlikely). However, in DSS/DW two factors are impacting this: local storage (e.g., Hadoop clusters and DSS/DW “appliances” like Greenplum DCA) and scale-out NAS (e.g., Isilon). For the former, lots of local 6Gb SAS drives are the answer. For the latter, the next step is 40Gb Ethernet. There is little use case today for 16Gb host connectivity, and it will likely compete with 40Gb Ethernet-based storage access (NFS and FCoE) in high-bandwidth applications.

    16Gb storage connectivity. There is no 16Gb storage, so there is no point here yet. The only real use case would be either a massive disk-based system (e.g., VMAX) or a pure flash-based system – and by flash it would have to be a Violin Memory-type system, not just SSDs in regular drive trays connected by 3Gb or 6Gb back-end SAS loops. 16Gb native storage is not here yet and will likely first be leveraged by esoteric flash arrays. But those arrays may also drive 40Gb FCoE adoption.

    16Gb ISLs. Sounds good: cut your ISLs and cables in half. But for the DCX, Brocade uses proprietary ICLs, not ISLs, so there really is no use case other than ISLs connecting Brocade 6510 fabric switches to a DCX. And the savings from fewer structured ISL cables would have to be weighed against the more expensive 16Gb infrastructure (switches, transceivers, etc.).

    As you point out, 16Gb FC and 10Gb FCoE solve two different problems. One is about convergence and reducing the number of managed networks; the other is about native storage performance and maintaining separate dedicated networks. That is why all major blade server vendors have an FCoE-based converged network offering. One of those, IBM, has Cisco, QLogic, and Brocade FCoE solutions, and another, Dell, only offers FCoE from Brocade. Outside of blade servers, Mellanox has numerous FCoE products, and even Juniper’s QFabric supports FCoE at the access layer. FCoE is a feature of Ethernet, not an alternative to native Fibre Channel.

    As for all of the hot air about Brocade’s 16Gb success, most of it is likely 16Gb-capable ports populated with lower-cost 8Gb transceivers. This is typical of any new speed bump in FC.

  3. I’m curious, and maybe my math is off. Let’s assume the 16G array exists. Then I rip-and-replace my SAN for 16G. Then, for redundancy, I have two 16G cards in my server… What server could push that? (What server can really push 2x8G?)…

    Or let’s say my array has 32 16G ports on it. That is 512Gb per second… Getting there with spinning media would be hard, so I’m left with SSDs. So I’ve got my new 16G network, that no server can push, with 16Gb links to my array and uber-expensive SSDs that offer performance I don’t need…

    Sounds like a technology that serves little purpose.
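    (For what it’s worth, the commenter’s back-of-envelope math checks out. A quick sanity check, assuming a hypothetical array with 32 front-end ports at a nominal 16 Gbit/s each, with encoding and protocol overhead ignored:)

```python
# Sanity check of the comment's figures: 32 ports x 16 Gbit/s nominal.
# Hypothetical array; encoding and protocol overhead are ignored.
ports = 32
gbit_per_port = 16

total_gbit = ports * gbit_per_port  # aggregate front-end bandwidth, Gbit/s
total_gbyte = total_gbit / 8        # convert bits to bytes, GByte/s

print(total_gbit)   # prints 512
print(total_gbyte)  # prints 64.0
```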

  4. Very well put. I can see, as you stated, that “their enthusiasm far exceeds their sense of perspective”. The Register obviously needs to re-evaluate its journalistic role and create a more open environment for discussing and reviewing the facts regarding networking technologies.

    Thank you for the facts, I hope to use this article for some good in the near future.

    -David Kubica

  5. 16G FC is faster than 10G, it’s true; but it’s like building a railway that supports trains running at 800 km/h when the fastest train travels at 500 km/h. Is this useful and a good investment?

    I would also look to the near future, when 40G and 100G DCB ports will be available. Does it make sense then to talk about 16G or 32G FC?
    The scenario looks like the one from more than 10 years ago. Back then, Ethernet speed was 10Mb/s and token ring was 16Mb/s; Ethernet was transitioning to 100Mb/s while niche vendors in the High Speed TR alliance proposed TR at 32Mb/s. Who was the winner?

    To be successful with FCoE adoption, we have to share with customers that there isn’t an “FC vs. FCoE” contest; FCoE is just a more economical and efficient way to move FC frames.

    What about FCoE vs. NFS? Easy: we support both, and “No Technology Religion” is part of the Cisco culture.