
I’ve read Henry Newman’s article on FCoE and vendor stupidity three times now, and I’m afraid it hasn’t gotten any clearer for me.

Given the nature of the title, “FCoE Gets Lost in Vendor Stupidity,” and given the fact that I work with FCoE on a daily basis for Cisco, how can I help but raise an eyebrow at being called “stupid”?

Okay, okay, so he’s not calling me stupid. He’s talking about the nature of the industry as a whole (I think), and he’s talking about what could happen with FCoE adoption if it’s not handled properly (I think), and he’s comparing the lack of object storage as a metaphor for a lack of FCoE storage (again, I think).

This is not to say that Mr. Newman’s numbers aren’t interesting -- they are -- but I just can’t help but wonder how he comes to his conclusion about FCoE given that the entire article discusses iSCSI.

In fact, half of the article is spent lamenting published benchmark tests (something that I agree with, by the way) and goes into fascinating detail about the IPv6 overhead implications of iSCSI (something I also think is valid, by the way).
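
For a rough sense of the scale involved, here is a back-of-the-envelope sketch (assuming a standard 1500-byte Ethernet MTU and minimum IP and TCP headers; the iSCSI PDU headers are the same in both cases, so they are left out):

    # Back-of-the-envelope: fixed per-packet header overhead for iSCSI
    # carried over IPv4 vs. IPv6 on a standard 1500-byte Ethernet MTU.
    ETH_HEADER = 14   # Ethernet II header (no VLAN tag)
    ETH_FCS = 4       # Ethernet frame check sequence
    IPV4_HEADER = 20  # minimum IPv4 header, no options
    IPV6_HEADER = 40  # fixed IPv6 header
    TCP_HEADER = 20   # minimum TCP header, no options
    MTU = 1500        # IP packet bytes per Ethernet frame

    for name, ip_hdr in (("IPv4", IPV4_HEADER), ("IPv6", IPV6_HEADER)):
        payload = MTU - ip_hdr - TCP_HEADER         # TCP payload per frame
        wire = ETH_HEADER + MTU + ETH_FCS           # bytes on the wire
        overhead = 100.0 * (wire - payload) / wire  # headers as % of wire bytes
        print(f"{name}: {payload} payload bytes/frame, {overhead:.1f}% overhead")

    # IPv4: 1460 payload bytes/frame, 3.8% overhead
    # IPv6: 1440 payload bytes/frame, 5.1% overhead

Twenty extra bytes per packet sounds trivial, but at 10GbE line rates it is multiplied across hundreds of thousands of frames per second, which is why it shows up in benchmark numbers.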

How the article comes to be entitled “FCoE Gets Lost in Vendor Stupidity,” however, is lost on me. After all, Mr. Newman admits that FCoE is still a relatively new technology, and iSCSI “got off to a very slow start.” He also concedes that vendors have had much longer timeframes within which to optimize their iSCSI deployments than they have had with FCoE.

He seems to point to the absence of Object Storage as evidence for his FCoE argument, but even if he's correct that Object Storage hasn't arrived, the same is not true of FCoE storage. Listed alphabetically, Compellent, EMC, and NetApp have all announced FCoE-based storage devices and support. HP even offers FCoE access to its EVA and XP storage through the mpx200 protocol router.

Interoperability is also a main concern of Mr. Newman’s, and rightfully so. Having participated in the FCIA FCoE Plugfests for many years, I can say from personal experience that there is a great deal of effort and work put towards FCoE interoperability. After all, it is in the industry’s best interest to make sure that customers can expect their equipment to work together.

Finally, there seems to be a lamentation that FCoE didn’t “replace” FC storage. This comment confuses me, because it seems that there is an expectation that 1) FCoE is a panacea for the ills of the Data Center (it’s not), and/or 2) FCoE is something different than FC (it’s not). After all, converging networks so that both LAN paradigms and SAN paradigms are facilitated isn’t something that can be or should be rushed. We’re talking about making sure that we can leverage existing deployments while future-proofing the Data Center at the same time.

Trivial? I think not.

If there seems to be some sort of grievance that FCoE has not dominated the marketplace in the short span of time it has been available, well, there’s little that I can do about tempering a mad rush to worldwide dominance. :)

But if it comes down to ensuring success in both LAN and SAN deployment scenarios, it seems to me that at least this vendor is striving for the intelligent way of deploying FCoE. By maintaining both LAN and SAN requirements at each stage (tier, or layer) of the Data Center, we are giving customers greater choice without forcing them to choose.

As of this writing, true Multihop FCoE has only been available since late 2010, and Director Class Multihop FCoE for less than a month. It seems to me that discussing FCoE adoption in terms of “dramatic changes” in the data center in that time frame might be just a wee bit premature.


5 Comments.


  1. J,

    I really question worrying about IPv6 overhead on iSCSI. I can't see why anyone would run iSCSI over v6 for the next several years, especially in the flat, un-routed nets where FCoE could run. Yes, eventually iSCSI routed over the public Internet will have to be v6, but iSCSI routed over the public Internet? WTF?


    • Hey Howard,

      Glad you picked up on that. I wouldn't say that I'm “worried” about it, but the latency overhead of iSCSI does happen to be a concern for some of our customers. I think that IPv6 configurations (in which, I confess, I am no expert!) on the Ethernet design side might have storage implications that I haven't thought of yet. I just think it's worth taking a look at the possibility of unintended consequences so that I don't get blindsided. :)

      To be honest, with the government regulations that are creeping into IT, I think it’s a good idea to be able to get a good handle on what IPv6 means, even if currently people are not considering doing iSCSI storage over those kinds of distances.
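
      Just to put numbers on the distance point, here's a quick sketch of propagation delay alone (assuming roughly 200,000 km/s in fiber, about 5 microseconds per kilometer one way, and ignoring every switch, router, and target in the path):

          # Quick sketch: propagation delay alone for iSCSI routed over distance.
          # Assumes ~200,000 km/s in fiber (about 5 us/km one way); device and
          # protocol latency are ignored, so real numbers would only be worse.
          US_PER_KM_ONE_WAY = 5  # microseconds per kilometer in fiber (approx.)

          for km in (1, 100, 1000, 5000):
              rtt_us = 2 * km * US_PER_KM_ONE_WAY  # block I/O needs a round trip
              print(f"{km:>5} km: ~{rtt_us / 1000:.2f} ms round trip minimum")

          #     1 km: ~0.01 ms round trip minimum
          #   100 km: ~1.00 ms round trip minimum
          #  1000 km: ~10.00 ms round trip minimum
          #  5000 km: ~50.00 ms round trip minimum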

      J


  2. I really do not understand the constant comparisons between FCoE and iSCSI. FCoE is a method of accessing existing FC infrastructure via a converged, enhanced Ethernet server access network. Everything about FCoE is about accessing existing, native FC networks with enhanced Ethernet networks. Even multi-hop FCoE and FCoE line cards in FC SAN core directors are about accessing existing, native FC networks with enhanced Ethernet. FCoE leverages existing FC SAN naming services and existing FC multipathing software. I would add that it works both ways, in the sense that a new, FCoE-capable storage array can access existing, native FC networks for SAN extension, tape access, etc.
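
    Since FCoE is literally an unmodified FC frame inside an Ethernet frame, the encapsulation arithmetic is easy to sketch. The byte counts below follow the FC-BB-5 framing; this is an illustration, not a spec excerpt.

        # Sketch: why FCoE needs "baby jumbo" frames. All sizes in bytes.
        ETH_HDR = 14           # Ethernet II header
        DOT1Q = 4              # 802.1Q tag (FCoE traffic is priority-tagged)
        FCOE_HDR = 14          # FCoE header, including the encapsulated SOF
        FC_HDR = 24            # native FC frame header
        FC_MAX_PAYLOAD = 2112  # maximum FC data field
        FC_CRC = 4             # FC frame CRC
        EOF_PAD = 4            # encapsulated EOF plus padding
        ETH_FCS = 4            # Ethernet frame check sequence

        max_fcoe_frame = (ETH_HDR + DOT1Q + FCOE_HDR + FC_HDR
                          + FC_MAX_PAYLOAD + FC_CRC + EOF_PAD + ETH_FCS)
        print(max_fcoe_frame)  # 2180 -- well past the classical 1518-byte limit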

    FCoE does not require the purchase of new FCoE storage systems.

    iSCSI is about accessing iSCSI storage (not native FC storage) with existing classical Ethernet (not enhanced Ethernet) networks. Furthermore, the best practice for GigE iSCSI has been to deploy dedicated, isolated, dual, separated GigE networks for storage traffic (many a Catalyst 3750 has been sold for this; ask any former EqualLogic sales person). iSCSI naming services and iSCSI-compatible multipathing software must also be included. In other words, iSCSI is not about a converged server access network. And 10 GigE does not “fix” this unless it is DCB Ethernet with PFC and ETS (see the sketch after this comparison).

    iSCSI requires the purchase of iSCSI storage systems.

    FCoE and iSCSI are two different solutions to two different problems. FCoE allows broader and more flexible access to existing FC infrastructure. iSCSI is an FC infrastructure alternative.
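
    To make the PFC/ETS point concrete, here is a minimal sketch of ETS-style bandwidth allocation on a converged 10 GigE link, in the spirit of IEEE 802.1Qaz. The group names and percentages are made up for illustration, and the redistribution pass is simplified; real schedulers redistribute unused bandwidth in proportion to the configured weights.

        # Minimal sketch of ETS-style (IEEE 802.1Qaz) bandwidth allocation on a
        # converged 10 GigE link. Group names and shares are illustrative only.
        LINK_GBPS = 10.0
        groups = {"fcoe": 0.50, "lan": 0.30, "other": 0.20}  # guaranteed shares
        demand = {"fcoe": 3.0, "lan": 6.0, "other": 1.0}     # offered load, Gbps

        # Each group is guaranteed its share of the link; bandwidth a group does
        # not use is handed to groups that still have unsatisfied demand (a
        # simplified, order-dependent pass -- real schedulers go by weight).
        alloc = {g: min(demand[g], share * LINK_GBPS) for g, share in groups.items()}
        spare = LINK_GBPS - sum(alloc.values())
        for g in groups:
            extra = min(spare, demand[g] - alloc[g])
            alloc[g] += extra
            spare -= extra

        print(alloc)  # {'fcoe': 3.0, 'lan': 6.0, 'other': 1.0}

    The point: the no-drop storage class keeps its guaranteed floor, yet a LAN burst can still use everything the storage class isn't using. With plain Ethernet, a LAN burst can simply crowd out storage traffic and cause drops; with a global PAUSE, the LAN traffic gets stopped along with everything else. PFC pauses only the no-drop class, and ETS preserves its bandwidth floor.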

    The second asinine assumption is that FCoE either requires FCoE storage, or is “not ready” until FCoE storage is available, and that FCoE can only be, or is best, deployed end to end with no native FC. This philosophy again comes from the iSCSI experience. A pure FCoE solution actually has much less value to a customer than a pure FC SAN with FCoE access. With a pure FCoE environment, I lose FCIP, I lose FC tape, and I lose FC-based SAN services (EMC RecoverPoint, IBM SVC, etc.).

    The third assumption is that the only way to deploy FCoE is to rip and replace all existing FC access (HBAs and FC access ports) with FCoE. The idea of purchasing FCoE CNAs only on new servers, and deploying FCoE incrementally with new server rollouts, never crossed the minds of these people, even though that is exactly how incremental speed bumps in FC (e.g., the 4Gb FC to 8Gb FC transition) are deployed.

    The last idiotic assumption is that because Cisco UCS uses FCoE, Cisco UCS is “not ready” until FCoE is “ready,” or that until widely available FCoE management tools exist (whatever those are), Cisco UCS will be hard to manage. This is a profoundly ignorant statement. FCoE is merely a backplane protocol on UCS, and these comments would be the equivalent of saying the same thing about 10GBASE-KR. But I have never heard these “IT Experts” say: “Until Ethernet management tools are available for 10GBASE-KR, customers should not use Cisco UCS, HP C-Class, or Dell M1000.”

    Abraham Lincoln said: “Better to remain silent and be thought a fool than to speak out and remove all doubt.” He must have been thinking about the coming industry press and analyst commentary on FCoE when he said this.


    • If you run FCoE on a Nexus switch, you need to purchase an additional license for the feature, and it's not cheap, because you would need at least two Nexus switches -- that is, two sets of additional licenses -- plus more SmartNet support cost.

      UCS: I love the concept and tried to sell the idea to our server folks, but the price tag is the killer. I agree with Cisco that if I were building a brand new DC from ground zero, UCS might save us money. But how many times in my whole career will I get the chance to build a whole new DC, or even migrate one? Most of the time we just take baby steps (you cannot expect an enterprise customer to flip the whole DC from one product to another in one shot). So we would love to have UCS, but the initial one-time investment is just not practical for us (the FCoE license on Nexus, the UCS interconnects, etc.), which forces us back to our comfort zone, unfortunately.

      I am really hoping Cisco will acquire a storage company to turn this around, and forget about that consumer video stuff, which is just not Cisco's strength.


  3. @Mark
    Now, this is a great post.

