

Cisco Blog > Data Center and Cloud

Another day, Another Skeptic

It seems that there are a lot of concerns about FCoE, and that's understandable since it is a new protocol and the standards are still being finalized. However, there is significant momentum behind it, and I'll try to address some of the points mentioned in this post from Greg Ferro.

1. "There are no standards"

The FCoE standard will be completed this year. Most of the hard work, such as the frame format and addressing schemes, has been completed, and there shouldn't be any roadblocks left.

2. "The year of 10Gb Ethernet won't be until 2010"

The year of 10GbE has already come. 10GbE is already widely deployed in the data center, and I expect to see it deployed to servers this year. Cisco has shipped over one million 10GbE ports, so the market is clearly there and growing rapidly.

3. "For FCoE to be successful, you must buy new switches that support PFC, ETS, and DCBX Data Centre Ethernet extensions"

Yes, FCoE will run better with the Data Center Ethernet enhancements. Those standards are going through the IEEE right now and should be completed this year. Cisco is already shipping a pre-standards version of DCE today with the Nexus 5000, at a price point of less than $1,000 a port. That's market-leading pricing and certainly not a bomb!

4. "You must buy FCoE HBAs for servers, and then wait for the drivers to be certified by all the storage vendors"

FCoE adapters (CNAs) have been announced by two different vendors, and they use the exact same ASIC technology and driver as their current 4G HBAs, so certification should be relatively swift. Also, Intel has announced a software implementation on their current 10GbE NIC, which should lower cost substantially.

5. "Why did Cisco buy Nuova Systems so quickly?"

The Nuova acquisition was not a surprise to anyone. Cisco was a majority owner from the start, and most of the founders were ex-Cisco, having originally worked on Catalyst and MDS development. The timing of the acquisition was due to the fact that the product had completed development and was ready to ship. Which it is now, by the way.

6. "FCoE is a transition technology"

FCoE is more than a transition technology. Of course, everyone would prefer a pure solution, but there is over $50B in Fibre Channel installed base. You don't move that kind of infrastructure overnight. A more evolutionary approach is required.

7. "iSCSI will move into the gaps"

I can't disagree that iSCSI will continue to be popular and gain market share. But most of that growth is coming from new customers who don't already have a Fibre Channel SAN. Very few customers are removing their Fibre Channel infrastructure and replacing it with iSCSI. That is where FCoE comes in: it is an ideal convergence solution for them.

Both iSCSI and FCoE are good for the market. We don't need separate parallel networks that do the same thing at the cost of additional hardware, additional cabling, and additional management.

Convergence is coming to the data center. Whatever your starting point, there is a path for you to get there.
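Since point 1 says the frame-format work is essentially done, here is a rough sketch of how a Fibre Channel frame rides inside an Ethernet frame. The Ethertype (0x8906) is the one registered for FCoE; the exact field widths and the SOF/EOF delimiter codes below follow my reading of the in-progress FC-BB-5 draft and should be treated as illustrative, not normative.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype registered for FCoE

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet/FCoE frame.

    Layout (per the FC-BB-5 draft as I understand it -- illustrative only):
      Ethernet header | version nibble + reserved bits | SOF | FC frame | EOF | reserved
    The sof/eof defaults are placeholder codes, not the normative encodings.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # 4-bit version (0) plus reserved bits, padded to 13 bytes, then a 1-byte SOF
    fcoe_header = bytes(13) + bytes([sof])
    trailer = bytes([eof]) + bytes(3)  # EOF delimiter plus reserved padding
    return eth_header + fcoe_header + fc_frame + trailer

# Example MACs are made up; the FC payload here is just a header-sized stub.
frame = encapsulate_fcoe(b"\x01\x10\x18\x01\x00\x01",
                         b"\x00\x0e\x0c\xaa\xbb\xcc",
                         fc_frame=bytes(36))
assert len(frame) == 14 + 14 + 36 + 4  # Ethernet + FCoE header + FC frame + trailer
```

The point of the exercise: the FC frame is carried untouched, which is why existing FC drivers and management tools can be reused largely as-is.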


8 Comments.


  1. Good evening Deepak,

     I have been researching FCoE from the position of a customer deciding whether to invest in this early-stage technology. The article is a summary of my current concerns regarding FCoE and is not new information. I have previously been cited in this blog (April 16) as an FCoE sceptic with similar points. Subsequently, I have read Silvano Gai's book (as he requested) and attempted to learn more about the technology.

     You make a fine response and good points, but many of your responses point to future possibilities rather than reality. E.g., on question 1, "there shouldn't be any roadblocks left" are fine words, but experience suggests that 'mileage may vary'. I would agree that all evidence points towards a successful outcome for FCoE, so far.

     While I acknowledge the Fibre Channel installed base and its importance, my opinion is to remain cautious about FCoE until these issues are resolved.

     Regards,
     Greg Ferro


  2. Hi there,

     Deepak, thanks for this interesting post. I did, however, want to clear up some misconceptions from NoHype's comment above.

     S/he writes: "what do you expect when Dell is handing out $$ to write posts that help push their positioning?" This is incorrect. While Dell is *sponsoring* The Future of Storage microsite, they have no control over editorial. That is handled entirely by Techdirt editorial staff. Dell is not "handing out $$ to write posts that help push their positioning." Dell is sponsoring a site; Techdirt is handling all of the editorial control *and* the payments.

     The questions being asked of the Insight Community are merely about where people believe the storage market is going, and there are numerous posts on the site that disagree with Dell's positioning. It's incorrect to suggest that Dell has any influence over editorial. There are both pro-FCoE posts and con-FCoE posts. If Cisco puts an ad in the WSJ, does NoHype then assume that all WSJ coverage is paid for? This situation is no different.

     The participants in the Techdirt Insight Community are well known and well respected within the technology community. They are recognizable names with strong reputations for telling the truth. Those are the folks we want in the Techdirt Insight Community, and the reason why companies hire us. We make it clear to any customer that the insight provided by the community will always be what the participants believe strongly in, which is why we get so many perspectives that often involve disagreements among the different participants. In fact, that's where the value of the community often lies: in the areas where they disagree.

     Considering that even Dell's competitors in the space have taken part (and had their insights posted), it should be obvious to anyone that this is not Dell paying people to spout their marketing positioning. If that's what a company wanted the Insight Community to do, we would not take the job.

     Anyway, I appreciate you joining in the conversation, and thanks for allowing me to clear up NoHype's misconceptions.

     Mike Masnick
     CEO, Techdirt Inc.


  3. Deepak Munjal

    Thanks for the clarification that the IEEE standards will take some more time to be finalized. For simple topologies, PFC and jumbo frames will suffice and will not require the full Data Center Ethernet implementation.

    Initially, most customers will deploy FCoE only at the server access layer and still maintain separate SAN and LAN in the core/aggregation layers. When these enhancements are generally available, customers will be able to build larger FCoE networks that deliver a lossless environment end-to-end.


  4. Another day, another skeptic. Of course, what do you expect when Dell is handing out $$ to write posts that help push their positioning? They are specifically looking for FC vs. Ethernet and iSCSI vs. FCoE. I know there are plenty of Dell people who understand the value of giving customers options [upgrade monitor - ADD $199], but the new Techdirt Insight has been more 'dirt' than 'insight'.

     A little clarification: while the T11 FCoE standards should be done by the end of this year, or the beginning of next, the IEEE Ethernet enhancements are going to take longer. This won't stop your first-generation Nexus 5000 from getting some early deployments, but for a full unified environment, there is still a lot of work to do. Note that many Ethernet vendors are starting to release products that at least support PFC to create the lossless environments needed for FCoE.

     NoHype


  5. "2. 'The year of 10Gb Ethernet won't be until 2010' — The year of 10GbE has already come. 10GbE is already widely deployed in the data center and I expect to see it deployed to servers this year. Cisco has shipped over one million 10GbE ports so obviously the market is there and growing rapidly."

     The year of 10GbE servers may not come so fast. I believe most 10GbE links today are switch-to-switch, not switch-to-server. Until 40GbE/100GbE are widely used as switch-to-switch links, there is no reason for servers to jump to 10GbE. From an architectural point of view, an access-layer switch for a server farm should have an uplink:downlink bandwidth ratio of at least 1:2 (better, 1:1), since we can't put all servers on one switch: presentation-layer, business-logic-layer, and DB-layer servers have different L4-L7 and security requirements, and this makes the uplink a bottleneck. If we connect 40 servers with 10GbE CNAs to an access-layer switch, what kind of uplink do we have today?

     Thanks,
     xuping
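The uplink bottleneck described in this comment can be made concrete with a little arithmetic. The switch dimensions below are hypothetical, chosen to match the 40-server example:

```python
def oversubscription(server_ports: int, server_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of downlink to uplink capacity; 1.0 means fully non-blocking."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# 40 servers at 10GbE feeding a switch with four 10GbE uplinks:
ratio = oversubscription(40, 10, 4, 10)
print(ratio)  # 10.0 -> far worse than the 2:1 (or 1:1) target in the comment
```

Hitting even the 2:1 target in this example would take twenty 10GbE uplinks, which is the commenter's point about needing faster switch-to-switch links first.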


  6. Deepak Munjal

    You are correct in stating that most 10GbE links are switch-to-switch and not on the server right now. But that is changing fast as low-cost NICs from Intel (http://www.intel.com/pressroom/archive/releases/20080408comp.htm), priced at $799, become widely available.

    But even then, most servers won't be using that bandwidth all the time, so we don't need 40GbE and 100GbE immediately. Until then, we have several technologies, like EtherChannel and VSS, that allow customers to bundle several 10GbE links and even double the available bandwidth by removing the need for blocked spanning-tree links.
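The bundling argument is easy to quantify. With classic spanning tree, one of two redundant uplinks sits blocked; an EtherChannel bundle (or a VSS pair that eliminates the blocked link) puts both to work. A back-of-the-envelope sketch:

```python
def usable_uplink_gbps(links: int, link_gbps: float, blocked: int = 0) -> float:
    """Bandwidth actually forwarding traffic, after STP-blocked links are removed."""
    return (links - blocked) * link_gbps

# Two 10GbE uplinks with classic spanning tree: one link sits blocked.
stp = usable_uplink_gbps(2, 10, blocked=1)   # 10 Gbps usable
# The same two links bundled, with no blocked member:
bundle = usable_uplink_gbps(2, 10)           # 20 Gbps usable
assert bundle == 2 * stp  # the "double the available bandwidth" claim above
```

The caveat, which the next comment picks up on, is that the same trick works just as well with bundled 1GbE links on the server side.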


  7. "But even then, most servers won't be using that bandwidth all the time, so we don't need 40GbE and 100GbE immediately."

     The same is true of the server-to-switch link: servers don't need 10GbE immediately. Until then, we can use 1GbE NIC teaming for load balancing and high availability. NIC teaming has the same benefit and is simpler than VSS, FlexLink+, REP (Resilient Ethernet Protocol), RBridges/TRILL, and other L2MP approaches. Plus, NIC teaming has been available for quite a long time, whereas VSS was only just introduced and RBridges/TRILL are far away.

     Is there any DCE or FCoE solution on 1G Ethernet?

     Thanks,
     xuping
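For reference, the NIC teaming this comment mentions is a few commands on a Linux host using the kernel bonding driver. The interface names and address below are made-up examples, and 802.3ad (LACP) mode also requires matching configuration on the switch side:

```shell
# Bond two 1GbE NICs for load balancing and failover (example interface names).
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0
```

This supports the commenter's point: the teaming happens entirely on the host, with no VSS-style switch clustering required.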


  8. Deepak Munjal

    I'm sure you'd argue that 1GbE is not enough for some servers that need to burst to higher speeds, especially as virtualization continues to drive more I/O per server. So the real question is: when does 10GbE become more efficient for server interconnect than multiple 1GbE links? At some point, 10GbE will be a more practical solution than multiple 1GbE links, and I think that time is coming very soon.

    As for DCE and FCoE on 1GbE, there's nothing in the standards that precludes them. However, I expect most vendors to offer these features on 10GbE first.
