
FCoE not just for sale, Cisco Nexus FCoE also in production

I like it when I get to write blogs like this one. Over the past six months we have heard a variety of opinions, some positive, some questioning, some negative, on the viability of Fibre Channel over Ethernet (FCoE). I am pleased to report that for the past month or so, the website that hosts all of our press releases and activity, known to us as News@Cisco, has been running on FCoE, in production.

I am going to invite our IT engineering team to hop on here and comment on what they did, how it worked, what their experiences were, and the results, both to keep this from being all about the product and, hopefully, to answer any questions you may have about it. If there are specific things you would like to know about our implementation, feel free to post them in the comments as well so we can be sure to answer your questions.

Our IT team is one that prides itself on being cutting edge: from being the world's largest user of VMware-based virtualization products for a period of years, to being early with VSANs and storage virtualization, to running one of the earliest and highest-volume online e-commerce sites in the world. Our IT team partners with the business, delivers consistently high service levels, and sets a high bar, so thank you for this one! It is great that not only is FCoE in production, but that you have it as the baseline foundation infrastructure for one of our most visible applications!

dg


3 Comments.


  1. Thanks for the kudos, Doug. The modern enterprise data center is the lifeblood of any organization today; we are constantly thinking about how to introduce ways of driving more productivity throughout the company without having to panic about how we will enable that physically.

     A number of years ago, as our (formerly separate) storage team was in the midst of converting our direct-attached storage to a virtualized SAN environment, I began to question the sheer amount of separate fiber and switching infrastructure that we were having to deploy to connect to SAN islands and, eventually, to link them together. If we could run real-time traffic like voice and video on this network, why shouldn't we converge the communications fabric in the data center as well? I certainly wasn't the only one looking over fiber-utilization statistics, dense patch-cable fields, overly complex organizational interactions between server, storage, and network teams, and racks upon racks of infrastructure while pondering that question. Other colleagues of mine in the industry expressed similar questions or even frustration, and I'm happy to see that this collective questioning has brought about the next generation of data center technology!

     As we did in the move to telephony, one of the changes we made to address the future of data center services was around our organizational model. To realize this, we initially pulled together our architects for networking, data center, and unified communications services into a single team to paint a common vision for the data center infrastructure. As a follow-on, we then brought together those responsible for engineering, implementing, and operating the data center environment across servers, storage, and networking. We broke down many of the barriers and reduced the time wasted in deploying solutions.

     We have now implemented our next technical step in the integrated and virtualized data center by deploying FCoE. I'm very proud of this accomplishment and I believe it opens the door to the next generation of IT; there's much more to come!

     /cah


  2. Well, we have been in production with FCoE for a month and it has gone well. We have Nexus 5020s at the access layer and Nexus 7010s at the distribution layer; the services layer is provided by the Catalyst 6500, ASA, and ACE. (A representative access-layer FCoE configuration sketch follows the comments below.) Sure, there are a few items/features that still need to be addressed, but we see enough benefits to design our next data centers around the Nexus family.

     One of the main benefits is that I can allocate more of the data center's power to servers, and compute is a data center's raison d'être. We estimate that we will have over 30% more power available to servers compared with our existing Catalyst 6500-based data center design. I recently spoke with two customers who were looking at spending $1B on data centers over the next 5 years; a 30% increase in 'efficiency' is a compelling story and a key differentiator. There is some marketing material being produced around our case study, and I'll post the URL when it is available.

     We have a 3-year roadmap that maps out the evolution of our data centers. The 'plan on a page' was shown in some recent seminar sessions and used at a few customer briefings. We are finalizing some Cisco-internal discussions this month, and I'll post that URL when it is available.

     m


  3. Would it be possible to use FCoE to connect two FC SAN islands instead of using FCIP? Has anybody done that with the Nexus 5000?

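For readers curious about what the access-layer piece of a deployment like the one described in comment 2 looks like, here is a minimal configuration sketch for FCoE on a Nexus 5000-series switch running NX-OS. This is not the News@Cisco configuration; the VLAN/VSAN number (100), the server-facing interface (Ethernet 1/1), and the vfc number are illustrative assumptions, and a real deployment would also involve priority-flow-control/QoS tuning and zoning that is not shown here.

    ! Illustrative sketch only; not the actual News@Cisco configuration
    feature fcoe

    ! Map an Ethernet VLAN to the VSAN that will carry the Fibre Channel traffic
    vlan 100
      fcoe vsan 100
    vsan database
      vsan 100

    ! 10GbE port facing the server's converged network adapter (CNA)
    interface Ethernet1/1
      switchport mode trunk
      switchport trunk allowed vlan 1,100
      spanning-tree port type edge trunk

    ! Virtual Fibre Channel interface bound to that Ethernet port
    interface vfc1
      bind interface Ethernet1/1
      no shutdown

    ! Place the vfc interface into the VSAN
    vsan database
      vsan 100 interface vfc1

In a typical Nexus 5020 design of that era, the switch's native Fibre Channel expansion ports would uplink into the existing SAN fabric, while the 10GbE uplinks carried LAN traffic toward the Nexus 7010 distribution layer, so the FCoE hop is confined to the access layer.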