Cisco Blogs

Multiprotocol Storage

Spring has come again for the storage industry, bringing with it new options in storage and server hardware and in the networks that connect them. The rise of solid-state drives (SSDs), increasing connectivity speeds for Ethernet and Fibre Channel, and a new awareness of the importance of storage to virtualization mean that storage is experiencing growth and change. For years, Fibre Channel (FC) was the standby for storage networking. Today it still is, but there are more options: Fibre Channel over Ethernet (FCoE), iSCSI, and the traditional file protocols of SMB and NFS are all viable enterprise-grade options to consider.

It's important to understand that despite the sudden array of choices in the storage networking market, it is not necessary to simply pick a proverbial winner and run with it.  Every business has its own business needs and IT design goals for the data center and the storage environment contained therein.  Most large data centers today are primarily Fibre Channel environments, with a heavy investment in FC switching and FC-based storage arrays.  The principles of consolidation and network simplification would suggest that these large data centers should be converting to FCoE for the reductions in management overhead, cabling, and capital costs.  But the reality is far from that easy.

With a large investment in FC, companies simply cannot rip out the storage network and replace it with FCoE. Setting aside the huge disruption that would cause to operations, the waste of the existing investment in recently purchased FC equipment simply isn't bearable.  Then there are technical challenges: older equipment such as mainframes that require FICON connectivity, and the testing process that has to happen whenever a new technology is introduced into a data center environment.

The solution, whether it is moving from FC to FCoE, augmenting FC with new equipment, or moving to iSCSI or a file protocol, is carefully planned evolution.  At Cisco we design our products with the idea of planned change at their heart.  This is true of our storage networking products, the Cisco MDS and Cisco Nexus families.  These products allow customers to choose the protocol and network they need to meet their needs, including investment protection.  In the case of our FC-to-FCoE example above, FCoE can be phased in with a combination of MDS and Nexus products while still allowing the FC network to run with the same level of reliability that customers expect.
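As a rough sketch of what phasing FCoE into an existing FC fabric can look like, the following NX-OS-style configuration maps a dedicated FCoE VLAN to a VSAN and binds a virtual Fibre Channel (vFC) interface to a converged Ethernet port on a Nexus switch. The VLAN, VSAN, and interface numbers are illustrative, and exact syntax varies by platform and software release; consult the configuration guide for your hardware before applying anything like this.

```
! Enable the FCoE feature (illustrative Nexus 5000-style example)
feature fcoe

! Create the VSAN that matches the existing FC fabric
vsan database
  vsan 100

! Map a dedicated Ethernet VLAN to that VSAN for FCoE traffic
vlan 100
  fcoe vsan 100

! Virtual Fibre Channel interface bound to a converged Ethernet port
interface vfc 1
  bind interface ethernet 1/1
  no shutdown

! Place the vFC interface into the VSAN
vsan database
  vsan 100 interface vfc 1

! Trunk the FCoE VLAN on the underlying Ethernet interface
interface ethernet 1/1
  switchport mode trunk
  switchport trunk allowed vlan 100
  no shutdown
```

With a setup along these lines, converged network adapters in the servers see a normal FC fabric over the Ethernet link, while the existing FC-attached arrays and switches continue to operate unchanged.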

Variety for storage networking is here to stay and application of one or more of these technologies can improve your data center by cutting down on costs and complexity. Keep them in mind as you plan for the future.



  1. Hi Charles!

    Sorry it took me a day to get back to you, I was booked full yesterday. Thanks for taking the time to write back, love any feedback.

    The comments you heard regarding storage administrators' reluctance to try FCoE are at least somewhat inaccurate. Reluctance to change is the very nature of storage because of its importance. You could level that particular charge at iSCSI or nearly any other technology that affects storage. Storage admins are conservative, and with good reason. That's why change is so slow (comparatively) in the storage market.

    As far as the limitations of FCoE go, this is largely FUD. The 1-hop limitation, latency, and lack of standards are charges that were valid against FCoE a few years ago but are not valid now.

    Here are a few resources on multihop FCoE, by our own J Metz.

    As for standards, FCoE is part of the T11 FC-BB-5 standard. FCoE is a standards-based technology.

    The concern about latency guarantees is handled by Data Center Bridging (DCB), the data center Ethernet technology that ensures lossless, in-order packet delivery and makes FCoE possible in the first place.

    There is a lot of FUD out there concerning FCoE, particularly from vendors that don't have the FC part of the offering. HP OEMs Cisco and Brocade FC equipment, so their commitment to FCoE is understandably lower.

    As to iSCSI, I'm a big proponent of iSCSI and have been for years. That being said, despite years of proven field testing it has never caught on in the large enterprise and has been relegated to the small and medium business market. I think that is because Fibre Channel has always simply been faster. With 10, 40, and 100 Gigabit Ethernet, it will be interesting to see how iSCSI shakes out in the market.

    I hope that answers your questions. Take care!

  2. A great article. I went to an HP storage seminar recently and talked to experts about their opinion regarding FCoE and whether they had any large deployments. They say FCoE is currently limited and companies are not so eager to try this technology, since storage is an essential part of the infrastructure. Limitations of FCoE according to them are:
    1-hop limitation
    Latency guarantees

    They recommend using iSCSI.
    What is your opinion on this?