Why Ethernet Wins, Reason #82: OpenFCoE
Over the decades, Ethernet has maintained its role as king-of-the-networking-hill for two reasons: 1) a multitude of companies continue to expand the boundaries of Ethernet’s capabilities to keep it relevant, and 2) the economics of Ethernet make it hard for other protocols to grow beyond niche uses.
Today’s Intel announcement of Open FCoE is a prime example of this. Over the past two decades, we have taken Ethernet from a “best effort” protocol to one that offers sufficient reliability to carry storage traffic. However, it is the economic implications of this announcement that make it truly interesting.
With the X520 family of products, Intel now gives folks a simple, easy path to simplifying their data center by converging data and storage traffic onto common infrastructure. While the attendant cost benefits of a unified fabric in terms of both capex (less infrastructure) and opex (power, cooling, operations) are attractive, possibly the more interesting aspect of this announcement is the risk mitigation and design flexibility it offers.
As customers build out their data center and cloud infrastructure with 10GbE, the move to a converged fabric becomes a much simpler and more granular endeavor. Because the OpenFCoE stack is a free upgrade, cost is no longer a factor. Because we can support both FCoE and iSCSI, there are no storage-specific constraints today or down the road. With the appropriate upstream switch (hint, hint), OpenFCoE allows very granular adoption. For example, for a given server, you can keep your SAN A connection on Fibre Channel and move your SAN B connection over to FCoE. At this point, you have eliminated the costs associated with the second HBA, cabling, and upstream FC port. Of course, if you move completely to FCoE, you eliminate all the access layer infrastructure costs associated with SAN access.
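To make the granular-adoption point concrete, here is a rough sketch of what moving a SAN B connection to FCoE looks like on a Linux host using the Open-FCoE userland tools (the fcoe-utils and lldpad packages). This is a simplified illustration, not a definitive runbook: the interface name eth3 is a placeholder for the 10GbE port carrying SAN B traffic, and exact commands and file names can vary by distribution and package version.

```shell
# Enable DCB on the port and advertise FCoE with priority flow control,
# so the NIC and upstream switch negotiate lossless behavior for storage frames.
dcbtool sc eth3 dcb on
dcbtool sc eth3 app:fcoe e:1
dcbtool sc eth3 pfc e:1

# Copy the sample per-port config shipped with fcoe-utils,
# then start the FCoE service to instantiate the virtual FC interface.
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth3
service fcoe start

# Verify the FCoE instance came up and discovered its targets on SAN B.
fcoeadm -i eth3
fcoeadm -t
```

The SAN A path, meanwhile, stays on its existing Fibre Channel HBA untouched, which is exactly the kind of per-connection migration described above.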