
A Unified Fabric for the Data Center?

The holy grail of data center networking has been discussed for a number of years, and many have attempted to design a single technology to deliver on all of the networking requirements of data center applications. The design goal is reasonably simple: a single data center transport that can simultaneously carry IP and Fibre Channel traffic over a single connection. The problem is, it just isn’t that simple. Data center managers expect the transport to include sophisticated management capabilities that give an accurate depiction of what is happening within the fabric; the fabric must offer high performance, low latency, and robust security; and it must never -- absolutely ever -- drop a single storage frame. Oh, and it must be able to scale to thousands -- if not tens of thousands -- of devices, support 10/100/1000 and 10GE attached servers, support legacy applications, be virtualizable, enable efficient utilization of IT assets, and reduce power and cooling overhead. If it can make coffee, that’s a bonus. OK, the last one is a stretch, but you get the point.

Although a number of technologies exist that could potentially address the needs of a Unified Fabric, most require significant development to fully meet the requirements listed above. Take InfiniBand as an example: although it has the right performance characteristics, offers IP and Fibre Channel communications over a single interface, and reduces power and cooling overhead by reducing the number of interfaces and fabric connections required to support a server, it lacks the scaling, embedded services, and management capabilities that data center managers have come to expect from their Ethernet and Fibre Channel infrastructure. It also introduces one significant issue: certification against existing hardware, operating systems, and applications. This point should not be underestimated, because even if IP-over-InfiniBand is used, hardware and software driver certification can be time-consuming, costly, and a source of additional complexity. To a certain extent these factors have limited adoption of the technology to high-performance computing clusters and high-performance systems such as those found on Wall Street.

One technology, however, does offer the promise of delivering a Unified Fabric: Ethernet. Ethernet has proven to be a survivor. It has outlasted ATM, FDDI, 100BaseVG-AnyLAN, and Token Ring. It has scaled from its humble origins of shared 10Mbps to switched 10Mbps, and then on to 100Mbps, 1Gbps, and 10Gbps -- with 40Gbps and 100Gbps promised in the near future -- without changing the frame format. Ethernet management has also evolved such that DC managers can extract information regarding traffic conditions on the network -- down to individual packets if required -- to assist in troubleshooting, performance monitoring, and forensic analysis. These attributes have made Ethernet the de facto standard for the vast majority of networked devices -- refrigerators and electric guitars are now available with Ethernet connections.

So, where do we go from here? There is a lot of effort going into a proposal called FCoE (Fibre Channel over Ethernet). The protocol draft (more at http://www.fcoe.com) describes a method to encapsulate a Fibre Channel frame in a regular Ethernet frame (a rough sketch of the idea appears at the end of this post), and it looks like it has wide industry support.

Certainly, this seems like an appropriate approach to the unification problem, but it is not the first time the industry has tried to converge storage and data traffic onto a single fabric. The iSCSI protocol has been around for some time with limited success; adoption does seem to be increasing, but Fibre Channel advocates doubt its ability to deliver the performance and reliability of Fibre Channel SANs.

Are there other methods worth exploring? Is it even worth focusing so much effort on solving this problem? As one of my colleagues joked recently, “You can run fresh water and sewage in the same pipe, but why would you want to?” That might be a bit harsh, but convergence of data center network fabrics would most likely lead to lower overall operational costs, similar to the benefits achieved with the convergence of voice and data networks. For now, it looks like this journey will continue.
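
To make the encapsulation idea concrete, here is a minimal sketch: a complete Fibre Channel frame rides as the payload of a standard Ethernet II frame identified by a dedicated Ethertype. The helper name, the MAC addresses, and the simplified layout are illustrative assumptions, not the draft’s actual format (the real encapsulation also carries a version field and SOF/EOF delimiters); 0x8906 is the Ethertype value registered for FCoE.

    import struct

    FCOE_ETHERTYPE = 0x8906  # Ethertype registered for FCoE; illustrative here

    def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        """Carry a complete Fibre Channel frame (FC header + payload + CRC)
        as the payload of a standard Ethernet II frame.

        Deliberately simplified: version field, SOF/EOF delimiters, and
        padding to Ethernet's minimum frame size are omitted.
        """
        if not (len(dst_mac) == len(src_mac) == 6):
            raise ValueError("MAC addresses must be 6 bytes")
        eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
        return eth_header + fc_frame

    # Hypothetical use: a dummy 36-byte FC frame between two adapters
    wire_frame = encapsulate_fc_frame(
        dst_mac=bytes.fromhex("0efc00000001"),
        src_mac=bytes.fromhex("0efc00000002"),
        fc_frame=bytes(36),
    )
    assert len(wire_frame) == 14 + 36  # Ethernet II header is 14 bytes

The appeal follows directly from this shape: because the FC frame crosses the wire untouched, existing Fibre Channel semantics and tooling can be preserved, but the Ethernet underneath must then be engineered never to drop storage frames.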


2 Comments.


  1. Omar Sultan

    I think there is little doubt that, in the long term, folding storage transport into the existing Ethernet/IP network infrastructure will offer a number of advantages in terms of cost control, simplification of infrastructure and operations, and overall flexibility. The question that will drive blog traffic for some time to come is what is the best way to get there.

    Having lived through a couple of these network/service consolidations (SNA->IP and voice->VoIP), it seems that three things need to happen before someone can be declared a “winner”.

    First, we need a certain amount of maturity with the technology -- do the protocols and hardware have sufficient capability and functionality to serve as a credible replacement? In this discussion, FCoE is newly hatched, so it is certainly not there yet. iSCSI is certainly further along, but it is arguable whether it is mature enough to serve as a wholesale replacement for FC in the enterprise.

    Second, we need to build operational expertise around the consolidated network. As with voice and mainframe traffic, consolidating storage traffic is not a “plug-it-in-and-turn-it-up” proposition. Customers will need to build expertise on how to layer storage traffic into their LAN infrastructure while still maintaining the necessary operational characteristics, and that will take time. At some point, it will become clear which protocol is easier to implement and manage in real-world scenarios. I think this is also where we will begin to separate the men from the boys in terms of vendors who can implement a spec versus those who can implement a workable solution.

    Finally, we need a compelling reason to move. I think this is probably the most difficult hurdle to clear. There is no shortage of predictions of the early demise of FC, but, to be honest, in talking to customers, I don’t see it. Most customers I talk to are quite happy with how their FC is working (notice I said “working”, not “costs”). Most folks also have a significant investment in FC. Add to that that most storage folks I have met tend to be conservative by nature (they live in a “lossless” world, after all), and I think there is some significant inertia there. I mean, if I am a storage administrator with a stable, well-functioning SAN, why am I going to move -- what problem are you solving for me? Instead of investing in revamping my storage infrastructure, why should I not take the same dollars and bolster my existing investment? I think, on this last topic, FCoE might have an edge, since, conceivably, it can give customers a more granular migration path.

    As I noted earlier, I think we are still early in the game, and I am not sure there even needs to be “one winner” at the end of the day: iSCSI, InfiniBand, and eventually FCoE may all find their sweet spot in the continuum of customer needs and happily coexist…could happen.

    iSCSI certainly seems to be poised to take off with the greater accessibility of 10GbE and greater credibility in general across the board (note Microsoft’s acquisition of String Bean last year). However, I am not convinced by Marc’s comment about the upcoming increases in bandwidth improving iSCSI’s capabilities. To me, that’s akin to the argument that, given sufficient bandwidth, you don’t need QoS. That may be theoretically true, but I have yet to meet a customer that attains that state in the real world.
    I guess I would have to ask: if iSCSI is so darn cool, why is FCoE getting any traction? If you can get Cisco and Brocade to collaborate on something, there has to be something there. :) I think it is still too early to tell who will win. However, since the discussion is not really iSCSI vs. FCoE but (iSCSI OR FCoE) vs. FC, I think FCoE might get an edge, since it should offer a more granular migration path and be more familiar to the storage folks who are actually going to be buying this stuff.

    Anyway, since things are far from settled, here are some different perspectives on the topic:

    Marc Farley’s Blog: http://www.equallogic.com/blog/2007/04/fcoe_run_away_its_the_monster.html
    Chuck Hollis’ Blog: http://chucksblog.typepad.com/chucks_blog/2007/04/why_fcoe_works_.html
    Howard Marks’ column: http://www.networkcomputing.com/showArticle.jhtml?articleID=199700581


  2. “Unified” would be Ethernet with TCP/IP and iSCSI. The fencing capabilities in Ethernet/TCP/IP networks, using virtual networking and authentication, are already far better than anything Fibre Channel has. The statement that iSCSI can’t deliver excellent performance levels is simply FUD, spread by vendors that have a vested interest in maintaining higher margins on FC products. With the upcoming wire-speed increases you mention, iSCSI’s capabilities will also increase by leaps and bounds. Besides, the value of a unified network technology is not the ability to turn your network into a mosh pit, but to manage the entire infrastructure with a uniform set of tools that drives down the cost of ownership. FCoE still depends on having dissimilar technologies at the end points. And why would you want to manage two different address spaces, and all the confusion that can cause, if you don’t have to?
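
    For concreteness, the authentication this comment alludes to is typically iSCSI’s use of CHAP (RFC 1994, carried in the iSCSI login per RFC 3720). Below is a minimal sketch of the response computation only; the function name and the example secret/challenge values are invented for illustration.

        import hashlib
        import os

        def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
            """CHAP response as used in iSCSI login authentication (RFC 1994):
            an MD5 digest over the one-byte identifier, the shared secret,
            and the target's random challenge."""
            return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

        # Hypothetical exchange: the target issues a challenge, the initiator answers.
        challenge = os.urandom(16)            # sent in the clear by the target
        secret = b"example-shared-secret"     # provisioned out-of-band on both sides
        response = chap_response(0x01, secret, challenge)

        # The target computes the same digest and compares. The shared secret
        # never crosses the wire, which is the fencing property cited above.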
