The holy grail of data center networking has been discussed for years, and many have attempted to design a single technology that delivers on all of the networking requirements of data center applications. The design goal is reasonably simple: a single data center transport that can simultaneously carry IP and Fibre Channel traffic over one connection. The problem is, it just isn't that simple. Data center managers expect the transport to include sophisticated management capabilities that accurately depict what is happening within the fabric; the fabric must offer high performance, low latency, and robust security; and it must never -- absolutely never -- drop a single storage frame. Oh, and it must scale to thousands -- if not tens of thousands -- of devices, support 10/100/1000 and 10GE attached servers, support legacy applications, be virtualizable, enable efficient utilization of IT assets, and reduce power and cooling overhead. If it can make coffee, that's a bonus. OK, the last one is a stretch, but you get the point.

Although a number of technologies could potentially address the needs of a Unified Fabric, most require significant development to fully meet the requirements listed above. Take InfiniBand as an example: it has the right performance characteristics, offers IP and Fibre Channel communications over a single interface, and reduces power and cooling overhead by cutting the number of interfaces and fabric connections required to support a server. But it lacks the scaling, embedded services, and management capabilities that data center managers have come to expect from their Ethernet and Fibre Channel infrastructure. It also introduces one significant issue: certification against existing hardware, operating systems, and applications.
This latter point should not be underestimated: even if IP-over-InfiniBand is used, hardware and software driver certification can be time consuming and costly, and it introduces additional complexity. To a certain extent, these factors have limited adoption of the technology to high-performance computing clusters and high-performance systems such as those found on Wall Street.

One technology, however, does offer the promise of delivering a Unified Fabric: Ethernet. Ethernet has proven to be a survivor. It has outlasted ATM, FDDI, 100BaseVG-AnyLAN, and Token Ring. It has scaled from its humble origins of shared 10Mbps to 10Mbps switched, and then on to 100Mbps, 1Gbps, and 10Gbps -- with 40Gbps and 100Gbps promised in the near future -- without changing the frame format. Ethernet management has also evolved to the point where data center managers can extract information about traffic conditions on the network -- down to individual packets if required -- to assist in troubleshooting, performance monitoring, and forensic analysis. These attributes have made Ethernet the de facto standard for the vast majority of networked devices -- even refrigerators and electric guitars are now available with Ethernet connections.

So, where do we go from here? A lot of effort is going into a proposal called FCoE (Fibre Channel over Ethernet). The protocol draft (more at http://www.fcoe.com) describes a method to encapsulate a Fibre Channel frame in a regular Ethernet frame, and it appears to have wide industry support.

Certainly, this seems like an appropriate approach to the unification problem, but it is not the first time the industry has tried to converge storage and data traffic over a single fabric. The iSCSI protocol has been around for some time, with limited success. Adoption does seem to be increasing, but Fibre Channel advocates doubt its ability to deliver the performance and reliability of Fibre Channel SANs.

Are there other methods worth exploring?
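To make the FCoE idea concrete, here is a minimal Python sketch of the encapsulation step: a raw Fibre Channel frame is simply placed inside a standard Ethernet frame whose EtherType is 0x8906, the value assigned to FCoE. The MAC addresses and the dummy FC frame are illustrative; the real FCoE format defined by the T11 committee also carries a version field, reserved bits, and SOF/EOF delimiters, which this sketch omits.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType value assigned to FCoE

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet frame.

    Simplified: the actual FCoE encapsulation (T11 FC-BB-5) adds a
    version field, reserved bits, and SOF/EOF delimiters around the
    FC frame, all omitted here for clarity.
    """
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    # Standard 14-byte Ethernet header: destination, source, EtherType
    header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return header + fc_frame

# Illustrative MACs and a dummy 36-byte FC frame (24-byte FC header + payload)
frame = encapsulate_fc_frame(b"\x0e\xfc\x00\x00\x00\x01",
                             b"\x00\x1b\x21\xaa\xbb\xcc",
                             bytes(36))
assert frame[12:14] == b"\x89\x06"  # FCoE EtherType sits at Ethernet offset 12
```

One practical consequence falls straight out of this layering: a full-size Fibre Channel frame can exceed 2,100 bytes, larger than the standard 1,500-byte Ethernet payload, so FCoE deployments depend on jumbo-frame support in the Ethernet fabric.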
Is it even worth focusing so much effort on solving this problem? As one of my colleagues joked recently, "You can run fresh water and sewage in the same pipe, but why would you want to?" That might be a bit harsh, but convergence of data center network fabrics would most likely lower overall operational costs, much as the convergence of voice and data networks did. For now, it looks like this journey will continue.