As the Product Manager for Fibre Channel over Ethernet (FCoE), I often get asked some of the hard questions about how the technology works. Sometimes I get asked the easy questions. Sometimes – like two nights ago – I get asked if the standards for FCoE are done.
I’m not kidding.
My own expectations for discussing FCoE were focused on the topics and conversations we’ve been seeing over the last year, since the last Cisco Live in 2011.
Since that time, Cisco has made tremendous progress in offering numerous FCoE-based solutions across several of our products, from Multihop FCoE on the Nexus 5000, Nexus 7000, and MDS Fibre Channel switches to improvements in the management tools (Data Center Network Manager), expanding the number of options for using the technology.
Who’s Using FCoE?
I confess I was a bit concerned going into this year’s Cisco Live because I recognize that I have a skewed view of the world of convergence. I mean, each week (on average) I give about 3-5 presentations on converged networks, FCoE, multiprotocol storage, etc., to partners and customers. When you do that many for months on end, you tend to think that everyone is as aware of what’s happening as you are.
The other side of the coin, though, could be that I’m not necessarily talking with a representative sample of customers. After all, the questions can often be what I consider “first step in understanding” questions. So maybe, possibly, I was just getting the early adopters in my world, not the bulk of the population.
Boy, was I wrong. I had completely underestimated the desire to learn more about FCoE at Cisco Live.
In my first session – an 8-hour techtorial designed to fire-hose the audience with the most technical details of networking implementations – I asked the audience of about 60 people how many of them were getting pressure from their storage teams to prepare for handling converged storage over their Ethernet networks.
90% of my audience had already implemented or were preparing for converged networks.
There were two reasons why this was a huge deal for me.
1. They were asking the right questions. If you have ever worried about how changes in your network might affect storage, these attendees asked all the right questions about the kinds of unintended consequences there could be.
2. This meant that the “Layer 8” issue was being addressed. People are trying to figure out how to handle the basic trust issues of guaranteeing bandwidth for various traffic types, with the appropriate prioritization. The questions asked throughout the day (not just to me, but to the other speakers) were all spot-on. They wanted to know how multipathing configurations affect storage, and how much bandwidth they need to budget for sudden bursty traffic from, say, streaming or Hadoop-like workloads (a back-of-the-envelope sketch of that kind of budgeting follows below).
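To give a flavor of that math, here is a minimal sketch of how ETS-style (IEEE 802.1Qaz) minimum-bandwidth guarantees play out on a converged link. The class names and percentages are hypothetical, for illustration only, not a recommended configuration:

```python
# Illustrative only: ETS-style minimum-bandwidth math for a 10 Gbps
# converged link. Class names and percentages are hypothetical.
LINK_GBPS = 10.0

# ETS (IEEE 802.1Qaz) guarantees each traffic class a *minimum* share
# under congestion; bandwidth unused by one class can be borrowed by others.
ets_shares = {"fcoe": 0.50, "vmotion": 0.20, "lan": 0.30}

for cls, share in ets_shares.items():
    print(f"{cls:8s} guaranteed floor: {share * LINK_GBPS:.1f} Gbps")

# So a bursty LAN or Hadoop-style flow can spike above 3 Gbps while the
# link is quiet, but FCoE always keeps its 5 Gbps floor under load.
```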
In another session, the speaker asked how many of the 100+ attendees were already using FCoE in their Data Centers. More than 75% of the attendees raised their hands.
Perhaps it wasn’t exactly a scientific sample, but that percentage of people in a room of networking experts still stunned me.
But Does It Count?
Whenever someone says “nobody I know is using FCoE,” I’m a bit surprised. There are more than 13,000 UCS customers who are using FCoE as the foundation of their storage system.
“But that doesn’t count, because it’s only UCS.”
I confess I’ve never understood this argument. There is no difference between the FCoE that runs on a UCS system and the FCoE that runs on, say, anyone else’s servers. There’s no “FCoE-C” version or “FCoE-I” version. It’s standards-based FCoE.
“Nobody’s using FCoE beyond the access layer.”
Let us suppose for the moment that this statement were true (it’s not, but let’s just suppose that it is). The only response I could come up with would be, “So what?”
Ultimately, you want to use the right tool at the right time, in the right place. When you are looking to address access-layer issues, there are several compelling reasons why converged multiprotocol access makes sense, and I’m sure you’ve heard them all: cable reduction, lower power & cooling, better use of underutilized links, etc.
When we look beyond the access layer, we have different challenges. We have issues with maintaining oversubscription ratios. We have issues with obtaining enough bandwidth between chassis.
Let me give you an example. A large Accounts Payable company in the United States was facing issues with oversubscription on its SANs. It was also having major issues getting the space, power, and cooling to handle additional traffic, which made planning for additional growth extremely difficult.
As I mentioned in my 3-for-2 Bandwidth Bonus blog post, the more efficient encoding used by FCoE (64b/66b, versus 8G Fibre Channel’s 8b/10b) helped a great deal without forcing anyone to go “end-to-end” FCoE.
In this case, the servers were all FC, so they were able to connect their FC-based servers to a Unified Port-capable Nexus 5500 switch and then use FCoE ISLs to an MDS 9500, which in turn connected to traditional Fibre Channel storage.
Yes, that’s right: an FC -> FCoE -> FC topology.
Because if they used 8G links, they were really only going to be able to get 6.8 Gbps of actual throughput per link (after FC’s 8b/10b encoding). That meant that even if they maxed out their 16-link port-channeled connections, they’d get less than 110 Gbps of actual inter-switch bandwidth.
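If you want to check my math, here is the arithmetic as a quick sketch. The 8.5 Gbaud line rate and 8b/10b encoding are standard 8G FC parameters, not anything specific to this customer’s design:

```python
# Quick sanity check on the "less than 110 Gbps" figure.
line_rate_gbaud = 8.5        # 8G FC actually signals at 8.5 Gbaud
efficiency = 8 / 10          # 8b/10b encoding: 8 data bits per 10 on the wire
per_link_gbps = line_rate_gbaud * efficiency
aggregate_gbps = 16 * per_link_gbps   # a maxed-out 16-link port channel

print(f"per link:  {per_link_gbps:.1f} Gbps")   # 6.8 Gbps
print(f"aggregate: {aggregate_gbps:.1f} Gbps")  # 108.8 Gbps -- under 110
```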
Even if they had decided to use 16G links, current implementations restrict a port channel to only 8 links per bundle.
However, take those same 16 links as a 10G FCoE ISL, and you get roughly 50% more bandwidth per link; or, per the 3-for-2 bonus, the same bandwidth with a third fewer links.
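The FCoE side of the comparison works out like this (again just a sketch; 64b/66b is the standard 10G Ethernet encoding, which is where the per-link gain comes from):

```python
# The FCoE side of the same arithmetic (standard 10GE parameters).
fc_aggregate = 16 * 6.8                # 108.8 Gbps, from the 8G FC math above
fcoe_per_link = 10.3125 * (64 / 66)    # 64b/66b encoding: 10.0 Gbps usable
fcoe_aggregate = 16 * fcoe_per_link    # 160 Gbps over the same 16 links

print(f"FCoE aggregate: {fcoe_aggregate:.0f} Gbps")
print(f"gain over FC:   {fcoe_aggregate / fc_aggregate - 1:.0%}")  # ~47%
```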
For this company, the big problem was maintaining the oversubscription ratios between switches, not addressing the access layer.
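To make the oversubscription point concrete, here is a hypothetical fan-in calculation. The host counts are invented for illustration and are not taken from the customer’s actual design:

```python
# Hypothetical fan-in (oversubscription) calculation; the host counts
# below are made up for illustration, not from the customer design.
def oversubscription(host_ports: int, host_gbps: float, isl_gbps: float) -> float:
    """Ratio of host-facing bandwidth to inter-switch bandwidth."""
    return (host_ports * host_gbps) / isl_gbps

# 96 hosts at 8G FC (6.8 Gbps effective) behind a 16-link FC port channel:
print(f"{oversubscription(96, 6.8, 108.8):.1f}:1")  # 6.0:1
# The same hosts behind a 16-link 10G FCoE ISL:
print(f"{oversubscription(96, 6.8, 160.0):.1f}:1")  # 4.1:1 -- headroom to grow
```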
Now, does this count as Multihop FCoE? Does it really matter?
The point is that the Data Center has another tool that can be put into the appropriate place in order to solve a particular problem. In this case, the customer was able to maintain oversubscription ratios for future storage growth, regardless of whether it was FC (native) or FCoE traffic. Moreover, by having Unified Ports on the Nexus 5500, they had the flexibility of being able to use any type of storage as required.
From the customer’s perspective, “this positions us for now and in the future. It doesn’t make sense not to move in this direction.”
Unorthodox? Perhaps. Unanticipated? Definitely. But it does speak to the point that FCoE can be a very powerful tool when used to solve well-understood problems inside of the data center.
The Cisco Live Effect
I gave several presentations this week with numerous partners, and talked to dozens of people who wanted to know the hows, whys, and whens of convergence on their networks. Overall, the level of sophistication of the questions has grown enormously, and I’m absolutely convinced that by next year’s Cisco Live many of these partners and customers are going to be pushing the envelope in ways we can only imagine right now.
For now, though, Cisco Live was a fantastic week for me personally. It is hard to describe the feeling of watching all the hard work begin to pay off for customers.