
In past articles we’ve talked about distance extension for SANs, focusing first on building the physical elements that are required, and then moving on to how Fibre Channel can be extended using buffer credits.

In this article we’re going to talk about how best to think about extending Fibre Channel over Ethernet (FCoE) SANs (finally!).

I know, I know, I start off this whole shebang by saying I’m going to talk about FCoE and distance, and it takes me this long to get to it? Sheesh!

The FCoE story

In the first article we did a brief overview and mentioned that FCoE uses a receiver-based mechanism for flow control. Let’s revisit the basics for a moment before we start looking at how we can move beyond them.

In FCoE, the mechanism for keeping the link lossless is called Priority Flow Control (PFC), instead of buffer-to-buffer credits.

(Yes, yes! I hear you say. But believe it or not, despite the fact that I say this over and over again I still get questions about how many buffer credits FCoE line cards have!)

So if we look at the picture from the overview once more:

FCoE lossless bucket

We can imagine that the receiving switch has a certain “bucket” of memory in which it can hold incoming frames. It’s not a perfect metaphor, of course, because the bucket doesn’t actually hold the frames indefinitely; it passes them onwards to another destination.

So, if we suppose that outgoing frames are leaving more slowly than incoming frames are arriving, the bucket of memory will start to fill up. The switch (or, more specifically, the ASIC on the port) needs to have enough memory to handle the frames it has already received and the frames that are still in the pipe.
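To make this concrete, here is a rough back-of-the-envelope sketch in Python. The numbers are purely illustrative (they are not taken from any datasheet); the point is simply to show how quickly the bucket fills when frames arrive faster than they drain.

    # Rough sketch with illustrative numbers (not from any datasheet):
    # how long a port buffer can absorb traffic that arrives faster than it drains.

    INGRESS_GBPS = 10.0   # assumed arrival rate on the FCoE link
    EGRESS_GBPS = 8.0     # assumed drain rate toward the next hop (congested)
    BUFFER_KB = 512       # assumed buffer available to the lossless traffic class

    excess_bytes_per_sec = (INGRESS_GBPS - EGRESS_GBPS) * 1e9 / 8
    time_to_fill_sec = (BUFFER_KB * 1024) / excess_bytes_per_sec

    print(f"Bucket fills in ~{time_to_fill_sec * 1e6:.0f} microseconds")
    # With these numbers the port has roughly 2 milliseconds before it must
    # tell the sender to stop, which is exactly what PFC is for.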

At some point, if the bucket gets too full, the switch has to send a PAUSE frame back to the sending switch to get it to stop sending for a while. During the time it takes for that message to arrive and take effect, more frames are still coming down the pipe, so the receiver needs enough memory to hold the frames that arrive after the PAUSE is sent.

What this means is that while the mechanism for keeping traffic lossless is different from the one used in native Fibre Channel, the function is still the same. But it also means that the distance FCoE can cover is, ultimately, dependent upon how much memory the port ASIC has.
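A quick way to see why memory translates into distance is to count the bytes that are already “in the pipe” when a PAUSE goes out. Here is a minimal sketch with assumed values (light in fiber propagates at roughly 5 microseconds per kilometer) that estimates the headroom a receiving port needs for a given link length and speed:

    # Minimal sketch: headroom a receiving port needs so that nothing is dropped
    # between sending a PFC PAUSE and the sender actually going quiet.
    # All values are assumptions for illustration.

    LINK_KM = 80.0          # fiber length between the two switches
    LINE_RATE_GBPS = 10.0   # FCoE link speed
    US_PER_KM = 5.0         # ~5 microseconds of propagation delay per km of fiber

    # The PAUSE has to reach the far end, and frames already launched keep
    # arriving, so budget for one full round trip at line rate.
    round_trip_us = 2 * LINK_KM * US_PER_KM
    headroom_bytes = (LINE_RATE_GBPS * 1e9 / 8) * (round_trip_us / 1e6)

    print(f"Round trip: {round_trip_us:.0f} us")
    print(f"Headroom for in-flight frames: ~{headroom_bytes / 1024:.0f} KB")
    # At 10 Gbps over 80 km that is about 1 MB per lossless class, before
    # counting anything already queued, which is why the port ASIC's memory
    # ultimately caps the supported distance.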

What This Means

In general, if you have a network that extends across vast distances (keeping in mind the physical limitations listed in the previous article) and is not highly congested, it is conceivable that you could run FCoE on it. Remember, the question is how fast the ‘bucket’ on the receiving switch forwards frames back out. If there is no congestion or competition for bandwidth on the receiving switch, a continuous stream of FCoE frames can be processed and forwarded with no disruption to service.

When we plan SAN networks, however, we don’t plan for “normal” use cases; we plan for worst-case scenarios. We expect that things can (and will) go wrong, and for that reason we place a limit on how much distance can be supported if a switch were to get hammered all at once, forcing competition for available bandwidth.

As of this writing, the maximum distance at which FCoE is supported is 80 km, available on the F2E line card on the Nexus 7000 series switches. By way of comparison, the maximum distance for the Nexus 5000 series switches is 3 km, and FEX support is limited to 300 m. It’s always best to think about whichever devices you’re linking: the one with the lowest supported distance will be your distance limitation.
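Put another way, the path only reaches as far as its most limited device. Here is a trivial sketch of that lowest-common-denominator rule, using the figures quoted above:

    # Trivial sketch of the lowest-common-denominator rule: the usable FCoE
    # distance on a path is whatever its most limited device supports.
    # Figures are the ones quoted in this article.

    SUPPORTED_KM = {
        "Nexus 7000 (F2E line card)": 80.0,
        "Nexus 5000": 3.0,
        "FEX": 0.3,
    }

    def path_limit_km(devices):
        """Return the smallest supported distance among the devices on the path."""
        return min(SUPPORTED_KM[d] for d in devices)

    # Linking a Nexus 7000 to a Nexus 5000 limits the span to 3 km, not 80 km.
    print(path_limit_km(["Nexus 7000 (F2E line card)", "Nexus 5000"]))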

The All-Or-Nothing Conundrum

One of the things that’s good to keep in mind is that FCoE is Fibre Channel. That is, you should think about the best way to manage the Fibre Channel flows inside and outside the Data Center; use the best tool for the job:

combining protocols

The frames are still Fibre Channel, regardless of whether they are encapsulated or not, and regardless of whether they are on copper or optical cables, and regardless of what optics/transceivers are used. As long as each stage is qualified and supported, the hop-by-hop nature of Fibre Channel gives it a high degree of flexibility in terms of deployment capabilities.

It’s important to note two things here. First, FCoE was designed to be an intra-Data Center technology. As I’ve mentioned several times before, FCoE was never supposed to be a cure-all for every issue that the Data Center has or could ever have.

Second, having said that, it is precisely because people are looking to FCoE to address additional needs in the Data Center that Cisco continues to push the boundaries and capabilities of the protocol in particular, and of converged networks in general. As Cisco has pushed out the supported FCoE distances with more powerful ASICs (starting with 300m in 2008, then 3000m, then 20km, and now 80km with the F2 line card!), we will continue to see additional capabilities, features and, yes, distances.

The point remains, however, that as we have seen there are far more variables to consider than just how much memory the ASICs can handle! None of these new capabilities can change the laws of physics or the underlying rules and limitations of the physical components. So it’s important to keep those in mind.

Next Up: Going Really, Really Long Distances

Up to this point we have talked about “native” Fibre Channel and “encapsulated” Fibre Channel. In the next article, we’ll be talking about taking Fibre Channel frames across a much greater expanse, using Fibre Channel over IP (FCIP).

In the meantime, here are some quick links to previous articles:

If you have questions or comments, please feel free to leave them below.



Authors

J Metz

Sr. Product Manager

Data Center Group