

Cisco Blog > Data Center and Cloud

Storage Distance by Protocol, Part III – Fibre Channel

September 10, 2013
at 2:57 pm PST

In the previous article we looked at some of the physical characteristics of building a SAN extension. In other words, we looked at the different ways there are to “build the pipe.” We didn’t, however, get the chance to talk about the speed or capacity of the pipes, nor did we talk about the various methods to fill the pipe with SAN data.

In this article, we’re going to look at the first of four specific methods of how we can extend SANs across distances using those pipes: “Native” Fibre Channel (FC). Understanding how FC works becomes critical for understanding how distance solutions are resolved using the technology, and that in turn leads us to understand how something like Fibre Channel over Ethernet (FCoE) differs.

Afterwards, we’ll take a brief look at how the pieces fit together and are part of the process for building a strong solution.

Fibre Channel

There are a couple of things that we need to understand about Fibre Channel when examined under the lens of extending SANs.

First, FC uses a source-based mechanism: the sender keeps track of how much buffer memory the receiver has available, and that governs how much data can be in flight from one place to another.

A quick review: Fibre Channel uses a system called “Buffer-to-Buffer Credits” that ensures each frame of FC goodness is only sent when the receiver has a buffer waiting to hold it.

[Image: BB_Credits introduction]

Once the two devices negotiate how many BB_Credits they have, they can begin communicating. The key thing to remember here is that both devices know how many credits there are.

Because of this, a sending device can continue sending frames (one credit equals one FC frame) to the receiving device because it knows that the receiving device can handle that many frames before it runs out of memory.

[Image: baseball analogy for buffer credits]

Effectively, when a sending switch transmits frames, it expects acknowledgments back so that it knows the receiving switch is able to accept more frames. If it doesn’t get an acknowledgment back, it will only send as many frames as it has buffer credits, and then wait until it gets the acknowledgment it needs to continue.
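The mechanics above can be sketched in a few lines of Python. This is a toy model with made-up class and method names (real switches implement this in ASIC hardware), but it captures the send-until-credits-run-out behavior:

```python
from collections import deque

class Sender:
    """Toy model of Fibre Channel buffer-to-buffer flow control.
    Hypothetical names for illustration only."""

    def __init__(self, bb_credits):
        self.credits = bb_credits   # negotiated when the link comes up
        self.pending = deque()      # frames waiting for a free credit

    def send(self, frame):
        if self.credits > 0:
            self.credits -= 1       # one credit consumed per frame sent
            return True             # frame goes on the wire
        self.pending.append(frame)  # out of credits: throttled
        return False

    def on_r_rdy(self):
        self.credits += 1           # receiver freed a buffer
        if self.pending:            # resume any throttled traffic
            self.send(self.pending.popleft())
```

With 2 credits, a sender can put two frames on the wire immediately; the third waits until an acknowledgment comes back.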

Buffers, Credits, and Distance: Oh My!

Buffer-to-buffer credits (BB_Credits) are negotiated between each pair of connected devices in a Fibre Channel fabric. One buffer is used for each Fibre Channel frame, regardless of the frame size. In other words, small FC frames use the same buffer space as large FC frames. In data centers, when the distance between switches is not very large, there is no problem filling the pipe:

[Image: short distances fill the pipe]

Because of this negotiation, you can control the amount of buffering for each hop, rather than being limited to a ubiquitous configuration setting across the entire network. As you can imagine, having this kind of flexibility is great for extending that middle portion across longer distances.

Donating Buffers to Get Extra Oomph

Now, most switches have many ports attached to a single ASIC, and each port has a dedicated amount of buffer credits. If we have a longer-distance link between switches, a sending switch may send out all of its buffered frames and have to wait for a response from the receiving switch. This message that the switch is ready (called, appropriately enough, a “Receiver Ready” primitive, or R_RDY) paces the traffic flow. Each time a sending switch gets an R_RDY, it frees up a buffer credit to send more data.

[Image: credit countdown]

Until the sending switch gets that Ready primitive, it can only transmit up to its number of BB_Credits before the traffic is throttled. Over longer distances, this means that a lot of that pipe is not getting filled, simply because we are waiting for the acknowledgments to return:

[Image: long pipes left unfilled while waiting for acknowledgments]
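To put a number on the throttling, here’s a toy calculation. It assumes the common rule of thumb of roughly one full-size (~2 KB) frame in flight per kilometer at 2 Gb/s, scaling with link speed and frame size; the function name is made up for illustration:

```python
def wire_utilization(bb_credits, distance_km, speed_gbps, frame_kb=2.0):
    """Approximate fraction of a long link kept busy with a given
    credit count (toy model: ~1 full-size 2 KB frame in flight per
    km of fiber at 2 Gb/s, scaling with speed and frame size)."""
    frames_needed = distance_km * (speed_gbps / 2.0) * (2.0 / frame_kb)
    return min(1.0, bb_credits / frames_needed)

print(wire_utilization(32, 100, 4))  # a default 32-credit port keeps a
                                     # 100 km, 4 Gb/s link only ~16% busy
```

In other words, a port left at its default credit count leaves the overwhelming majority of a long pipe empty.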

In order to be more efficient, we can ‘borrow’ credits from other ports: shut them down and donate their buffer credits to the port serving the extended-distance link. Exactly how many ports can be shut down and how many buffer credits can be donated depends on the switch or line card module. On an MDS 9148, for example, a port has a default of 32 buffer credits but can use up to 125, which means that we can take 3 ports offline and donate their credits to one distance port.

(NB: In case you’re wondering why it’s only 125: each port in an active group must have at least one buffer credit, and since there are 4 ports in a group, it works out to 125+1+1+1=128 credits for the entire port group.)
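The arithmetic in that note is easy to check in code. This is a trivial sketch; the constants simply mirror the MDS 9148 figures quoted above:

```python
# Credit donation within a 4-port group, using the MDS 9148 numbers
# from the article: 32 default credits per port, 4 ports per group.
GROUP_PORTS = 4
GROUP_CREDITS = 4 * 32       # 128 credits shared by the whole group
MIN_PER_ACTIVE_PORT = 1      # every port in the group keeps at least 1

def max_credits_for_distance_port(group_credits=GROUP_CREDITS,
                                  ports=GROUP_PORTS):
    # The distance port gets everything except the minimum credit
    # each of the other ports must retain.
    return group_credits - (ports - 1) * MIN_PER_ACTIVE_PORT

print(max_credits_for_distance_port())  # → 125
```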

Second, when extending native Fibre Channel links, there is an inverse relationship between the speed of the connection and the distance you can cover at wire rate with a given number of credits. It’s a bit easier to see in pictures:

[Image: frame speed vs. distance]

Obviously, if you want to go fast and you want to go far, you need a lot of buffers. Also note that this assumes a frame size of about 2 KB. If you have smaller frame sizes you will need more credits (remember: 1 credit = 1 frame, no matter what its size).

If you’re looking to do some quick-and-dirty math, here are some general guidelines. As always, your mileage may vary (get it? Mileage? Ah, never mind…):

[Image: credits required for every kilometer, by link speed]

Depending on the model, the MDS 9000 series FC switches (e.g., the MDS 9148) can have up to 255 buffer credits per port without any additional licensing. This means that wire-rate 2 Gb/s FC is attainable for about 255 km! At 1 Gb/s, you can have wire-rate FC for up to 510 km!
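Those figures follow from a simple rule of thumb: roughly one credit per kilometer at 2 Gb/s for full-size (~2 KB) frames, scaling linearly with link speed and inversely with frame size. Here’s a rough sketch of that guideline (hypothetical function name, not a Cisco tool):

```python
import math

def credits_needed(distance_km, speed_gbps, frame_kb=2.0):
    """Approximate BB_Credits required to keep a link at wire rate.
    Rule of thumb: ~1 credit per km at 2 Gb/s with full-size ~2 KB
    frames; smaller frames need proportionally more credits."""
    return math.ceil(distance_km * (speed_gbps / 2.0) * (2.0 / frame_kb))

print(credits_needed(255, 2))   # → 255 (matches 255 km at 2 Gb/s)
print(credits_needed(510, 1))   # → 255 (matches 510 km at 1 Gb/s)
```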

Now here’s the really cool part. With the Advanced 8G modules, for instance, you can have up to 4095 buffer credits per port. Here, I’ll do the math so you don’t have to:

[Image: Advanced 8G module distances]

The new MDS 9710 can do up to 500 km at 16G FC. Obviously, different line cards and different switches have different buffer credits to allocate for the purpose of distance. But it’s important not to get blinded by the distances listed here. Don’t forget the physical constraints of the network that we discussed in the previous article.
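Inverting the same rule of thumb gives a ballpark wire-rate distance for a big credit pool like the 4095-credit ports above. This is a sketch under the same assumptions (~1 credit per km at 2 Gb/s, full-size frames); real limits also depend on optics, line card, and the physical plant discussed previously:

```python
def max_distance_km(bb_credits, speed_gbps, frame_kb=2.0):
    # Inverse of the ~1 credit/km @ 2 Gb/s rule of thumb.
    return bb_credits * (2.0 / speed_gbps) * (frame_kb / 2.0)

for speed in (2, 4, 8, 16):
    print(f"{speed:>2} Gb/s: ~{max_distance_km(4095, speed):.0f} km")
# At 16 Gb/s this works out to ~512 km, in the same ballpark as the
# ~500 km @ 16G FC figure quoted for the MDS 9710.
```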

While it falls outside the scope of these articles, there are also topology considerations. The point here is that SAN extension distance for native Fibre Channel is directly related to the amount of buffering available on the sending switch. When we start looking at FCoE in the next article, we’ll see how this dynamic changes.

Summary

Of course, this is just the basic, atomic level for understanding how FC works. Whether we’re talking about the Nexus 5000, Nexus 5500, or MDS product lines, all Fibre Channel works the same way. Different switches have different capabilities, but they all fall under the same general principles.

In the next article, we will be talking about Fibre Channel over Ethernet, and how it is both similar to, and different from, Native Fibre Channel and how that affects our planning for distance.

In the meantime, here are some quick links to previous articles:

If you have questions or comments, please feel free to leave them below.



1 Comment.


  1. Hi,

    You still have to deal with latency when extending FC over these distances.

    A dark fibre link adds a latency of around 5 microseconds per km. A typical SCSI transaction must traverse the link 8 times, or four round trips.

    This means a dark fibre link of 30 km adds a latency of 5 µs/km * 30 km * 8 (trips) = 1200 microseconds (µs) = 1.2 milliseconds (ms), which is negligible for most applications, but it could affect time-sensitive transactional workloads.

    From a storage point of view, only writes will traverse the dark fibre. If I am correct, from an IOPS perspective, only 833 IOPS remain when calculating with a latency of 1.2 ms over 30 km. Is this a correct statement?

    Or does the Cisco MDS series have some kind of smart technology on board to accelerate this?
