">
Cisco Blogs


Cisco Blog > Data Center and Cloud

To Tell the Truth: Multihop FCoE

March 11, 2011
at 9:40 am PST

In the old television show, “To Tell The Truth,” celebrity panelists would attempt to decipher the true identity of one of three people, two of whom were impostors.

Sure, they looked alike, sounded alike, and had very similar characteristics, but there was only one genuine article.

As we move deeper into the capabilities of FCoE, it’s becoming obvious that people are getting more sophisticated about what happens beyond the first switch. Not surprisingly, we start getting a lot of possibilities that look and sound alike, but are not really bona fide multihop FCoE.

Let’s start off by making something as clear as I can: there is nothing inherently wrong with any of these multitiering solutions. The question is really only whether or not the solution is appropriate for your data center.

It’s a little long as blogs go, so I apologize in advance, and if I get a little too geeky, at least pretty pictures are included. Grab a cup of coffee, and I promise that by the end it will make much more sense. (For you FC people, this may seem a bit remedial at times, so please bear with me as I try to reach out to our non-storage brothers and sisters.)

What is Multitiering?

Put simply, for our purposes multitiering means that the data center is broken down into layers, and each layer is a tier. For example, we have the layer (often referred to as “access” or “edge”) where the servers connect to the first switch, which makes one tier. In turn, those switches connect farther back into the data center, to an “aggregation” or “core” layer, or “tier.”

Each layer has specific design considerations and a different role to play in the data center. This is true whether we’re talking about Ethernet or storage networks.

So, when we’re talking about moving between these layers, or tiers, we are multitiering.

What gets confusing is that from both an Ethernet/LAN perspective and a Fibre Channel/SAN perspective, we often refer to these switches as “hops.” The downside is that when you get into the nuts and bolts, they don’t actually mean exactly the same thing.

What is Multihop?

While it would be valuable to go into all the possible meanings of what a “hop” is in data center terms, space and respect for your time prevents me from doing this (as it is, I may be pushing it a bit).

However, because FCoE is Fibre Channel, it behaves according to Fibre Channel rules as to what a “hop” is. For that reason, there are two ways to determine whether you have an FCoE “hop,” and what would constitute a “multihop” scenario.

First, in Fibre Channel a “hop” is what happens when you move switch-to-switch and Domain IDs change. If you’re not familiar with Fibre Channel and are used to the Ethernet way of understanding hops, each FC switch (generally speaking, by default) is a single domain, which identifies the switch in an FC fabric.

For the storage administrator, this is important because the Domain ID is critical in being able to provide proper security (e.g., zoning) and run Fibre Channel-related services to each device connected to it.

In FCoE, an actual Fibre Channel switch (called a Fibre Channel Forwarder, or FCF) exists inside the Ethernet switch. This means that each FCF has its own Domain ID, enabling the storage admin to have control of the SAN wherever there is an FCF.

When your FC switches talk to each other, they are connected by an Inter-Switch Link (ISL), and each time you connect Domain IDs in FC, you have what’s called a “hop.” Therefore, in FCoE (which is FC), every time you connect Domain IDs, you have a “hop.”
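
To make that concrete, here is a minimal sketch (a toy model of my own, not any vendor’s code) that treats a “hop” exactly as described above: one hop for every ISL crossed between distinct Domain IDs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Switch:
    name: str
    domain_id: int          # each FC switch / FCF owns its own Domain ID

def count_hops(path: list[Switch]) -> int:
    """Count FC hops along a path: one hop per Domain ID transition (ISL)."""
    return sum(1 for a, b in zip(path, path[1:]) if a.domain_id != b.domain_id)

# Hypothetical three-tier path: edge FCF -> aggregation FCF -> core FCF
edge = Switch("edge-fcf", domain_id=10)
agg  = Switch("agg-fcf",  domain_id=20)
core = Switch("core-fcf", domain_id=30)

print(count_hops([edge, agg, core]))   # 2: two ISLs crossed, so two hops
```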

The second way you can determine whether or not a switch falls into a “multihop” scenario is how much visibility it has into the FC portion of the payload for forwarding decisions. In other words, in order to preserve storage traffic engineering, an FCoE “multihop” switch needs to maintain the appropriate forwarding mechanisms used by Fibre Channel (such as FSPF).

Generally speaking then, a “Multihop FCoE” switch is one that continues to subscribe to the appropriate FC traffic engineering, domain creation, and forwarding.
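
The traffic-engineering half of that, FSPF-style shortest-path routing between Domain IDs, has roughly this flavor. This is a bare-bones sketch of my own with made-up link costs, not the FSPF specification:

```python
import heapq

def shortest_path_cost(links: dict[tuple[int, int], int], src: int, dst: int):
    """Dijkstra over Domain-ID-to-Domain-ID links; returns the best total cost."""
    graph: dict[int, list[tuple[int, int]]] = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    best = {src: 0}
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            return cost
        if cost > best.get(node, float("inf")):
            continue
        for nbr, link_cost in graph.get(node, []):
            new_cost = cost + link_cost
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(heap, (new_cost, nbr))
    return None

# Hypothetical fabric: Domain IDs 10, 20, 30 joined by ISLs, with invented
# costs (lower cost standing in for a faster link).
isls = {(10, 20): 250, (20, 30): 250, (10, 30): 1000}
print(shortest_path_cost(isls, 10, 30))   # 500: the two-ISL path via domain 20 wins
```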

A Port in Any Storm

In Fibre Channel what makes something a hop is also determined by not just what is connected, but how.

At its most basic, a host/server uses a “Node Port,” or “N_Port,” to connect to a Fibre Channel switch’s Fabric Port, or “F_Port.”

In the same way, an FCoE host/server uses what’s called a “Virtual N_Port” (“VN_Port”) connected to a “Virtual F_Port” (“VF_Port”) on a switch. (It’s called “Virtual” because the physical port itself can be used for multiple purposes.)

When two switches communicate, they use what’s called an “Expansion Port,” or “E_Port.” In FCoE -- you guessed it -- those ports are called “VE_Ports.”

It’s the presence of these “VE_Ports” that makes a “hop,” because that’s what forms an ISL between two Fibre Channel Domain IDs.

The confusion comes when you put something in between the FCFs, which you can do by placing Ethernet switches between them.

For Fibre Channel, an ISL is one of the ways it makes a “Hop.”
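
Here is a tiny illustrative model (the port names come from the text above; the framing is my own) that classifies an FCoE link by its endpoint port types: only a VE_Port-to-VE_Port link forms an ISL, and therefore a hop.

```python
VALID_PAIRS = {
    ("VN_Port", "VF_Port"),   # host/server CNA attached to an FCF
    ("VF_Port", "VN_Port"),
    ("VE_Port", "VE_Port"),   # FCF-to-FCF: this is the ISL
}

def is_valid_fcoe_link(port_a: str, port_b: str) -> bool:
    return (port_a, port_b) in VALID_PAIRS

def is_isl(port_a: str, port_b: str) -> bool:
    """Only a VE_Port-to-VE_Port link is an ISL, and therefore counts as a hop."""
    return (port_a, port_b) == ("VE_Port", "VE_Port")

print(is_valid_fcoe_link("VN_Port", "VF_Port"))   # True: legal host attachment
print(is_isl("VN_Port", "VF_Port"))               # False: not a hop
print(is_isl("VE_Port", "VE_Port"))               # True: ISL between Domain IDs, one hop
```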

So, with respect to “To Tell The Truth,” let’s meet our contestants, each of whom claim: “I am a Multihop FCoE switch.”

Contestant #1: DCB Lossless Switch

From an Ethernet perspective, it “ain’t nuttin’ but a thang” to add on additional switches, which can help provide more access to hosts and servers. To build on our simplified models, it looks something like this:

Generally speaking, this type of design tunnels FCoE traffic through “SAN unaware” switches that have no knowledge or understanding of the packet type. In other words, no Fibre Channel protocol-related activity is applied to the traffic between the host and the FCF fabric.
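
A deliberately simplified sketch of that behavior (class and variable names are mine, not any product’s code): the switch forwards on the outer Ethernet header only and never looks at the encapsulated FC frame.

```python
from dataclasses import dataclass

@dataclass
class EthernetFrame:
    dst_mac: str
    src_mac: str
    ethertype: int     # 0x8906 for FCoE, but this switch doesn't care
    payload: bytes     # the encapsulated FC frame, opaque at this tier

class DcbLosslessSwitch:
    """Forwards purely on Ethernet state; applies no FC services at all."""
    def __init__(self) -> None:
        self.mac_table: dict[str, str] = {}   # MAC address -> egress port

    def learn(self, mac: str, port: str) -> None:
        self.mac_table[mac] = port

    def forward(self, frame: EthernetFrame) -> str:
        # The decision uses only the outer Ethernet header; zoning, Domain IDs,
        # and FSPF inside the FC payload are invisible here -- which is why the
        # SAN admin has no visibility into this tier.
        return self.mac_table.get(frame.dst_mac, "flood")

switch = DcbLosslessSwitch()
switch.learn("0e:fc:00:01:00:01", "eth1/1")       # learned like any other MAC
frame = EthernetFrame("0e:fc:00:01:00:01", "0e:fc:00:02:00:01", 0x8906, b"...")
print(switch.forward(frame))                      # eth1/1
```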

Of course, this is perfectly “legal” according to the FC-BB-5 standard document. Hosts get access to FCoE storage, and if the FCF resides in a switch that can talk directly to FC storage, hosts can get access to those as well. One of the advantages of this is that it expands the fabric size without expanding Domain IDs.

From a storage/SAN perspective, though, there are some significant drawbacks, particularly around security. Because the SAN admin has no visibility into what goes on in the DCB lossless switch, the design can become susceptible to “man-in-the-middle” attacks, where a rogue server can pretend to be an FC (FCF) switch and insert itself into the FC fabric where it’s not wanted.

This method also prevents the use of FC forwarding and deterministic multipathing technologies, forcing reliance on Ethernet Layer 2 solutions. This can further complicate troubleshooting and load balancing in the storage network, especially as the number of DCB switches between the “VN_” and “VF_Ports” increases.

If you happen to be a storage admin, this approach completely prevents the standard practice of “SAN A/SAN B” separation for redundancy. For some vendors, the solution to this is to create an entirely separate Ethernet fabric, but LAN admins will tell you that isolating Ethernet traffic from half of the data center may not be a workable solution.

Really? Creating two Ethernet fabrics that can’t talk to each other in order to preserve storage SAN separations? Perhaps I should simply leave the possibility open for the sake of having options, but I have a hard time imagining this being put into practice.

Not surprisingly, as Cisco works to support SAN operations end-to-end in the Data Center, this type of solution is not something we recommend from a storage-centric perspective.

Contestant #2: FIP Snooping Bridges

Unlike the lossless DCB bridge, which has no knowledge whatsoever of the FCoE traffic flowing on its links, the FIP snooping bridge offers the ability to assist the FC fabric by helping with the login process (as servers come online, they need to “log in” to the FC fabric).

While the switch doesn’t apply FC protocol services (such as multipathing) to the traffic, it does inspect the packets and apply routing policies to those frames. The FIP snooping bridge uses dynamic ACLs to enforce the FC rules within the DCB network. While a deep dive is beyond the scope of this blog, let me point you to Joe Onisick’s fantastic exploration of the subject.

Generally, this improves upon the DCB lossless switch design because it prevents nodes from seeing or communicating with other nodes without first going through an FCF. The end result is that it enhances FCoE security, preventing FCoE MAC spoofing, and creates natural FC point-to-point links within the Ethernet LAN. One of the other advantages is that it expands the fabric size without expanding the Domain IDs.

Cisco’s Nexus 4000 Series switches work under this principle.
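
For the curious, here is a rough sketch of the FIP snooping idea (my own abstraction, not the Nexus 4000 implementation): the bridge watches fabric logins and installs dynamic per-pair ACLs so an ENode can only exchange FCoE frames with the FCF it logged in through.

```python
class FipSnoopingBridge:
    """Lossless Ethernet bridge that snoops FIP to build dynamic ACLs."""
    def __init__(self, fcf_macs: set[str]) -> None:
        self.fcf_macs = fcf_macs
        self.allowed_pairs: set[tuple[str, str]] = set()   # (enode_mac, fcf_mac)

    def snoop_login_accept(self, enode_mac: str, fcf_mac: str) -> None:
        # A successful fabric login was observed: open a pinhole for this pair only.
        if fcf_mac in self.fcf_macs:
            self.allowed_pairs.add((enode_mac, fcf_mac))

    def permit_fcoe(self, src_mac: str, dst_mac: str) -> bool:
        # Direct ENode-to-ENode FCoE traffic is denied: everything must pass
        # through an FCF, which is what blocks simple FCoE MAC spoofing.
        return (src_mac, dst_mac) in self.allowed_pairs or \
               (dst_mac, src_mac) in self.allowed_pairs

bridge = FipSnoopingBridge(fcf_macs={"0e:fc:00:ff:f0:01"})
bridge.snoop_login_accept("0e:fc:00:01:0a:01", "0e:fc:00:ff:f0:01")
print(bridge.permit_fcoe("0e:fc:00:01:0a:01", "0e:fc:00:ff:f0:01"))  # True
print(bridge.permit_fcoe("0e:fc:00:01:0a:01", "0e:fc:00:01:0b:02"))  # False
```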

Now, this is where people often get confused about whether this is a “hop” or not -- and let’s call a spade a spade: even Cisco has introduced this as a “multihop” environment, when in fact it’s a multitiering environment.

While it’s understandable on one level (after all, how many people have you heard try to introduce another concept -- multitiering -- in order to simplify a conversation!?), it really is misleading, as it doesn’t address the definition of Fibre Channel ISLs, VE_Port-to-VE_Port links, or visibility into the FC payload. So, from a design perspective, it’s not actually multihop.

Why? Because the SAN admin doesn’t have total visibility into the fabric. Typically, Fibre Channel tools don’t see FIP snooping bridges, and FIP snooping bridges don’t track discovery attempts or login failures.

When a CNA failure occurs, admins must rely on CNA tools for troubleshooting, and there are potential load-balancing and SAN A/SAN B separation issues when these deployments start to scale.

All of this means that while FIP-snooping bridges can be a good idea for some designs, there may be other considerations at play. It also means that when you add FIP snooping bridges to your data center they are not FCoE “hops.”

Contestant #3: Multihop FCoE -- NPV Switches

Now, even for those of you with some FCoE experience, FCoE NPV switches might be new. It’s an enhanced FCoE pass-through switch that acts like a server, performing multiple logins to the FCF Fabric:

(Actually, technically speaking the port facing the FCF is called a VNP_Port, but that’s not the point nor really important right now).

In this case, the switch behaves the same way a server does, performing multiple logins into the fabric. The advantage here is that it provides load balancing and traffic engineering while simultaneously maintaining FCoE security and the Fibre Channel operational SAN model.

Moreover, it addresses FC Domain ID “sprawl,” which is something that larger deployments always have to contend with.

NPV is a technology that is used with great success in the Fibre Channel SAN world, and is quite popular when data centers grow. It gives SAN admins very familiar management and troubleshooting, and provides the same benefits as FC switches doing NPIV logins.

Not only does it not use a Domain ID, but it doesn’t lose visibility into the Fibre Channel domains and keeps the zoning infrastructure intact.
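
Here is a toy model of that Domain ID point (assumptions and class names are mine, not NX-OS behavior): the NPV switch proxies its hosts’ logins upstream through its VNP_Port and never consumes a Domain ID of its own.

```python
class Fabric:
    """Stand-in for the FC fabric services reached through the upstream FCF."""
    def __init__(self) -> None:
        self.domain_ids: set[int] = set()
        self.logins: list[str] = []

    def assign_domain_id(self) -> int:
        new_id = max(self.domain_ids, default=0) + 1
        self.domain_ids.add(new_id)        # only full FCFs consume Domain IDs
        return new_id

    def fabric_login(self, node: str) -> None:
        self.logins.append(node)

class FcfSwitch:
    def __init__(self, fabric: Fabric) -> None:
        self.domain_id = fabric.assign_domain_id()

class FcoeNpvSwitch:
    """Looks like a host to the upstream FCF: no Domain ID, proxied logins."""
    def __init__(self, fabric: Fabric) -> None:
        self.fabric = fabric               # reached via its VNP_Port

    def attach_host(self, host: str) -> None:
        self.fabric.fabric_login(host)     # host login proxied up to the FCF

fabric = Fabric()
core_fcf = FcfSwitch(fabric)               # consumes Domain ID 1
npv = FcoeNpvSwitch(fabric)                # consumes none
npv.attach_host("host-01")
npv.attach_host("host-02")
print(fabric.domain_ids)   # {1}: the NPV tier added hosts, not Domain IDs
print(fabric.logins)       # ['host-01', 'host-02']
```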

Overall, when compared to FIP-snooping, it offers much greater traffic engineering options for the storage administrator.

Contestant #4: Multihop FCoE -- VE_Port ISL Switches

So, now we have our final contestant. Using the exact same model as Fibre Channel networks today, switches that communicate with each other as peers, using FCF-to-FCF (switch-to-switch) communication, meet all our requirements for a Fibre Channel “hop”:

In this case, the design is consistent with Fibre Channel “hop” rules and -- surprise! surprise! -- is defined in the FC-BB-5 standard for switch-to-switch interoperability (as I’ve mentioned before).

Since we’re looking at this from a Fibre Channel perspective, it’s important to make some observations here.

First, this gives SAN admins the most control and most visibility into all aspects of SAN traffic.

Second, as you can see there is no extra “special sauce” needed in order to run multihop FCoE. You don’t need TRILL, or any other Ethernet Layer 2 technology.

Will the Real Multihop FCoE Please Stand Up?

There it is, plain as day, a complete storage solution, providing SAN A/B separation, fully standardized (and published!), and consistent with the existing models of storage networks that exist today. This makes it much easier to “bolt-on” FCoE technology into existing environments and maintain current best practices.

It’s important to note that I am not saying -- nor have I ever said -- that any one solution is better than any other. Because each of the various designs I’ve mentioned here is built using the same building blocks, you may find yourself in an environment where your traffic engineering needs mean a little of this, a little of that, a little of something else.

What’s key is that you understand what each of these terms means. It should also help you, when someone says they have “multihop FCoE,” to work out whether they are talking from a storage perspective (see the sketch after this list):

  • Does it have “VE_Ports?” No? Then it doesn’t maintain consistency with well-understood FC Inter-Switch Links (ISLs).
  • Does it have visibility into the FC payload to make routing/forwarding decisions? No? Then it misses the other criteria for making an FCoE hop.
  • Does the switch make the traffic invisible to FC tools for troubleshooting purposes? Yes? Then it breaks the FC storage model.
  • Does it provide SAN A/B separation while maintaining LAN coherence? No? Then it isn’t a truly converged network.
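
Here is that checklist transcribed literally into code (a framing of my own, with illustrative answers), just to make the decision procedure explicit.

```python
from dataclasses import dataclass

@dataclass
class FcoeSwitchClaims:
    has_ve_ports: bool
    sees_fc_payload_for_forwarding: bool
    visible_to_fc_tools: bool
    san_ab_separation_with_lan_coherence: bool

def is_multihop_fcoe(c: FcoeSwitchClaims) -> bool:
    return (c.has_ve_ports
            and c.sees_fc_payload_for_forwarding
            and c.visible_to_fc_tools
            and c.san_ab_separation_with_lan_coherence)

# Illustrative answers for two of the contestants above:
dcb_lossless = FcoeSwitchClaims(False, False, False, False)   # Contestant #1
ve_port_fcf  = FcoeSwitchClaims(True, True, True, True)       # Contestant #4

print(is_multihop_fcoe(dcb_lossless))   # False: multitiering, not multihop
print(is_multihop_fcoe(ve_port_fcf))    # True: the genuine article
```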

Again, depending on the purpose of the implementation, these may be desirable outcomes. But how can you possibly know unless you first understand what the differences between them are?

If you made it this far, congratulations! With luck, this (extremely long) blog cleared up some of the confusion regarding “Multihop FCoE” and gave you a better understanding of how to examine the products that make the claim.


11 Comments.


  1. Hi J. This is a good summary of the various approaches to supporting FCoE traffic in an Ethernet network. Thanks for taking the time to identify the various options vendors are talking about.

    But, I wanted to call attention to a few comments you made where I think some clarification may be in order.

    You wrote: “For some vendors, the solution to this is to create an entirely separate Ethernet fabric …” which links to a blog post by Dr. Chip Copper on the community.brocade.com site. I think the sentence isn’t accurate. In that post, Chip said in part:

    “The biggest point here is that for each individual situation, there needs to be a discussion. Unfortunately, today there is a lot of buzz in the network industry implying that there is only one solution that fits all situations.”

    I think you and Chip are in agreement about having multiple options and looking at the implications of each when deciding how to support FCoE traffic in an Ethernet network. As Chip pointed out, you have to consider the topology options as well as the equipment and protocol used in the data path. You have to consider the management model, available tools and failure/disaster recovery requirements. And, as I know you are well aware, TCP/IP traffic over Ethernet has very different expectations about the transport than SCSI traffic over Ethernet, so when you combine them those differences have to be carefully considered in the network design.

    For the majority of existing Fibre Channel SAN environments, customers follow storage vendor best practices when they deploy two independent SAN fabrics, aka “air gap fabrics”. Those best practices don’t change just because the network transport protocol changed, because those best practices were created for other reasons. Losing data flow to a petabyte database application, resulting in rebuilding it for the next several hours, has a different degree of impact on the business than a lost syllable in a VoIP call, or having to refresh a browser when connected to a web site. That’s what Chip was talking about. You have to consider the business requirements and then design the network to meet them. Chip agrees with your point that having options and choices is a good thing.

    Next, you said:
    “Really? Creating two Ethernet fabrics that can’t talk to each other in order to preserve storage SAN separations …”.

    with the embedded link pointing to a blog I posted at http://www.ethernetfabric.com. Again, my post points out that existing physical air gap SANs are designed for a reason and in fact are best practice for many SAN customers and recommended by storage OEMs. An FCoE design that eliminates that air gap won’t be acceptable to someone who believes in it. As I point out, there is a design choice when using FCoE, which is in agreement with your observations that choices in how customers support FCoE traffic on an Ethernet network are valuable. I just pointed out that folks can use FCoE in an air-gap architecture and I think it’s a good idea to talk about that as one of the options. So I was agreeing with you J, it’s not a good idea to pretend there is only one option.

    One other point you make in Option #4: “You don’t need TRILL, or any other Ethernet Layer 2 technology.” I may be misinterpreting this statement, but one way to read it is that there are solutions that support FCoE without the “E” being used on any of the inter-switch links. That seems like almost an oxymoron, but, that’s not the point I wanted to make. I want to point out that among the options for multi-hop FCoE traffic designs, TRILL is a viable choice and has much to recommend it. That’s one of the reasons Brocade uses TRILL in our VCS Ethernet Fabric. It isn’t the only way to meet the stringent network requirements for block storage traffic, but it’s a very good choice when you want to use a lossless Ethernet network for the inter-switch links carrying FCoE traffic (aka, multi-hop FCoE).

    J, thanks for putting the time into this blog and allowing me to provide some clarifications.


  2. J Metz

    Hi Brook,

    I appreciate you providing some clarification on both Chip’s comments as well as your own. Having read through both pieces, it’s entirely possible (even probable :) that I’ve missed some of the goals of Brocade’s strategy around Ethernet/Converged Fabrics, and what they mean.

    Part of the confusion (for me) comes from some of the criticism I read in your posts (yours and Chip’s) that entertains what “some people/vendors” are saying, but without attribution to any specific claim or link. Sadly this means that I have to try to fill in the blanks.

    The good news here is that we both agree that customer choice is a good thing; locking down deployments based upon technology *types* is one of the things we are trying to avoid through this new technology. It appears we also agree on your examples of “acceptable loss” within a data stream.

    What’s not clear to me in what you write, however, is how the implementation of an “air gap fabric” on the Ethernet side, as described on your community forums and blogs, actually works in practice.

    See, in order to run multiple DCB lossless Ethernet switches in a row, the element of SAN A/B separation prevents the SAN admin from having visibility into the traffic flow (as described here). As you point out, it’s of critical importance to storage administrators, but it’s also one of the main PITAs when designing SANs (let’s face it!). Meanwhile, the same source hosts do not have that separation, or “air gap,” on the Ethernet side.

    It seems to me that you are taking the great any-to-any communication ability of a LAN and eliminating it, forcing a traffic duplication approach on the LAN side that, quite frankly, shouldn’t be necessary. That seems like 1 Data Center for the price of 2, with none of the actual advantages of using a converged network in the first place.

    I mean, sure, it’s great if you’re running FCoE (1 wire) instead of having FC *and* Ethernet (2 wires), but then you go back to creating an “air gap” (2 wires) but you don’t even have the connectivity between LAN segments that you had before you converged.

    In other words, it seems like you’ve just forklifted everything to a new architecture and *lost* functionality. Perhaps I’m just missing something obvious, but that’s what it sounds like at the moment.

    With respect to TRILL, there is a difference between “requiring” the technology and “using” the technology. Some of the misinformation regarding FCoE (I’m sure you’ve seen this too) seems to conflate the two. FCoE does not have a requirement for TRILL, nor does Multihop FCoE. The only time TRILL can come into play is when you have something in-between FCFs. In fact, in an Option #4 environment, TRILL is entirely irrelevant.

    TRILL’s use comes from an Ethernet implementation, which falls far more in line with Options 1 and 2 (1 primarily). For environments that rely very heavily on Ethernet mechanisms (and less on FC), Ethernet forwarding technologies will become relied upon more and more. However, there is a trade-off there as well, notably in the SAN visibility (mentioned above).

    Once we start relying on Ethernet forwarding rather than storage forwarding, we have a hard time justifying using FC/Storage-based “hop” approaches, so I refer to solutions that would use TRILL as “multitiering FCoE” over “multihop.” It’s not just splitting hairs; the design implications should hopefully be clearer based upon the reasons listed here.

    Again, the adherence to taxonomy is to help cut through some of the clutter and to let people understand that when I (personally) speak of “multihop FCoE” they understand what I am and am not referring to. It’s certainly not a value judgment placed on any solution – customers should be able to have what works for their given situation, which I believe we are still in agreement with. :)

    Thanks for taking the time to read and comment, Brook. I really appreciate the input and insight.


  3. Hi J,

    Just a couple of additional observations.

    I think storage management tools have matured so that visibility to traffic in SAN A and SAN B networks is quite good, as evidenced by EMC, IBM, HP, HDS tool sets. So I think that’s not an enormous problem compared to the risk of pilot error and Mr. Murphy when making design decisions. I suggest that the same tools will apply equally well to a SAN A and SAN B fabric architecture when using FCoE as they do when using Fibre Channel.

    Much of the intelligence needed about data flows, congestion, frame corruption and hardware health for a Fibre Channel switch still apply to one handling FCoE traffic. As Chip mentions, one needs to look at that and take it into consideration. Some tools may not work as well as others with FCoE which is still maturing, so that will affect the decision of when and if to use FCoE vs Fibre Channel.

    I agree with your observation about conflation of FCoE and TRILL, and as far as that goes, conflation with DCB. Sometimes the industry analysts and commentators don’t always help on that score. So, it’s important to talk about all the technical details and implications as you and Chip are trying to do. It’s a good thing.

    I wasn’t sure what you meant by this comment:
    “Once we start relying on Ethernet forwarding rather than storage forwarding …”

    What did you mean by “storage forwarding” and how is that different from “Ethernet forwarding”?

    Regarding the comment about taking the great any-to-any communication capability of the LAN and eliminating it, sometimes the fact you can do something doesn’t mean you should, and certainly not blindly. That’s the point Chip and I were making. Any-to-any communication has value, but not if using it reduces the availability and reliability of existing storage networks. No customer would want that to happen.

    One last comment, it isn’t necessary to conclude that since Chip and I talk about options, that means we are forcing anyone to any conclusion. We aren’t. Instead, we want customers to be thoughtful and consider the implications of their choices. For all our customers, storage networks carry the life-blood of the enterprise, their data, so thoughtful design is critical to the business.


    • Disclosure: I work for Brocade. Opinions are my own.

      [J Metz] “I mean, sure, it’s great if you’re running FCoE (1 wire) instead of having FC *and* Ethernet (2 wires), but then you go back to creating an “air gap” (2 wires) but you don’t even have the connectivity between LAN segments that you had before you converged.”

      I just want to clarify the fact that FC today is already 2 wires, since 100% of customers deploy dual-redundant SAN fabrics (air-gap design) and dual-home all their servers with two (or multiples of two) HBAs, as well as their storage devices. The only exception is tape backup traffic since there is no such thing as multipathing for tape. So your “FC and Ethernet” scenario is really 3 wires and not 2, and by going to a dual-fabric design for FCoE you would still be reducing the number of wires by 33%.

      [Brook] “For all our customers, storage networks carry the life-blood of the enterprise, their data, so thoughtful design is critical to the business.”

      Exactly. Customers deploy dual-redundant SAN fabrics for a very good reason today: they’re betting their businesses on the reliable access of their applications to their data. Just ask any customer to deploy their mission critical applications in a single fabric, regardless of how many 9s of availability each individual piece of hardware has or how resilient you build your fabric. You know what the answer to that is going to be. Why would you think they’d give a different answer with FCoE?

      [Brad Hedlund] “Are customers deploying “air gaps” for NFS? How about 10GE iSCSI? No? Why not? Why should FCoE be any different?”

      How many customers are running their top tier applications, those that, if they go down, mean many millions of dollars of losses, or even going out of business, on these protocols? How many of the largest mainframes supporting the operations of some of the largest financial institutions in the world are running on iSCSI?

      Oh wait…

      [Brad Hedlund] “The fact is, it is possible to provide SAN A/B isolation for FCoE in one converged infrastructure.”

      It’s not about what’s possible from a technology point of view, because today it is perfectly possible to deploy a single fabric SAN and everything would work just fine. Or even provide SAN A/B “isolation” in a single fabric, using VSANs or Virtual Fabrics. How many customers are running those mission critical applications I just described earlier that way? Why do you honestly think they will be willing to sacrifice the ultimate isolation (the physical one) when going to FCoE?

      Other than that, I fully agree with you and Brook that choice is a good thing for customers, and that you’ve done an awesome job at clarifying the different approaches to multi-hop FCoE or even to what can be understood by the term multi-hop.


  4. The convergence train has left the station and it doesn’t look like everyone made it on board.

    You can’t have an “air gap” for FCoE without building two completely separate back-end networks just for FCoE, in addition to the IP network. That’s three 10GE networks and lots of 10GE adapters required in the server. At that point, why even bother with FCoE? Just stick with good old FC.. oh, wait.. maybe that’s the path somebody is trying to keep their customers on.. ;-)

    Are customers deploying “air gaps” for NFS? How about 10GE iSCSI? No? Why not? Why should FCoE be any different?

    The fact is, it is possible to provide SAN A/B isolation for FCoE in one converged infrastructure. Cisco’s FCoE platforms are capable of this and the FCoE best practice deployment guides show customers exactly how to do this. Furthermore, Tier 1 storage manufacturers (EMC, NetApp, HP, etc.) have sold and supported these designs for quite some time now.

    Cheers,
    Brad


  5. For some reason my comment got posted out of order… Sorry for the confusion this might cause. :)


  6. Folks,

    I, for one, have been troubled by the lack of market traction for FCoE solutions. Analysts report continued customer reliance on and growth of Fibre Channel for block storage SANs as each quarter’s sales data rolls in. I think we should ask “Why?” Shouting louder doesn’t alter the facts, it just makes it harder to hear.

    My hypothesis is that the early marketing hype that FCoE required a converged physical network is a likely reason. Said differently, if the customers spending the most money on traditional Fibre Channel aren’t buying the converged network story, maybe there’s a reason, and discussing an alternative FCoE deployment option makes sense.

    On the topic of NFS and dual networks, you will find storage admins who deploy the block storage network to the disk farm on a separate physical network from the TCP/IP file server network connected to the clients. There are good reasons for that decision. Likewise those who insist that the management network is “air gapped” from the TCP/IP network. There are good reasons for that.


    • Not to mention that best practices from most major iSCSI storage vendors require a completely separate (air-gapped) network for iSCSI storage. There are also good reasons for that.


  7. FCoE is about convergence. Fewer adapters, fewer switches, fewer cables, lower cost, doing more with less. The cheese has moved and these things take time to get sorted out, I fully understand that.

    If you’ve been the traditional FC switch vendor for a long time, it makes perfect sense that you would want to keep the cheese where its always been; the same separate FC(oE) network that you can continue to sell, and the same separate Ethernet network somebody else can sell.

    Other vendors however have been leaders in both FC and Ethernet networking for quite some time and are perfectly positioned to guide the customer through the evolution to convergence.

    The difference is clear.

    Cheers,
    Brad


  8. Great post J. Brook and Juan, SAN management tools (ours included) have tended to lag behind the “state of the art”. One example (only recently being addressed) has to do with the lack of visibility into N_Port Virtualizers (Brocade AG mode / Cisco NPV mode). Despite the management concerns introduced by these devices, they have proven to be very popular with our Customers because they solve a number of problems (e.g., the max Domain ID limitation and the FC-SW interop problem). The same could be said about FCoE today. A “single pane of glass” management tool that will allow one administrator to manage both LAN and SAN would be nice to have. However, seeing as how the majority of our Customers are still trying to come to grips with the business and organizational implications a converged network presents, I think the most important areas to focus on have to do with basic converged network topologies and features. In this area, FCoE is accelerating at a nearly overwhelming pace.

    In regards to the discussion about the “air gap requirement”, as with any new (and potentially disruptive) technology, you have to evaluate it from a risk/reward point of view. 15 years ago, when open systems host connectivity consisted of direct attached SCSI, the idea of doing block I/O over a fabric was anathema to many in the industry. Statements like “if you lose a switch, you’ll lose connectivity to storage from all of your servers” were commonplace. However, over time, these same people came to see the value a fabric provides, and concepts like “air gap” (SAN A / SAN B) topologies were introduced to address our customers’ concerns and to remove some of the risk from moving to a fabric topology.

    Over the years, the types of problems prevented by the SAN A / SAN B topology have proven to be “logical” in nature and not “physical”. By this I mean it is far more likely that a user will accidentally cause a loss of connectivity than a single switch will misbehave and take out an entire SAN. In those rare cases where I have seen a software bug take down a SAN, the problem has been isolated to that SAN and has not been propagated through something like a SAN Router (e.g., IVR/FCR).

    With all of this in mind, I think it is clear we need to maintain the SAN A / SAN B approach from a logical point of view. From a physical point of view, due to the availability of features like vPC and MCT, and given how attractive our customers find the concepts of active/active teaming/bonding enabled by vPC and MCT, I think it makes sense to make topologies available that collapse the air gap at the access layer while allowing for a logical SAN A / SAN B topology to exist. Will these topologies be appropriate for all environments? The answer is “of course not”! At the end of the day, we need to allow the customers to choose whichever topology best fits their needs. In this regard, I agree with all of the comments I have seen on this post so far.


  9. I’m not a bit surprised to hear that the deployment rate for FCoE is low; we’re at the front end of the adoption curve. One thing that I think has held a lot of people up is waiting for a true multihop implementation, and Cisco’s announcement yesterday should help alleviate that. Another hold up has been management’s collective ignorance about the protocol and its advantages, and once they learn more about it we’ll see FCoE mandated as a cost-saving measure. I’m sure there are a dozen objections to the technology ( just as there were for MPLS and any number of other technologies) but that won’t stop it from becoming a cornerstone of future data centers.

    Name a widely-accepted technology that didn’t have objections leveled at it when it first showed up.

    We’re just coming out of the “early adopter” stage of acceptance, and I think the industry will see a sudden surge of deployments as we move into the next phase very soon. Time marches on, and FCoE is nothing but another milestone in the progress of the data center. Railing against it is simply spitting into the wind.
