
In 1950, Time magazine published an article recounting an apocryphal story about former U.S. Senator George Smathers (D-Florida):

According to the yarn, Smathers had a little speech for cracker voters, who were presumed not to know what the words meant except that they must be something bad. The speech went like this: “Are you aware that [opponent] Claude Pepper is known all over Washington as a shameless extrovert? Not only that, but this man is reliably reported to practice nepotism with his sister-in-law, and he has a sister who was once a thespian in wicked New York. Worst of all, it is an established fact that Mr. Pepper before his marriage habitually practiced celibacy.”

Brilliant! If you were in Pepper’s shoes, would you deny these types of charges? How can you face people who look at you with suspicion when the accusations are not only true, but describe things that can actually be the right things to do? (Especially in 1950.)

I love that story. Even though it’s false (Smathers reputedly offered $10,000 to anyone who could prove he said it, an offer that went unclaimed up to his death), it provides a cautionary tale of how someone can use an audience’s confusion against an opponent while still telling the truth at the same time.

What does this have to do with Converged Networks?

As I read through some of the recent blogs and articles over the past few weeks, I’ve been noticing an increasingly obvious attempt to apply this very Smathers Principle. A lot of the criticism that has been leveled against Cisco’s true Multihop FCoE has been done this way: there are truths to the criticism, but they rely on the audience “not to know what the words mean except that they must be something bad.”

Converged Networks

Cisco has been talking about Converged Networks for a long time now -- so much so that it is often accused of “going it alone.” Even a cursory glance at the list of vendors working on interoperability makes it obvious that there is an industry-wide initiative to provide customers with an Ethernet-based roadmap.

But what does “Converged Networks” actually mean?

See, from Cisco’s perspective, it’s about the goal of converging LAN and SAN onto a consolidated network infrastructure in toto.

“Yeah, yeah, yeah,” I hear you say. “Blah blah blah, FCoE. I get it.”

But think about this for a second. This means we’re not just talking about protocol frames, and not just talking about traffic. We’re also talking about carrying over the design principles, best practices, and tried-and-true methodologies that have made both LANs and SANs successful.

Let me repeat: One of the things that converged networks must do is preserve the best practices of both LAN and SAN designs! Otherwise you’re not converging, you’re annexing!

How do you do this? You ensure that your LAN designs retain the high availability that comes from any-to-any connectivity, flow control, and forwarding, and that your SAN designs maintain the SAN A/B separation, high availability, and flow control they require.

To do anything else is to subordinate one type of traffic to another.

Some people are completely happy with the idea of subordinating their lesser-understood networking brethren to an “also-ran”.

Take, for instance, this question taken from the blog “This is not the convergence you were looking for:”

More recently, Fibre Channel switch vendors have come up with another solution. Why not run a full Fibre Channel stack on every FCoE switch and make each hop a Fibre Channel Forwarder (FCF) obviating the need for Ethernet to provide the flow control and routing functions? This actually sounds like a very simple idea and addresses the problem of deploying multi-hop FCoE networks without the need for QCN.

But the question has to be asked: Is this really convergence? [Emphasis in the original]

Yes, Virginia, this really is convergence. The reason for it is simple: we are allowing the SAN designs to remain implemented as they have always been.

Mr. Munjal, the author of the blog above, continues to work the “also-ran” angle:

If Fibre Channel switch vendors say you need a layer 3 protocol to reliably transport storage over Ethernet, can we at least choose IP as that layer 3 protocol and maybe use iSCSI instead?

See what I mean? According to Mr. Munjal, storage admins should simply forget all of the Fibre Channel design practices they’ve used for years to get reliable, high-quality service out of their storage networks. (I’m also unsure about the dig at FC vendors: it’s not clear how someone might run any storage on Ethernet without it being at least Layer 3, but I guess that’s not important right now.)

Sit down, shut up, you’ll get your storage when it gets there. And you’ll like it.

By having a fully functional FCF (or FCoE NPV capability), as I’ve described in detail before, SAN admins get exactly what they’re used to: full visibility into their storage fabric at every stage of the network, with complete access to troubleshooting tools.

There are no black boxes that blind them to what’s going on with their FC traffic, security implementations are still in place, and there is no reliance on Ethernet-based forwarding mechanisms to replace what they’ve been doing. In other words, true Converged Networks means you do not have to sacrifice anything. You do not get punished just because you move to another transport mechanism.

So yes, Converged Networks actually do mean that you have to treat both networks equally; you can’t simply say that what’s good enough or appropriate for LAN designs will automatically transfer over to SAN designs. To do so is the height of arrogance and disrespect for years of best practice SAN design.

Not cool.

The Mononetwork

Can you actually say that you have a converged network if you are not accommodating the design principles for both networks? There seems to be this push for what I call a “Mononetwork:”

When you advocate a goal to subsume the needs and requirements of one network in favor of another, you’re pushing a Mononetwork.

When you try to treat an entire industry’s best practices as insignificant because your proposed solution is “good enough” (to you), you’re pushing a Mononetwork.

When you hope that you are getting the majority of the functionality without actually having to take into account the needs of the other, you’re pushing a Mononetwork.

In my experience those who are the most vocal against FCoE are those who have little respect for, or grasp of, the nature of resilient storage networks.

To them, a converged network isn’t really converged. It’s simply an Ethernet network that oh-by-the-way provides you access to FC storage. There is nothing wrong, they say, with simply borrowing some of the aspects of storage connectivity, but otherwise ignoring the fact that storage networking is actually more than just a network of storage connections.

They don’t care about SAN isolation/separation. They don’t care about maintaining Fibre Channel best practices, high availability, high performance, or troubleshooting mechanisms. They don’t care about offering customers implementation and deployment choices. While they simultaneously recommend their own rip-and-replace solution, they attempt to use this Smathers Principle to suggest that Cisco is somehow doing a Bad Thing™ by keeping consistency across LAN and SAN designs -- past, present, and future.

Let me give you another example. I’m afraid I’ve lost the source so I cannot properly cite the original, but someone forwarded me this particular criticism (different vendor than above):

Cisco announced their ability to transport FCoE packets across a Nexus 7000. However that ability cannot use the fabricpath [sic] protocol. Thus they offer a unified fabric for L2 Ethernet traffic, a different fabric for FCoE traffic, and yet another distinct fabric for L3 traffic. It is not clear what part of the term “unified” they do not understand.

Wow.

It’s difficult to know where to begin with this. The Smathers Principle is in full effect here, as this little bit of misdirection is true (up until the last sentence, that is; I’m not really sure what exactly they mean by an L3 fabric, to be honest, but I’m willing to give them the benefit of the doubt for the sake of argument).

Apparently our anonymous author has mistaken the notion of “unified” in the same way Mr. Munjal did, albeit with a different execution. That is to say, yes, these statements are true for the most part, but they are said in a way that is supposed to make people think this is a bad thing.

We maintain that it is important to keep the OSI Layers separate (they are separated on the OSI model for a reason). It’s not clear what benefit the author believes Cisco will gain by breaking years of well-understood design principles and known interoperability.

The key thing to “unified,” or “converged,” is the fact that you can do all of these fabrics using the same equipment and infrastructure.

Doh! Wait a minute! You’re telling me you can unify these different topology scenarios onto one physical infrastructure? You mean that it’s completely interoperable with the exact same design principles and best practices that have become well-understood in both the LAN and SAN industries?

Sorry, my friend. It does not appear that Cisco is the one who has a misunderstanding of what the phrase “unified” or “converged” means.

Summary

If you don’t take into account the design principles of both network infrastructures, you cannot claim to have a converged network. At best, you are merely subordinating one for the sake of the other. This is not converged networking; this is creating a Mononetwork in the hope that you are getting the majority of the functionality without actually having to take into account the needs of the other.

If we take the lesson from the Smathers legend, we can see that we need to understand more than just the fact that something is said in a manner implying it must be bad or undesirable. Most people I know want to take what they have and move forward, not replace the systems they have in place with a Mononetwork that merely approximates the end result but ignores the rest.

The key, as I’ve said before, is to provide choice without forcing people to choose. That is what Converged Networks do, and Mononetworks don’t.


19 Comments.


  1. Dmitri Kalintsev

    J,

    Great post, thanks!

    Google points to the following URL for the quote you’ve included: http://forums.juniper.net/t5/Architecting-the-Network/When-is-a-Fabric-not-a-Fabric/ba-p/84022, author being Andy Ingram.


  2. Erwin van Londen

    Hi J,

    Funny to see that Brocade didn’t participate in the plugfest, and neither did the major storage companies besides NetApp (EMC, IBM, HDS, DELL and all the others were absent).

    Let’s assume a hypothetical case where a customer has implemented FCoE and runs a Cisco networking environment and Brocade FCoE gear. Does Cisco formally support that, and does Brocade do the same thing vice versa?

    Don’t try to obscure the real-life scenarios with marketing mumbo/jumbo. We both know that the FCoE protocol itself is open, but when it comes to implementation all connectivity vendors try to crank in market differentiators which are not interoperable with the rest of the market. We saw this when Cisco came out with VSANs, which nobody had any idea how to work with. If you had a Qlogic, Brocade or any other switch, all of a sudden you lost interoperability and could throw the stuff away.

    I.e., once you’ve made your decision for a vendor you’re hooked, no matter how many plugfests you have. I’m sure you can hook up an Emulex CNA to a Nexus, but try to do the same thing with a Brocade switch and a Nexus whilst maintaining all functionality of both switches, and get formal, public statements from both Cisco and Brocade (and all the others as well) that from a post-sales perspective this is all supported, so that when customers log calls they are not sent to lala-land with excuses that it isn’t supported.

    I’m glad we agree to disagree. :-)


  3. Hi Erwin,

    Thanks for joining the party.

    I must confess that I’ve read your message a few times now, and I’m not entirely sure where we ‘agree to disagree,’ as you haven’t actually mentioned anything with respect to the blog post itself. It seems to me that you simply took the opportunity to express some frustrations you have with the industry itself.

    Nevertheless, I don’t think it’s as simple as “marketing mumbo/jumbo.” That seems like the intellectual equivalent of “Nuh-uh!” and, quite frankly, I believe that does a disservice to those who have legitimate questions as to what we mean when we say “Converged.” After all, it’s one thing to have a debate if you’re using the same starting point, but when HP and Juniper (as cited in this article) are indicating that Cisco means something that we don’t, it deserves to be called out.

    For what it’s worth, I do agree that interoperability remains a key issue. It seems your frustration lies primarily with Brocade’s decision to block the interoperability, not Cisco’s. The one example you cite – VSANs – is a feature addition to the way storage networks work, not a deliberate blockage that results in a feature reduction. It may be appropriate to rail against the latter, but to object to the former would be to stifle any innovation because it doesn’t cater to the lowest common denominator.

    Your position winds up being untenable when you start to simultaneously complain that vendors don’t share all the same functionality and yet complain that they don’t innovate or solve problems with any view to the future. They are simply mutually exclusive positions: you must pick one.

    Now, I cannot speak for Brocade (obviously!) and cannot predict what they will or will not do in the future. But as closely as I have been working with FCoE, Fibre Channel and DCB plug-fests, and Cisco’s Engineering teams, I can state unequivocally that interoperability has been a constant goal for us. Current plug-fests are under NDA so I cannot speak to who is or is not participating at this point in time, but I can say that new vendors are being added with each iteration.

    To return to the original subject of the post, from which your comment deviated greatly, the focus should be on what choices customers want to implement. Do they want to rip and replace and put in a completely new system? Do they want to give up the designs that they’ve been using for new ones? Or do they want to continue with their best practices?

    I don’t think these are marketing “mumbo/jumbo” questions. I think these are legitimate architectural questions that should be asked, and they should be answered honestly.

    J


  4. Erwin van Londen

    Hi J,

    Seems we disagree to disagree. Does this mean we agree?? :-)

    Now, don’t get me wrong: I have nothing against any vendor, nor am I frustrated by developments from any vendor. You referenced an article on the FCIA website which you summarized as an “industry wide initiative”; however, I only see the connectivity vendors (plus NetApp and HP), and one of the major players (Brocade) is missing (not Cisco’s fault, but it undercuts the “industry wide” statement). I cannot classify this as an “industry wide initiative,” all the more so because all the major storage vendors are also missing.

    If interoperability is such a big thing in FCoE it shouldn’t be a problem to get a formal statement from all the connectivity vendors that customers can connect anything to everything irrespective of vendor or equipment. Do you dare to publish such a statement???

    Your argument on innovation also falls flat as soon as these innovations result in vendor lock-in, which more or less puts customers in a jail cell. What should have been done in the VSAN case is the creation of an open standard on VSANs/Virtual Fabrics or whatever you call it. But that’s done and dusted, so let’s forget that. I just mentioned it as an example.

    I agree 100% that customers have to keep their best practices, designs and procedures, but as we’ve discussed before, when you combine two technologies with completely different mindsets and requirements, as networking and storage by definition have, this will become a problem in many if not all organizations, and one or the other has to give way. There are many restrictions in FCoE.

    I think the industry would have done the world a better service if it had found a scalable, resilient and easy-to-manage replacement for SCSI instead of trying to encapsulate a three-decade-old protocol into a two-decade-old protocol into a two-year-old protocol, but some people would argue that’s trying to boil the ocean.

    It’s obvious that we’re on different sides of the fence. You evangelize FCoE whilst I see on a daily basis all troubles that customers have with fibre channel only even without the additional complexity of FCoE. If we would add the FCoE part I’m very certain this will increase.

    BTW. Any plans to come downunder? We should have dinner then. Let me know.

    Regards,
    Erwin


  5. Hey Erwin,

    You raise several great points. As I have mentioned over and over again, FCoE and Converged Networks are not, and should never be, considered a panacea for the problems of the Data Center. There is no question that there is no one-size-fits-all solution that will fix every issue that customers have (interoperability or otherwise) and keep things that way into the future.

    Thing is, you raise a lot of issues within the Data Center. There’s no way that I could possibly address them all in a single blog post (well, and expect people to actually read it!). On the issue of vendor lock-in, Joe Onisick wrote a brilliant essay on the subject, and I tend to share his perspective. It seems to me that if a vendor’s solutions are working for a customer and there is a continued value exchange between the vendor’s offerings and the customer’s needs, it’s questionable as to whether or not it truly constitutes a “lock-in.”

    For instance, it’s been my experience that many administrators tend to have a single vendor for backbone infrastructures because it’s too much of a PITA to try to troubleshoot across vendors at 3 a.m. when something goes wrong. But, that’s just my perspective, so take it for what it’s worth.

    As far as interoperability goes, I have asked Brocade that same question point-blank on another blog, but have not received any response. But then again, is it really a fair question to ask? I mean, is it fair to ask any one vendor to try to anticipate what each and every other vendor is doing or going to be doing? It seems to me that one of the reasons why people look to standards in the first place is because we can say, “Yes, we adhere to the standard and interoperate seamlessly with the standard way of solving that problem.” This, to me at least, seems to be a reasonable way of holding each vendor accountable for interoperability.

    You’re correct that I disagree with you about the introduction of complexity that you assert FCoE brings to the table. The entire point about FCoE is that it is both Fibre Channel and Ethernet, keeping the same principles and rules of both (hence the purpose of this blog, for instance). While there are advantages and disadvantages to the FC approach to storage, FCoE introduces nothing new into that debate, and therefore it is a wash from the protocol’s perspective.

    Your point is well taken about the limits of SCSI as a foundation, of course. The issues with SCSI are widely known and often debated (especially by people on the Ethernet side of the house). But scolding vendors for not coming up with something new and different from the protocol that has been used for 30 years, as you say, is also not necessarily fair.

    For instance, we have had additional approaches to storage during this time. We’ve had different interconnection approaches: PCIe over Ethernet, ATA over Ethernet, even InfiniBand over Ethernet (or just ignore the “over Ethernet” altogether). How interoperable are they with how data centers operate today? Talk about your Rip and Replace! New tools, new equipment, new management, and often new people.

    That, too, is not a Converged Network. That is a Mononetwork, and new best practices and design principles will have to be developed for that as well.

    A technology can only be as brilliant as it can be implemented, or will be implemented. One of the reasons why I get excited by this notion is that FCoE interoperates with existing DCs as well as new ones, that you can “bolt on” the technology and don’t have to do an intellectual refresh, let alone an equipment refresh.

    As far as coming to Aus, I hope to do that one day. I’ve never been, but I look forward to it. Besides, there’s great swing dancing in Sydney and Melbourne (Perth too, but I don’t think you’re there). Save me a seat at the dinner table. :)


  6. Phillip Ulberg

    So, I’ve learned quite a bit from these posts and comments, but I still find myself on the FCoE/iSCSI fence. I am fortunate that my interest comes from being able to “greenfield” a new DC deployment, and that I don’t have to deal with any legacy infrastructure.

    We will be deploying UCS blade infrastructure, and will most likely go with 10G iSCSI due to the choice of SAN/NAS vendor (BlueArc; they have a niche in the litigation support field in which I work).

    I see all of the FCoE benefits, but I think it’s trying to solve a problem I don’t have, and that for my scenario 10G iSCSI will still reap the benefits of cable/port consolidation, industry-standard Ethernet, and ease of deployment.

    Phillip


    • Hi Phillip,

      Great to have the opportunity to build a DC from the ground up. I’ve had similar experiences in the past, and it can be very rewarding.

      I see you’re making the move to 10G Ethernet and running iSCSI over that. That is a different story from the one we’re discussing over here.

      I would be interested in your experiences when you’re done and how things perform.

      Regards,
      Erwin


  7. Hi J,

    Sorry, am a bit late with responding. Had to take a Cisco training. (Not FCoE :-))

    I don’t disagree with a converged network; however, I think FCoE is the wrong vehicle for it. We’ve had block storage (SCSI) and TCP/IP (and a dozen other protocols) running over Fibre Channel since the beginning of time. What we’re doing now is adding another layer of complexity.

    As for interoperability, you know it is very often an extreme hassle, especially when blade systems are involved, since these come with a certain switch or gateway from a certain vendor. This means that all OEMs have to extend their interop tests, which takes a lot of time and money. I don’t know what Brocade’s or Cisco’s official policies are, but I do know that both companies get goosebumps when OEMs log TAC or Brocade support cases where multiple vendors are involved. Adding complexity with FCoE in a multivendor environment will make this almost unworkable.

    You mention that you disagree that FCoE adds complexity, but I don’t see this from an operational perspective. We currently have two solid networks in the datacenter (Ethernet and FC) which have by nature very distinct differences. The industry (mostly Cisco pulling the bandwagon) is now trying to glue these two together. For me it’s impossible that when you add stuff (the FCoE protocol in this case) to two different objects (Ethernet & FC) you reduce complexity. The only things you reduce are PCI slots in a server and, slightly, your cabling. On my blog I’ve already shown there is absolutely no reduction in power and cooling, so that’s a no-go.

    From a tactical, operational and technical perspective you add more overhead.

    Let’s take an example:
    Lets say the datacentre currently runs a converged network and the networking team want to change from STP to MSTP or RSTP for whatever reason. Can they do that without disruption on the storage side? No they can’t. Each and every piece of FCoE kit that is connected to the Ethernet has to be taken offline, the change made, and then brought back online again. Very nasty things happen when they do it without informing the other party. The storage teams will be very busy restoring data from tape when this happens. I’ve seen examples over and over again where network admins make changes on a WAN link which also carries FCIP traffic for remote-copy purposes. All of a sudden the respective vendors get called to explain why the remote-copy mechanism doesn’t work anymore. After analyzing the logs it turns out there have been changes on that WAN link. Now you might argue this is a procedural problem, but the fact of the matter is these things happen. If it happens on a converged network the consequences will be even more problematic.

    Another example: a manufacturer comes out with new firmware on either side of the FCF, or for the FCF itself, because of defects/bugs, functionality and what have ya. Can this upgrade be done non-disruptively? No: both Cisco NX-OS and Brocade FOS on their respective FCoE platforms do NOT offer non-disruptive upgrades. Now this may change over time, but when I query the Cisco bug list and the Brocade defects list I’m 100% sure that when these two technologies are glued together, not only will both networks be subject to each other’s problems, but new problems will also be introduced when there are bugs in the gluing mechanism called FCoE.

    What I would like to see is for all the FCoE “yea sayers” to be available 24×7 and join a customer escalation call. These calls are very nasty if you’re on either side of the fence.

    Anyway, I’ll leave it at that. I hope customers will make the right decision for themselves and accept all risks involved. I know what I would select if my data was valuable to me and you can imagine it’s not FCoE.

    As for Down Under. I’m in Melbourne but my office is in Sydney so let me know if you’re planning coming over. We’ll eat some Multihop Kangaroo. :-)

    Cheers,
    Erwin


    • Hey Erwin,

      I see you’ve been busy. :)

      You’ve got a lot of things here, so forgive me if I don’t catch them all, but I’ll try to answer some of your questions the best I can.

      Complexity. See, one of the things that I like about FCoE is how brilliantly simple it is. You don’t have the overhead of IP addressing or TCP windowing to maintain flow control. You don’t have to deal with the end-to-end congestion notification that proponents of DCB Lossless Bridges say you do. It is a very simple encapsulation of the FC frame without any of the ULP issues that come from non-deterministic systems. I love the elegance in the system that does not emulate the functionality of another to get the same functionality. To me, that’s an exercise in simplicity.

      You bring up your blog here, and I’ve read it, but I’m afraid there are many inaccuracies that undermine your argument. Your insistence that there is no reduction in power is simply demonstrably false. You stated that you can’t find the power but I have seen the data sheets from both ELX and QLGC and have seen that we’re talking sub-10watts for even the highest consumption cards. In fact, they can go as low as sub-6 watts. Comparing 16 NICs and 2 HBAs per server against two CNAs on a watts-per-Gbps basis is a no-brainer. Add to that the power differential between ~0.1 watt for a TwinAx cable and ~16 watts for Cat 6a, and you’re talking massive power savings that can save customers thousands of dollars (pounds, Euros, etc.) per year on power and cooling. I even wrote about this long before I joined Cisco. Sorry, but your figures are just wrong there.
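
      To put rough numbers on that, here is a quick back-of-envelope sketch. The per-adapter wattages and link speeds below (the 1G NIC, 4G HBA and 10G CNA figures) are illustrative assumptions, not data-sheet values; the only numbers taken from the discussion above are the sub-10-watt CNA figure and the ~0.1 W TwinAx versus ~16 W Cat 6a cabling differential.

        # Back-of-envelope watts-per-Gbps comparison for a single server.
        # Adapter wattages and speeds are illustrative assumptions, not data-sheet values.
        nic_ports, nic_watts, nic_gbps = 16, 4.0, 1.0   # legacy 1GbE NICs (assumed ~4 W each)
        hba_ports, hba_watts, hba_gbps = 2, 8.0, 4.0    # legacy 4G FC HBAs (assumed ~8 W each)
        cna_ports, cna_watts, cna_gbps = 2, 10.0, 10.0  # 10G CNAs, using the "sub-10 watt" figure

        legacy_watts = nic_ports * nic_watts + hba_ports * hba_watts
        legacy_gbps = nic_ports * nic_gbps + hba_ports * hba_gbps
        converged_watts = cna_ports * cna_watts
        converged_gbps = cna_ports * cna_gbps

        print(f"legacy    : {legacy_watts:5.1f} W / {legacy_gbps:4.0f} Gbps = {legacy_watts / legacy_gbps:.2f} W per Gbps")
        print(f"converged : {converged_watts:5.1f} W / {converged_gbps:4.0f} Gbps = {converged_watts / converged_gbps:.2f} W per Gbps")

        # Cabling differential cited above: ~0.1 W per TwinAx (SFP+ direct-attach) link
        # versus ~16 W per Cat 6a (10GBASE-T) port.
        twinax_watts, cat6a_watts = 0.1, 16.0
        print(f"cabling   : {cat6a_watts - twinax_watts:.1f} W saved per 10G link on TwinAx vs. Cat 6a")

      Plug in the numbers from your own data sheets; the point is the watts-per-Gbps comparison, not the exact figures.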

      Operations. You state: “Lets say the datacentre currently runs a converged network and the networking team want to change from STP to MSTP or RSTP for whatever reason. Can they do that without disruption on the storage side? No they can’t.”

      Again, your assertion is demonstrably wrong. I’ve mentioned elsewhere that you run FCoE separately from the LAN, and STP is not run on FCoE VLANs. You run FSPF on FCoE VLANs, and as a result Ethernet L2 configuration changes do not affect the FCoE traffic. On the Nexus 7k you have additional configuration and management separation from the LAN side with the use of Storage VDCs. Again, your assertion is simply wrong. I love ya man, but you’re wrong. :)

      Non-Disruptive Upgrades. Again, I hate to sound like a broken record, but you are wrong here too when you state that Cisco “does not offer non-disruptive upgrades” on their FCoE platforms. The Nexus 7k and MDS FCoE platforms are Director-Class, and will have ISSU capability at FCS.

      You haven’t mentioned it outright, but I’ll go one step further. What about management? What about the SAN admins and the LAN admins having the potential to mess up each other’s configs? The Cisco method, as indicated in this post, is to ensure that this does not happen. Not only does Cisco implement Role-Based Access Control (RBAC), but the environments are kept completely separate on the Nexus 7000 side with the use of Storage VDCs. Because the OS is the same across the N5k/N7k/MDS, storage admins have consistent tools to manage their SANs across the platforms seamlessly.

      If customers want additional separation for operational control, they should look at Data Center Network Manager (DCNM) – for both the LAN side and the SAN side. Role-based access controls, complete visibility end-to-end for both (or either) side of the Data Center, and automatic topology building. FCoE wizards for making life easier to provision and deploy, not to mention hooks into third-party APIs. Seriously, you should look at this stuff. It’s incredible how straight-forward and easy it is to get almost any information and access to tools that you need using these packages.

      Nevertheless, and once again I will state it outright: I agree with you that this is not applicable to all customers in all situations. But I will tell you that those who have been working through the early field trials of Multihop FCoE are loving it. They love the fact that it’s much easier to configure and troubleshoot, not more difficult as you allege. They love the fact that they’re getting the power savings you claim don’t exist; their electricity bills show otherwise. And this is even before all the features have been included.

      Thing is Erwin, you’ve been very vocal about your distrust of FCoE but as far as I can tell it’s because you’ve been going through thought experiments for the most part. Sure, there will be troubleshooting cases and issues that need to be resolved because no customer’s Data Center is “textbook.” But when you start making claims that are just flat-out disprovable, I’m not sure that this helps people make a well-informed decision.

      Each customer should be able to make their decision based upon their needs and a full disclosure of the choices, options, limitations and consequences of those needs, just like Phillip, who posted here. What I want is for customers to be able to make the decision with all the facts, and if that decision happens to be iSCSI for block, or NFS for NAS, go for it. The last thing I want is for someone to implement a solution and then have buyer’s remorse because they were promised something that didn’t work for them.

      It just so happens that I think that for many customers – and I’ve been talking to many many customers lately – FCoE can very well be a long-term solution for the evolving Data Center.

      J

      ps. “Multihop Kangaroo.” That’s funny. :)


      • Hi J.

        :-) You’ve been busy too. How was Vienna?

        I just read some release notes from NX-OS and many upgrades are disruptive irrespective of switch class. Don’t worry, Cisco isn’t the only one. :-)

        I just mentioned STP as an example. Cisco may have a different implementation of how they handle the FCoE VLANs. The point I was trying to make was that if a network admin needs to make changes affecting the underlying Ethernet stack, all traffic will be disrupted. If the CEE maps need to change (for whatever reason), how do you handle that? Even on the FCoE side you need a new login to obtain the new map. As far as I know this is not dynamic.

        The power and cooling example was a Brocade example. I haven’t gone through the Cisco specs comparing an MDS and a Nexus, so these might differ. I’ll have a look into that.

        I still don’t agree with you about complexity. The bare-bones FCoE stack may be simple, but the usual complexity of both the normal TCP/IP LAN and FC storage still exists. You can’t deny that when you glue these together you get a combined complexity which needs multiple people with different expertise to look at it, from both a configuration and a troubleshooting perspective. This also propagates to everything in the organization, from change control to operations, maintenance, etc.

        As for stability, I’ve become aware of two occasions where network admins made changes in a converged network which caused an outage on both the FCoE and non-FCoE environments. Numerous man-hours had to be spent on data recovery and integrity verification. Sorry, I can’t disclose any details, as you can imagine.

        Your comment: “The last thing I want is for someone to implement a solution and then have buyer’s remorse because they were promised something that didn’t work for them.” I do fully agree.

        Anyway, I think we’ve both made our point and we have our different opinions. I think the customer should decide for himself.

        Until the next #storagebeers.

        Cheers,
        Erwin


  8. Dmitri Kalintsev

    Erwin,

    > the networking team want to change from STP to MSTP or RSTP for whatever reason. Can they do that without disruption on the storage side? No they can’t.

    Please excuse my ignorance, but isn’t the FCoE VLAN excluded from the xSTP management domain? I thought that in a true multihop FCoE set-up the FCoE VLAN is used strictly point-to-point between switches with FCFs, and all path and topology management for FCoE traffic is taken care of by FC (FSPF).

    J, any comments?

    Cheers,
    – Dmitri


  9. Your insistence that there is no reduction in power is simply demonstrably false. You stated that you can’t find the power but I have seen the data sheets from both ELX and QLGC and have seen that we’re talking sub-10watts for even the highest consumption cards.

    You evangelize FCoE whilst I see on a daily basis all troubles that customers have with fibre channel only even without the additional complexity of FCoE. If we would add the FCoE part I’m very certain this will increase.


    • I have always said that FCoE (and FC for that matter) were not applicable in all situations. I’m having trouble, however, tying your comment to something specific that I’ve written or said in person, so I’m afraid I don’t know what you are referring to specifically. Thanks for taking the time to read my humble little post, however. I hope you’ll continue to visit.


      • I agree with you, FCoE is not applicable in this case.

        I work for OVH, a French hosting company operating in France and Europe, and we are working on that subject too.

        Your blog is really interesting, but I have some difficulty reading it because of my low level of English.


  10. Which is the best network between Converged Networks and Mononetworks? Can anyone tell me? I will be grateful for the correct answer.


    • I’m afraid there is no easy answer to your question. Networks should be designed to solve particular needs and problems. There is no “one-size fits all” approach that can allow us to simply point and say, “That one. Use that one.”

