I was sitting in a room with a client the other day. Normally in these conference rooms with the mahogany tables and high-back leather chairs, you have Cisco on one side of the table and the client on the other. This time, however, the table was formica and the chairs were folding. Also in the room were two groups that had never spoken before except in rare cases: "The network is down!" or "Our hosts can't see their storage!" Yes my friends, it was the LAN and SAN folks in the room. The topic of FCoE was in front of us, and the question was about their soon-to-be-deployed Nexus 5000 switching infrastructure. The discussion between the two parties over who would manage the Nexus 5000 reminded me of a scene from Ghostbusters…
I spent two weeks over at the Ask the Expert forums, and I came to the realization that our customers are often bombarded with facts, figures, speeds, feeds, features, buzzwords, comparisons, and functionalities, and they're not sure which ones are must-haves, which are mere conveniences, and which they can live without entirely. So I figured I'd toss out what I think are the top features for building an MDS Storage Area Network. Some may be obvious; at others you might shake your head or light up the torches. They're not in any particular order, as your mileage may vary from mine. I'll probably skip the obvious ones like "hot-swap power supplies" and other oh-so-exciting abilities…
The first set I usually refer to as the holy trinity of features, as they constitute the foundation of the connectivity: VSANs, Port-Channels and TE Ports. They've been around literally forever on the platform and for good reason; they've been part of the hardware's DNA since its inception. Additionally, if you walk down the hall to the folks that manage your LAN, you'll find out that they're using pretty much the same concepts and features as you (VLANs, Port/Ether-Channels and Trunking, or 802.1q). So, if those guys are managing hundreds or thousands of switches and routers, there's probably something worthwhile here. There's also a pretty good chance that they are using them for the very same reasons that you are:
- VSANs: Isolation of fault domains.
- Port-Channels: High availability and load balancing across Inter-Switch Links (ISLs).
- TE_Ports: The ability to run multiple VSANs over the same ISL leveraging frames tagged with the VSAN ID and enforced in hardware.
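To make that trinity concrete, here is a minimal sketch of what the corresponding MDS NX-OS configuration might look like. This is illustrative only: every VSAN ID and interface name below is made up, and exact syntax varies by platform and release, so treat it as an outline rather than a recipe.

```
! Illustrative only -- VSAN IDs and interface names are invented.
! Define two VSANs (isolated fault domains) and assign a host port.
vsan database
  vsan 10 name PROD_A
  vsan 20 name BACKUP_A
  vsan 10 interface fc1/1

! Bundle two physical ISLs into one logical, load-balanced link.
interface port-channel 1
  switchport mode E
  switchport trunk mode on              ! E_Port + trunking = TE_Port
  switchport trunk allowed vsan 10
  switchport trunk allowed vsan add 20  ! both VSANs ride the same ISL

interface fc1/13-14
  channel-group 1
  no shutdown
```

The payoff is exactly the LAN analogy from above: the Port-Channel behaves like an EtherChannel, and the TE_Port tags each frame with its VSAN ID much as an 802.1q trunk tags VLANs.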
Next on my list is NPV mode, aka N_Port Virtualization. I grew up in the era of 16-port SAN switches, and like rabbits they multiplied, and so did their domains, and don't get me started on the upgrades… You had top-of-rack designs that involved dozens of small switches, and this tsunami of small switches was slowed by the emergence of high-density directors with hundreds of ports: first 128, then 256, now over 500. Lots of small switches met their demise. NPV offers the best of both worlds: the edge switch logs into the core fabric as if it were a host (an N_Port), so it consumes no domain ID and stays out of fabric-wide services, while still giving you port density at the top of the rack.
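For completeness, a hedged sketch of how NPV on the edge is typically paired with NPIV on the core. These commands are from memory, so verify them against your NX-OS release documentation; the reload caveat in the comments is real and worth planning around.

```
! On the core director: NPIV allows multiple FC logins (FLOGI/FDISC)
! through a single F_Port, which the NPV edge switch relies on.
feature npiv

! On the edge (top-of-rack) switch: NPV mode makes it log into the
! core as an N_Port, consuming no domain ID. Note that enabling NPV
! typically erases the running configuration and reloads the switch,
! so do this before configuring anything else.
feature npv
```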
For months people have been asking me what I’m doing, and it’s been difficult to hold something like this under my hat because I’ve been really excited about this.
How the time has flown. I joined Cisco in June of last year, and on my first day my manager told me that I was going to be working on a new product -- an FCoE blade for the MDS 9500 Series Fibre Channel Directors.
I was reminded this week of how much perception is driven by perspective. In this case, it was because of our advocacy of FCoE. I was exchanging messages with one individual who interpreted this as an attempt to undermine Fibre Channel (FC) and send it to an early grave. At the same time I was exchanging messages with someone else who felt we should not be wasting our time on FC and should instead spend more time and effort on IP-based storage. Needless to say, I found the contradiction entertaining, but I thought it might be worthwhile exploring these sentiments a bit.
“Doesn’t Cisco want to get rid of Fibre Channel?”
This one is easy: nothing could be further from the truth. We are committed to FC for the long haul because, simply, our customers are committed to FC. At the end of the day, in the enterprise, FC is still the standard against which other solutions will be judged for performance and availability. Even if customers make the decision to adopt IP-based storage, there is going to be a huge amount of data that's going to stay in the FC domain. It may stay put or be migrated slowly as part of a normal refresh, but the end result is that FC is not going away anytime soon. From our perspective, we will continue to invest in FC as long as our customers tell us it's important. Lest you doubt that, look at the updates to our Cisco MDS family over the last year, and also remember that we still sell gear with Token Ring interfaces.
“Why spend time on Fibre Channel protocols?”
This is a fine question. To paraphrase bank robber Willie Sutton, we're investing the time in FCoE because that's where the data is. One of our primary data center design tenets is a unified fabric at the access layer for its TCO and functional benefits. We are agnostic about how you do that, whether it's via IP-based storage or FCoE. From a practical perspective, as noted above, most enterprise customers' data is sitting in an FC domain, so any convergence strategy needs to take that into account. And while the storage folks may not care what we are doing at the server access layer, they are certainly not looking for their lives to be made any more complicated. Hence, we have FCoE.
At the end of the day, storage strategy shouldn’t be technology-dependent. The next-gen data center is going to need to support the ability of apps to grab data wherever it happens to be sitting: on IP-based storage, FC-based storage, or in a cloud somewhere, which is what we are ultimately helping our customers prepare for.
Selected from hundreds of entries from around the world, Cisco customers King County and Almaviva TSF met the stringent criteria defined by Computerworld, the Storage Networking Industry Association (SNIA), and Storage Networking World (SNW) for awards in the following categories:
1) Best Practices in Energy Efficiency, Green Computing and the Data Center:
King County -- Office of Information Resource Management (OIRM) -- Seattle, Washington
2) Best Practices in Virtualization and Cloud Computing:
Almaviva Tele Sistemi Ferroviari (TSF) -- Rome, Italy
About our customers:
King County, the 14th largest county in the United States, used the Nexus platform and MDS switches to build a highly efficient data center shared by all departments. To learn more about how they achieved a green environment, read here.
Almaviva Tele Sistemi Ferroviari (TSF) is one of the leading providers of ICT services to the transport and logistics industries in Italy. Alberto Giaccone, head of network operations at TSF, was present for the awards ceremony. To learn how TSF transformed its business model by deploying Cisco data center best practices, read here.