Cisco Blogs

A lightbulb and a SAN Director – Can you tell the difference?

In the last two days I have received more than a couple of emails (mostly from Cisco technical sales people) about some testing that our main storage networking competitor has done to show that the power consumption of our MDS 9500 director is higher than that of their current generation of products. Everybody was giving me tons of reasons why the comparison was not correct and how real-life environments differ from the test setup.

I thought about posting a very long explanation of the fallacies of such a test, with all of the technical details behind it, but then I realized that there may be an easier way to explain.

I am in the midst of a small home remodeling project and I have to make a couple of decisions on lighting for the backyard. Guess what? One of the key decisions is about choosing the light bulbs (it's a bit complicated, as the types of lighting are varied). I had the option of putting multiple small light bulbs in the backyard (they consume less power, don't they?), but that would be a less desirable choice. The efficiency of many small light bulbs is significantly lower than that of one large light bulb designed to light a broad space such as my backyard. To get the same light from multiple small bulbs distributed around the yard, I would have had to consume much more power overall. Also, with many small bulbs distributed around the yard, I would need to pull wires everywhere to make sure there is enough light in each corner, which creates another set of issues (installing and protecting the wiring infrastructure, etc.). I could go on forever about why this is wrong, but I think you have gotten the point by now.

I am not going to make the story too long, and we can certainly continue this discussion in the comments with more technically accurate arguments, but I hope this example makes the point. Can you tell the difference between a SAN Director and a light bulb now?
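The bulb argument above is easy to put in numbers. A minimal back-of-envelope sketch, where the target light level and the lumens-per-watt figures are purely illustrative assumptions, not measurements:

```python
# Back-of-envelope comparison: one large, efficient bulb versus many
# small, less efficient bulbs delivering the same total light.
# All numbers below are illustrative assumptions, not measured values.

TARGET_LUMENS = 10_000        # total light wanted for the yard (assumed)
LARGE_BULB_EFFICACY = 100     # lumens per watt, one large fixture (assumed)
SMALL_BULB_EFFICACY = 60      # lumens per watt, small bulbs (assumed)

large_bulb_watts = TARGET_LUMENS / LARGE_BULB_EFFICACY
small_bulbs_watts = TARGET_LUMENS / SMALL_BULB_EFFICACY

print(f"One large bulb:   {large_bulb_watts:.0f} W")
print(f"Many small bulbs: {small_bulbs_watts:.0f} W")
print(f"Distributed option draws "
      f"{small_bulbs_watts / large_bulb_watts - 1:.0%} more power")
```

With these assumed efficacies, the distributed option burns roughly two thirds more power for the same light, which is the whole point of the analogy: per-device wattage says little about solution-level efficiency.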

In an effort to keep conversations fresh, Cisco Blogs closes comments after 60 days. Please visit the Cisco Blogs hub page for the latest content.


  1. First of all, I want to apologize if my comment sounded like "wise words," as that was not my original intention at all. I wanted to be succinct and make my point in a way that was simple and clear. I will use this reply to offer more technical insight, and I will limit the use of metaphors.

The basic point of my original post is that power consumption is a property of complex systems and depends on the features and functionality that these systems provide. Also, if you look only at the individual power consumption of a single component within an end-to-end solution, you may be led to a sub-optimal choice instead of the best one, which will probably differ from situation to situation.

If I came up with one SAN design and claimed that as a real-life Data Center environment, I would just create further confusion. In SANs, and more generally in Data Centers, there is no one-size-fits-all; if there were, we would all be buying Data Centers in a box by now (see the posting "Can you stick a Data Center in a box?"). Despite this, and in an effort to be more precise about my original posting, let me share some key characteristics of today's SANs that in my opinion make them very different from what they were five or six years ago:

(1) SANs must behave like "true" networks

When I began to look into SANs back in 2000, I wondered why people called these things networks; their most common use was as multiplexers of large storage arrays or tapes. Only rarely did customers build layered network designs with multiple switches interconnected by ISLs (let alone interconnect SANs across geographical distances). Since then things have changed dramatically, and today SANs are much larger; it is not unusual to design a SAN with a few thousand ports, and we already have more than a couple of cases where the number of ports in the same SAN exceeds 10,000.
Security is a requirement, not a nice-to-have anymore; you cannot rely on physical security alone, but have to guarantee multiple levels of protection within the SAN. Scalability in terms of performance, port count, and number of logical networks (VSANs) is a primary concern. And these are just some examples.

You can see how the behavior of a SAN is closer to that of a LAN or a WAN than it has ever been. SANs can no longer be flat, non-scalable, layer-2 topologies built with a couple of switches. SANs look more and more like very specialized LANs (i.e. networks) designed to transport a unique type of traffic. It is not surprising that a core network switch for LAN architectures, such as the Catalyst 6500, has many features in common with the Cisco MDS 9500, a SAN Director. Despite many significant differences in internal architecture (the MDS 9500 was designed from scratch and is not an adaptation of the Catalyst 6500), they share a sophisticated hardware and software architecture, with a lot of embedded intelligence implemented through advanced ASIC technology and a multi-protocol, multi-layer Operating System (not only layer 2, but layers 2 through 7). All of this requires power to be done properly, but it allows you to build a larger, more robust, better integrated network, which in the end leads to higher efficiencies at the solution level.

(2) SANs must be multi-protocol

Fibre Channel remains king and nothing will displace it (a long topic in itself, discussed in the posting "Is the Fibre Channel killer born yet?"). Nonetheless, every SAN requires connectivity to other remote SANs, whether these are just buildings away, within metropolitan distances, or anywhere across the globe. Integration of optical transport and (more importantly) of IP technology for FCIP is key. iSCSI completes this picture, and even though adoption of that technology is not yet significant today, it should not be ignored for the future.
The ability to handle multiple protocols in the same architecture uses power (if you look at just one stand-alone switch), but it pays off when you look at the overall network design, as it does not require external appliances and allows for better power savings overall.

(3) SANs must be intelligent

More and more intelligence is moving into the network. 2008 will be the year of storage virtualization (I know you have heard this before, but let me say it at least once). The one thing I would add to make the statement a little more original is that I believe 2008 will be the year of network-hosted storage virtualization. Customers who have built their SANs with this in mind will have a tremendous competitive advantage, as they will be able to roll out storage virtualization with minimal to no impact on operations and without having to invest millions in forklift-upgrading their existing fabrics. If you want just one proof point of how intelligence is moving into the network, look at how successful SANTap-based solutions have been in the market so far. Obviously, the ability to embed intelligence in the fabric requires power, and it is not something you can do by retrofitting a fabric that was designed simply to forward frames; nor can you achieve the same benefits by just adding an external appliance to the network. You must have designed your fabric with this type of scalability in mind, as you will gain the most benefit only by enabling full integration of the functionality within the network itself.
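The solution-level accounting argued in points (2) and (3) can be sketched numerically. In the toy model below, a single multiprotocol director is compared against several standalone switches plus external protocol-gateway appliances; every device name and wattage is a hypothetical placeholder for illustration, not vendor data:

```python
# Solution-level power accounting: one integrated multiprotocol director
# versus standalone switches plus external gateway appliances.
# Every wattage below is a hypothetical placeholder, not vendor data.

def total_power(devices):
    """Sum the draw of every device in a solution, in watts."""
    return sum(watts for _name, watts in devices)

integrated = [
    ("director with FC + FCIP + iSCSI line cards", 1500),  # assumed
]

piecemeal = [
    ("FC fabric switch #1", 300),              # assumed
    ("FC fabric switch #2", 300),              # assumed
    ("FC fabric switch #3", 300),              # assumed
    ("external FCIP gateway appliance", 250),  # assumed
    ("external iSCSI gateway appliance", 250), # assumed
    ("extra ISL ports between switches", 150), # assumed
]

print(f"Integrated solution: {total_power(integrated)} W")
print(f"Piecemeal solution:  {total_power(piecemeal)} W")
```

The point of the sketch is not the specific totals, which are invented, but the method: power should be summed across every box the solution needs, including the appliances and inter-switch links that an integrated design avoids.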
The interesting thing about network-based storage virtualization is that the incremental power you consume in the network buys efficiency gains not only in comparison with less integrated solutions, but also in overall system requirements, as you are now able to further consolidate and virtualize extremely large and heterogeneous portions of your storage infrastructure (maybe a good topic for a separate post).

(4) Investments in SANs must be preserved

SANs are an essential part of any modern Data Center; people have had them in production for almost 10 years and nobody wants to go through a forklift upgrade at every technology refresh cycle. Technology evolves and innovation happens, but when you have to pay full price for it, I am not sure I would still call it "innovation." One of the things I am very proud of is that most of the chassis and line cards we shipped in 2003 with the MDS (i.e. the first ones off the line) are still in production, while newer models have been added alongside them. In the future you will see the MDS running 1, 2, 4, and even 8 Gb Fibre Channel in a chassis that is several years old, without having had to replace the chassis, the power supplies, and in many cases even the Supervisor Engine and control plane. Cisco is not new to this sort of best practice when it comes to designing platforms that protect customer investments: on our Catalyst 6500 switches, if we replaced our chassis and line cards at the same pace as our competitors, our annual waste would generate a pile of chassis about 53 miles high. This long-term outlook on environmental issues, corporate social responsibility, and, frankly, customer investment protection is something that should not go unnoticed.

The MDS 9500 Series has been designed with an internal architecture that is very modular and flexible, so that you get performance, flexibility, scalability, intelligence, and ultimately investment protection.
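The 53-mile figure quoted above can be sanity-checked with back-of-envelope arithmetic. The chassis height used below is an assumed round number for a large modular switch, not a spec value:

```python
# Sanity check on the "53-mile pile of chassis" claim.
# CHASSIS_HEIGHT_M is an assumption (a large modular chassis is
# roughly 0.6 m tall), not a published specification.

MILE_IN_METERS = 1609.344
PILE_HEIGHT_MILES = 53          # figure quoted in the post
CHASSIS_HEIGHT_M = 0.6          # assumed height of one chassis

pile_height_m = PILE_HEIGHT_MILES * MILE_IN_METERS
chassis_count = pile_height_m / CHASSIS_HEIGHT_M

print(f"Pile height: {pile_height_m:,.0f} m")
print(f"Implied chassis avoided per year: ~{chassis_count:,.0f}")
```

Under that assumption the claim implies on the order of 140,000 chassis per year that are refreshed in place rather than scrapped, which gives a sense of the scale being argued.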
If you factor all of these things in, measuring its power efficiency in a stand-alone configuration, while the device is simply forwarding frames, is clearly misleading. The same functionality could easily be achieved with a fabric switch, or a bunch of them, certainly a more power- and cost-efficient solution if all you want to do is forward Fibre Channel frames between local ports in a small SAN.

In conclusion, here is a list of just 10 things that you get from a Cisco MDS 9500 Series director, which use some of the power mentioned above and enable a growing majority of enterprise Data Center users to build SANs that meet the four requirements described above:

(1) Standard ANSI T11 VSANs implemented per port (no external appliance required)
(2) Inter-VSAN Routing per port (no external appliance required)
(3) Integrated multiprotocol support (FC, FCIP, iSCSI) in the same chassis
(4) Integrated, in-line, intelligent fabric applications (including SANTap and storage virtualization)
(5) Redundant crossbar access for maximum performance in normal and degraded conditions (maximum performance guaranteed even in case of supervisor failure)
(6) Quality of Service per Virtual Output Queue
(7) 528 ports in a single chassis (with the ability to scale beyond that in the future)
(8) Per-port bandwidth management and CRC checking
(9) Deterministic low latency, any-to-any port (true non-blocking architecture)
(10) True high availability, any-to-any port-channeling

I appreciate your patience if you have read this far, and I am sorry for those who decided to navigate away while reading. I hope this post helps you make the right decision in selecting a storage networking solution for your Data Centers. Comments are, as always, very welcome!

  2. Mr. Malagrino, Brocade had hoped that Cisco would be in attendance this week at Storage Decisions Chicago so we could have discussed the methods and results of the director power challenge both in real time and in public. Brocade is very interested in presenting only true, documented facts, and we specifically do not want to mislead anyone. We maintain our position that we will immediately withdraw any instance where Brocade material is incorrect once a documented, proven reference can be used in place of the documented references and actual test results that we currently use. No sources have ever been provided for the actual numbers from Cisco's EBC testing, nor has Cisco provided any results, references, or documentation to back the claim that competitive products do not have features typically enabled in a SAN environment to reduce power in solutions. Brocade's test results from the power challenge prove that Cisco's own documentation is within 5% of the actual results. I would like to propose a public broadcast, using Cisco WebEx, where both Cisco and Brocade can present the facts for their respective sides of the argument. I will prioritize and dedicate time at your earliest convenience to make sure both Cisco and Brocade can set the record straight on this topic.

Respectfully,
Mario Blandini
Director, Marketing
BROCADE

  3. As you said, "the comparison was not correct and real-life environments were different than the test setup." So why not do real-life testing comparing Cisco and Brocade in a "real-life environment," instead of using light-bulb "wise wording" that is not a "real-life environment" for a SAN? As outsiders, we are waiting for the result, so we can make a better decision based on "real-life" testing and "real-life environment" information.