It may sound strange to hear me say it, but when I wrote the previous blog post about Dynamic FCoE I thought it might get a little blip of attention and then be filed away as an "oh, that's cool" factoid about Cisco's storage portfolio. Perhaps I shouldn't have been so nonchalant, but I confess I was not expecting the number of questions that I (and other speakers at CiscoLive back in May) have been getting about the technology.
Many questions – including some in the comments on the previous blog – have indicated a strong desire to know more, and they have been excellent and well thought out. I'm going to try to address some of them in a deeper-dive blog whenever I can, in the hopes of addressing some of the concerns and clarifying some points.
We'll start with one of the biggest concerns – sharing the spine layer for logical separation of SAN A/B, and what happens if one of the spine switches (nodes) goes offline. Read More »
Tags: Clos architectures, Converged I/O, Dynamic FCoE, FabricPath, FCoE, Fibre Channel, Load Balancing, Nexus 5500, Nexus 5600, Nexus 6000, Storage, Storage Networking
Note: This is the second of a three-part series on Next Generation Data Center Design with the MDS 9700; learn how customers can deploy scalable SAN networks that allow them to scale up or scale out in a non-disruptive way. [ Part 1 | Part 3 ]
EMC World was wonderful. It was gratifying to meet industry professionals, listen in on great presentations, and watch the demos for key business-enabling technologies that Cisco, EMC, and others have brought to fruition. It's fascinating to see the data center's transition from cost center to strategic business driver. The same story repeated itself at Cisco Live: more than 25,000 attendees and hundreds of demos and sessions. There were many interesting customer meetings, and MDS continues to resonate. We were excited about the MDS hardware on display on the show floor, the multiprotocol demo, and a number of compelling SAN sessions.
Beyond these events, we recently delivered a webinar on how the Cisco MDS 9710 enables high-performance data center design, with customer case studies. You can listen to it here.
So let's continue our discussion. When it comes to high-performance SAN switches, there is no doubt that nothing compares to the Cisco MDS 9710. Another component that is paramount to good data center design is high availability. Massive virtualization, data center consolidation, and the ability to deploy more and more applications on powerful multi-core CPUs have increased the risk profile within the data center. These trends require a renewed focus on availability, and the MDS 9710 is leading the innovation there again. Hardware design and architecture have to guarantee high availability, but it's not just about hardware: it's a holistic approach spanning hardware, software, management, and the right architecture. Let me give you just a few examples of the first three pillars for high reliability and availability.
The MDS 9710 is the only director in the industry that provides hardware redundancy on all critical components of the switch, including the fabric cards. Cisco director switches provide not only CRC checks but also the ability to drop corrupted frames; without that ability, the network infrastructure exposes end devices to corrupted frames. The ability to drop frames that fail CRC and to quickly isolate failing links, both outside and inside the director, provides data integrity and fault resiliency. VSANs allow fault isolation, Port Channels provide smaller failure domains, and DCNM provides a rich feature set for higher availability and redundancy. These are just a subset of the capabilities that provide high resiliency and reliability.
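To make the VSAN fault-isolation point concrete, here is a minimal NX-OS configuration sketch. The VSAN numbers, names, and interface assignments are illustrative, not taken from any particular deployment; the idea is simply that a fabric-level problem in one VSAN cannot disturb devices in the other.

```
! Hypothetical sketch: two VSANs carve one physical switch into two
! logically isolated fabrics. A fabric disruption (e.g., a zoning or
! fabric-services fault) in VSAN 10 does not affect VSAN 20.
vsan database
  vsan 10 name FABRIC_A
  vsan 20 name FABRIC_B
  vsan 10 interface fc1/1
  vsan 20 interface fc2/1
```

Each VSAN runs its own independent instance of fabric services (zoning, name server, and so on), which is what makes the failure domains separate.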
We are proud of the 9500 family and the strong foundation for reliability and availability it established, and we have taken that to a completely new level with the 9710. For any design within the data center, high availability has to go hand in hand with consistent performance; one without the other doesn't make sense. The right design and architecture are as important as the components that power the connectivity. As an example, Cisco recommends that customers distribute the ISL ports of a Port Channel across multiple line cards and multiple ASICs. This spreads the failure domain so that an ASIC or even a line-card failure will not impact port-channel connectivity between switches, and there is no need to re-initiate host logins. You can read the white paper on the next-generation Cisco MDS here. As part of writing this white paper, ESG tested fabric card redundancy (page 9) in addition to other features of the platform. Remember that a chain is only as strong as its weakest link.
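The port-channel recommendation above can be sketched in NX-OS configuration. This is a hedged example: the interface numbers, slots, and channel-group number are hypothetical, and the actual ASIC port-group boundaries depend on the line-card model, so consult the platform documentation before choosing members.

```
! Hypothetical sketch: a 4-member ISL port channel whose members span
! two line cards (slots 1 and 2) and different port groups on each card,
! so no single ASIC or line-card failure can take down the ISL.
interface port-channel 10
  switchport mode E

interface fc1/1, fc1/25, fc2/1, fc2/25
  switchport mode E
  channel-group 10 force
  no shutdown
```

With membership spread this way, a line-card failure reduces the port channel's bandwidth but leaves it up, so attached hosts keep their fabric logins.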
The most important aspect of all of this is for customers to be educated.
Ask the right questions. Have in-depth discussions about how to achieve higher availability and consistent performance. Most importantly, selecting the right equipment, the right architecture, and the right best practices means no surprises.
In the next part, we will continue our discussion with the flexibility aspect of the MDS 9710.
– "We are what we repeatedly do. Excellence, then, is not an act, but a habit." (Aristotle)
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, availability, best practices, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, MDS design, nexus, NX-OS, reliability, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization
This week has been the semi-annual OpenStack Summit in Atlanta, GA. In a rare occurrence I’ve been able to be here as an attendee, which has given me wide insight into a world of Open Source development I rarely get to see outside of some interpersonal conversations with DevOps people. (If you’re not sure what OpenStack is, or what the difference is between it and OpenFlow, OpenDaylight, etc., you may want to read an earlier blog I wrote that explains it in plain English).
On the first day of the conference there was an "Ask the Experts" session focused on storage. Since I've been trying to work my way into this world of programmability via my experience with storage and storage networking, I figured it would be an excellent place to start. Also, it was the first session of the conference.
During the course of the Q&A, John Griffith, the Project Technical Lead (PTL) of the Cinder project (Cinder is the core project within OpenStack that deals with block storage), happened to mention that he believed Cinder represented software-defined storage as a practical application of the concept.
I’m afraid I have to respectfully disagree. At least, I would hesitate to give it that kind of association yet. Read More »
Tags: open source, OpenStack, programmability, SDN, SDS, Storage, storage networks
Note: This is the first of a three-part series on Next Generation Data Center Design with the MDS 9700; learn how customers can deploy scalable SAN networks that allow them to scale up or scale out in a non-disruptive way. [ Part 2 | Part 3 ]
Data centers are undergoing a major transition to meet higher performance, scalability, and resiliency requirements with fewer resources, a smaller footprint, and simplified designs. These rigorous requirements, coupled with major data center trends such as virtualization, data center consolidation, and data growth, are putting a tremendous amount of strain on existing infrastructure and adding complexity. The MDS 9710 is designed to surpass these requirements for the decade ahead, without a forklift upgrade.
The MDS 9700 provides unprecedented:
- Performance – 24 Tbps Switching capacity
- Reliability – Redundancy for every critical component in the chassis including Fabric Card
- Flexibility – Speed, protocol, and data center architecture
In addition to these unique capabilities, the MDS 9710 provides a rich feature set and investment protection for customers.
In this series of blogs I plan to focus on the design requirements of the next-generation data center with the MDS 9710, reviewing one aspect of those requirements in each post. Let us look at performance today. A lot of customers ask how the MDS 9710 delivers the highest performance today. The performance that an application delivers depends
Read More »
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, nexus, NX-OS, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization
We're back with an all-new season of Engineers Unplugged – more unicorns, more technology, and more selfies than even the Oscars have to offer.
Season 5 kicks off with a bang: a role-based access control and policy management discussion brought to you by Nick Howell (@that1guynick) and Joe Onisick (@jonisick). What are the implications for hybrid cloud? What are the predictions for network and storage? How is this related to ACI?
Watch and learn:
Those are some well-maned unicorns, more hair than substance.
- Unicorns with lovely manes courtesy of Nick Howell and Joe Onisick
**The next shoot is at Varrow Madness, Charlotte, NC, March 20, 2014! Contact me now to become internet famous.**
This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
- Episodes will publish weekly (or as close to it as we can manage)
- Subscribe to the podcast here: engineersunplugged.com
- Follow the #engineersunplugged conversation on Twitter
- Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
- Practice drawing unicorns
Go behind the scenes by liking Engineers Unplugged on Facebook.
Tags: ACI, cloud, Hybrid Cloud, management, netapp, policy based control, role based control, Storage