Note: This is the second of a three-part series on Next Generation Data Center Design with MDS 9700; learn how customers can deploy scalable SAN networks that allow them to Scale Up or Scale Out in a non-disruptive way. [ Part 1 | Part 3 ]
EMC World was wonderful. It was gratifying to meet industry professionals, listen in on great presentations, and watch demos of the key business-enabling technologies that Cisco, EMC, and others have brought to fruition. It's fascinating to see the data center's transition from cost center to strategic business driver. The same story repeated at Cisco Live: more than 25,000 attendees, hundreds of demos and sessions, and a lot of interesting customer meetings where MDS continued to resonate. We were excited about the MDS hardware on display on the show floor, the interesting multiprotocol demo, and the many interesting SAN sessions.
Beyond these events, we recently hosted a webinar on how the Cisco MDS 9710 enables high-performance data center design, with customer case studies. You can listen to it here.
So let's continue our discussion. When it comes to high-performance SAN switches, there is no doubt that nothing compares to the Cisco MDS 9710. Another component that is paramount to good data center design is high availability. Massive virtualization, data center consolidation, and the ability to deploy more and more applications on powerful multi-core CPUs have increased the risk profile within the data center. These trends require a renewed focus on availability, and the MDS 9710 is leading the innovation there again. Hardware design and architecture have to guarantee high availability, but it is not just about hardware: it is a holistic approach spanning hardware, software, management, and the right architecture. Let me give you just a few examples of the first three pillars of high reliability and availability.
The MDS 9710 is the only director in the industry that provides hardware redundancy on every critical component of the switch, including the fabric cards. Cisco director switches not only perform CRC checks but can also drop corrupted frames; without that ability, the network infrastructure exposes end devices to corrupted frames. Being able to drop frames that fail the CRC check and quickly isolate failing links, both outside and inside the director, provides data integrity and fault resiliency. VSANs allow fault isolation, port channels provide smaller failure domains, and DCNM provides a rich feature set for higher availability and redundancy. These are but a subset of the capabilities that provide high resiliency and reliability.
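For readers who have not worked with VSANs, the fault isolation described above takes only a few lines of NX-OS configuration. This is a minimal sketch; the VSAN number, name, and interface below are hypothetical placeholders:

```
switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 20 name Engineering   ! create an isolated virtual fabric
switch(config-vsan-db)# vsan 20 interface fc2/1    ! move a port into that fabric
switch(config-vsan-db)# end
switch# show vsan membership                       ! verify the port assignment
```

A fabric-wide event in one VSAN (a fabric reconfiguration, for example) stays contained within that VSAN, which is what shrinks the failure domain.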
We are proud of the 9500 family and the strong foundation of reliability and availability that we stand on, and we have taken that to a completely new level with the 9710. For any design within the data center, high availability has to go hand in hand with consistent performance; one without the other doesn't make sense. The right design and architecture are as important as the components that power the connectivity. As an example, Cisco recommends that customers distribute the ISL ports of a port channel across multiple line cards and multiple ASICs. This spreads the failure domain so that an ASIC or even a line card failure will not impact the port channel connectivity between switches, and there is no need to reinitiate all the host logins. You can read the white paper on the next-generation Cisco MDS here. As part of writing this white paper, ESG tested the fabric card redundancy (page 9) in addition to other features of the platform. Remember that a chain is only as strong as its weakest link.
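As a sketch of that recommendation, the NX-OS configuration below builds a port channel whose member ISLs sit on two different line cards (slots 1 and 5 here; the interface and channel numbers are hypothetical):

```
switch(config)# interface port-channel 10
switch(config-if)# switchport mode E               ! ISL (expansion) port
switch(config-if)# exit
switch(config)# interface fc1/1 , fc5/1            ! members on line cards 1 and 5
switch(config-if)# channel-group 10 force
switch(config-if)# no shutdown
```

With members spread this way, losing an entire line card degrades the port channel's bandwidth but never takes the ISL down, so hosts do not have to log in again.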
The most important aspect of all of this is for customers to be educated.
Ask the right questions and have in-depth discussions about how to achieve higher availability and consistent performance. Most importantly, selecting the right equipment, the right architecture, and best practices means no surprises.
We will continue our discussion with the flexibility aspect of the MDS 9710.
“We are what we repeatedly do. Excellence, then, is not an act, but a habit.” (Aristotle)
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, availability, best practices, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, MDS design, nexus, NX-OS, reliability, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization
Cisco Live 2014 is fast approaching, just a few weeks from now.
This is an important year for Cisco Live as well as for the Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) family of products. Cisco Live is celebrating its 25th anniversary on its home ground: the Bay Area, San Francisco. For the storage market, the next-generation MDS product family lineup, with 16G line-rate FC and 10G FCoE support, has renewed the energy in the SAN industry, with large customers building greenfield data centers using the new 16G FC and 10G multihop FCoE. This year has seen much more traction on multihop FCoE; new customers now include aerospace, financial, and technology solution companies.
More details can be found here under Case studies.
I asked Bhavin Yadav, from the engineering team, to bring his technical expertise and knowledge of customer needs to help us create a catalog of the sessions you don't want to miss at Cisco Live San Francisco.
“This year at Cisco Live, we have a lot more focus and more sessions on both SAN technologies: Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE). Once registration is finished, you can subscribe to sessions and put them on your calendar as well. The Cisco Live Smart Mobile apps, launching on April 28th, will also help us get to the right session using our smartphones.
In April 2013, shortly before last year's Cisco Live 2013 in Orlando, Cisco's storage business unit released its next-generation Fibre Channel director-class switch, the MDS 9710, and the Multiservice Fabric Switch platform, the MDS 9250i. By now, most of us know that the MDS 9710 is designed to support 16G line-rate FC and 10G FCoE using its FC and FCoE line-card modules. The MDS 9250i is a 2RU switch that gives us all the flexibility we need in terms of multiprotocol support, whether it is FC, FCoE, FCIP, or iSCSI. The MDS 9250i has 16G line-rate FC ports along with 10G FCoE ports, 2 x 10G FCIP ports, and iSCSI support as well. It is like a Swiss army knife: you can use it anywhere (backups, storage migration, etc.) for any of the most widely used protocols (FC, FCoE, FCIP, iSCSI) in the Fibre Channel industry.
This year, we are bringing more than 20 sessions to the storage track in various flavors, ranging from storage fundamentals to design, deployment, operation, troubleshooting, best practices, and migration. Let me highlight some of the important sessions for storage experts. This will help you quickly identify sessions, reserve your spot, and get the most out of Cisco Live 2014.
Storage specific sessions:
BRKARC-1222 – Cisco MDS9000: expanding the family:
This session presents detailed analyses of the new members of the market leading MDS 9000 family, demonstrating their performance, reliability and flexibility. Topics include architectural design and enhanced capabilities of Cisco MDS 9710 and MDS 9250i, their typical use cases and interoperability with the other MDS 9000 family members as well as Nexus switches. This session is designed for storage engineers involved in FC and FCoE network design and Data Centre storage architecture. An understanding of FC switching technologies and FCoE benefits is assumed.
2 hours Technical Breakout – Presented by Adarsh Viswanathan
BRKSAN-2282 – Operational Models for FCoE Deployments – Best Practices and Examples:
Converging SAN and LAN traffic onto common infrastructure enables customers to realize significant cost efficiencies by reducing power consumption, cooling costs, adapters, cables, and switches. FCoE/Unified I/O also provides additional flexibility through a wire-once model that allows ubiquitous access to block storage from all servers. This session will help customers determine the FCoE operational model for their organization in order to successfully share a converged network between LAN and SAN teams. Best practices, case studies, and configuration examples will be provided, based on experiences with Cisco customers who have successfully implemented FCoE. The session covers operational management for FCoE deployments on Nexus 5000, Nexus 6000, Nexus 7000, Nexus 7700, and MDS.
90 min Technical Breakout – Presented by Jason Walker and Santiago Freitas
Read More »
Tags: Cisco, FabricPath, FC, FCIP, FCoE, iSCSI, LISP, MDS, mpls, Multihop, nexus, NFS, Storage Networking
Note: This is the first of a three-part series on Next Generation Data Center Design with MDS 9700; learn how customers can deploy scalable SAN networks that allow them to Scale Up or Scale Out in a non-disruptive way. [ Part 2 | Part 3 ]
Data centers are undergoing a major transition to meet higher performance, scalability, and resiliency requirements with fewer resources, a smaller footprint, and simplified designs. These rigorous requirements, coupled with major data center trends such as virtualization, data center consolidation, and data growth, are putting a tremendous amount of strain on existing infrastructure and adding complexity. The MDS 9710 is designed to surpass these requirements for the decade ahead without a forklift upgrade.
The MDS 9700 provides unprecedented:
- Performance – 24 Tbps Switching capacity
- Reliability – Redundancy for every critical component in the chassis including Fabric Card
- Flexibility – Speed, Protocol, DC Architecture
In addition to these unique capabilities, the MDS 9710 provides a rich feature set and investment protection to customers.
In this series of blogs I plan to focus on the design requirements of the next-generation data center with the MDS 9710, reviewing one aspect of those requirements in each post. Let us look at performance today. Many customers ask how the MDS 9710 delivers the highest performance today. The performance that an application delivers depends on…
Read More »
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, nexus, NX-OS, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization
Do you have a need for automated provisioning of your data center? Cisco Prime Data Center Network Manager (DCNM) might just provide that solution.
DCNM is designed to help you efficiently implement, visualize, and manage the Cisco Unified Fabric. The need today in the data center is for a comprehensive management platform that delivers visibility and control of all elements within the Unified Fabric, which in turn significantly simplifies troubleshooting, maintenance, and provisioning of the entire fabric in a fast and efficient way. Watch the video below to find out more.
Read More »
Tags: Cisco Dynamic Fabric Automation, Cisco Prime DCNM, MDS, network provisioning, nexus, Unified Data Center, Unified Fabric
A long time ago I got asked to write about how to use Fibre Channel over Ethernet (FCoE) for distance. After all, we were getting the same question over and over:
What is the distance limitation for FCoE?
Now, the short answer can be found by checking the various data sheets for the Nexus 2000, Nexus 5500, Nexus 6000, Nexus 7000, or MDS 9X00 product lines. But that doesn't answer the most obvious follow-up questions: “Why?” and “How?”
The problem is, whenever you start talking about extending your storage connectivity over distance, there are many things to consider, including some that many storage administrators (or architects) may not always remember to think about. The more I thought about this (and the longer it took to write down the answers), the more I realized that there needed to be a good explanation of how this works.
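One of those easily forgotten considerations is buffer-to-buffer (BB) credits: a long link needs enough credits to keep full-size frames in flight for an entire round trip, or throughput collapses well before any nominal distance limit. Here is a back-of-the-envelope sketch; the ~5 µs/km fiber propagation figure and the use of the nominal line rate are simplifying assumptions (16GFC, for instance, actually moves data at roughly 14 Gbps after encoding):

```python
import math

def bb_credits_needed(distance_km, line_rate_gbps, frame_bytes=2148):
    """Rough estimate of BB credits needed to keep a FC link fully utilized."""
    round_trip_us = 2 * distance_km * 5.0   # light in fiber: ~5 us per km, each way
    # serialization time of one full-size frame at the (nominal) line rate, in us
    frame_us = frame_bytes * 8 / (line_rate_gbps * 1000)
    return math.ceil(round_trip_us / frame_us)

# The faster the link, the more frames fit "in flight" on the same fiber,
# so the credit requirement grows with both distance and speed.
print(bb_credits_needed(10, 4))    # 4GFC over 10 km
print(bb_credits_needed(10, 16))   # 16GFC over the same 10 km needs ~4x the credits
```

The same arithmetic explains why extending a 16G link over tens of kilometers demands far deeper credit pools than the 8G links it replaces.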
Generally speaking, the propeller spins the ‘other way’ when it comes to storage distance.
To that end, I began writing down the things that affect the choice for selecting a distance solution, which involves more than just a storage protocol. And so the story grew. And grew. And then grew some more. And if you’ve ever read any blogs I’ve written on the Cisco site you’ll know I’m not known for my brevity to begin with! So, bookmark this article as a reference instead of general “light reading,” and with luck things will be clearer than when we started. Read More »
Tags: distance, FCIP, FCoE, Fibre Channel, iSCSI, MDS, nexus, Storage