
Next Generation Data Center Design With MDS 9710 – Part II

EMC World was wonderful. It was gratifying to meet industry professionals, listen in on great presentations, and watch the demos for key business-enabling technologies that Cisco, EMC, and others have brought to fruition. It's fascinating to see the transition of the data center from a cost center to a strategic business driver. The same story repeated itself at Cisco Live: more than 25,000 attendees, hundreds of demos and sessions, and a lot of interesting customer meetings where MDS continued to resonate. We are excited about the MDS hardware that was on display on the show floor, the multiprotocol demo, and the many interesting SAN sessions.

Beyond these events, we recently did a webinar on how the Cisco MDS 9710 enables high-performance data center design, with customer case studies. You can listen to that here.

Three Pillars of Reliability

So let's continue our discussion. There is no doubt that when it comes to high-performance SAN switches, there is nothing comparable to the Cisco MDS 9710. Another component that is paramount to a good data center design is high availability. Massive virtualization, data center consolidation, and the ability to deploy more and more applications on powerful multi-core CPUs have increased the risk profile within the data center. These trends require a renewed focus on availability, and the MDS 9710 is leading the innovation there again. Hardware design and architecture have to guarantee high availability. At the same time, it's not just about hardware; it is a holistic approach spanning hardware, software, management, and the right architecture. Let me give you just a few examples of the first three pillars of high reliability and availability.

 

Reliability examples in MDS

 


The MDS 9710 is the only director in the industry that provides hardware redundancy on all critical components of the switch, including the fabric cards. Cisco director switches provide not only CRC checks but also the ability to drop corrupted frames; without that ability, the network infrastructure exposes the end devices to corrupted frames. Being able to drop frames that fail the CRC check and to quickly isolate failing links, outside as well as inside the director, provides data integrity and fault resiliency. VSANs allow fault isolation, port channels provide smaller failure domains, and DCNM provides a rich feature set for higher availability and redundancy. All of these are just a subset of the examples that provide high resiliency and reliability.
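
The check-and-drop idea is simple to picture. The following Python sketch is only a conceptual illustration, not a description of how the MDS ASICs are implemented: each frame carries a CRC-32, the receiving side recomputes it, and frames that no longer match are dropped rather than forwarded to the end device.

```python
import zlib
from typing import Optional

def make_frame(payload: bytes) -> dict:
    """Build a toy frame carrying a payload and the CRC-32 computed at the source."""
    return {"payload": payload, "crc": zlib.crc32(payload)}

def forward_if_clean(frame: dict) -> Optional[bytes]:
    """Recompute the CRC on receipt and forward only frames that still match.

    Returning None models a corrupted frame being dropped at the switch
    instead of being delivered to the end device.
    """
    if zlib.crc32(frame["payload"]) == frame["crc"]:
        return frame["payload"]
    return None

good = make_frame(b"SCSI write data")
bad = make_frame(b"SCSI write data")
bad["payload"] = b"SCSI wrXte data"   # simulate bit corruption in transit

print(forward_if_clean(good))   # b'SCSI write data' -> forwarded
print(forward_if_clean(bad))    # None -> dropped and isolated at the switch
```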

 

Weakest link

 

We are proud of the MDS 9500 family and the strong foundation for reliability and availability that we stand on, and we have taken that to a completely new level with the 9710. For any design within the data center, high availability has to go hand in hand with consistent performance; one without the other doesn't make sense. The right design and architecture within the data center are as important as the components that power the connectivity. As an example, Cisco recommends that customers distribute the ISL ports of a port channel across multiple line cards and multiple ASICs, as illustrated in the sketch below. This spreads the failure domain so that an ASIC or even a line-card failure does not impact the port-channel connectivity between switches, and there is no need to reinitiate all the host logins. You can see the white paper on the next-generation Cisco MDS here. As part of writing this white paper, ESG tested the fabric card redundancy (page 9) in addition to other features of the platform. Remember that a chain is only as strong as its weakest link.
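
As a rough illustration of that best practice, here is a small Python sketch (the member names, card counts, and ASIC counts are hypothetical; this is not an MDS configuration tool) that spreads the member ports of an ISL port channel round-robin across line cards and ASICs. With the members spread out, losing any single ASIC or even a whole line card removes only a fraction of the members, so the port channel, and the host logins riding on it, stays up.

```python
from collections import defaultdict
from itertools import cycle

def spread_members(num_members: int, line_cards, asics_per_card: int):
    """Assign port-channel member ports round-robin across (line card, ASIC) pairs."""
    slots = cycle([(card, asic) for card in line_cards
                   for asic in range(1, asics_per_card + 1)])
    placement = defaultdict(list)
    for member in range(1, num_members + 1):
        card, asic = next(slots)
        placement[(card, asic)].append(f"member-{member}")
    return placement

# Example: an 8-member ISL port channel spread over 4 line cards with 2 ASICs each.
placement = spread_members(8, line_cards=[1, 2, 3, 4], asics_per_card=2)
for (card, asic), members in sorted(placement.items()):
    print(f"line card {card}, ASIC {asic}: {members}")

# Losing line card 1 (both of its ASICs) takes out only 2 of the 8 members,
# so the port channel stays up and hosts do not have to log in again.
surviving = sum(len(m) for (card, _), m in placement.items() if card != 1)
print(f"members surviving a line-card 1 failure: {surviving} of 8")
```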

 


 

The most important aspect of all of this is for customers to be educated. Ask the right questions and have in-depth discussions about how to achieve higher availability and consistent performance. Most importantly, selecting the right equipment, the right architecture, and best practices means no surprises.

We will continue our discussion with the Flexibility aspect of the MDS 9710.

 

 

-We are what we repeatedly do. Excellence, then, is not an act, but a habit (Aristotle)

 


Next Generation Data Center Design With MDS 9710 – Part I

 

High Speed (16Gbps) and High Capacity (384 Line Rate ports per Chassis)

Data centers are undergoing a major transition to meet higher performance, scalability, and resiliency requirements with fewer resources, a smaller footprint, and simplified designs. These rigorous requirements, coupled with major data center trends such as virtualization, data center consolidation, and data growth, are putting a tremendous amount of strain on the existing infrastructure and adding complexity. The MDS 9710 is designed to surpass these requirements, without a forklift upgrade, for the decade ahead.

MDS 9700 provides unprecedented:

  • Performance: 24 Tbps switching capacity
  • Reliability: redundancy for every critical component in the chassis, including the fabric cards
  • Flexibility: speed, protocol, and data center architecture

In addition to these unique capabilities, the MDS 9710 provides a rich feature set and investment protection to customers.

In this series of blogs I plan to focus on the design requirements of the next-generation data center with the MDS 9710. We will review one aspect of the data center design requirements in each post. Let us look at performance today. A lot of customers ask how the MDS 9710 delivers the highest performance today. The performance that an application delivers depends …

Read More »


Cloud Service Provider deploys End-to-End FCoE

November 13, 2013 at 10:50 am PST

In one of my earlier blogs, -- “How to get more SAN mileage….” -- I had highlighted how one can deploy End-to-End FCoE using a converged Director-class platform, like Nexus 7000, connected directly from the converged access switch, like UCS FI, in order to get the utmost agility. Well, this is how ITOCHU Techno-Solutions Corporation (CTC), a Cloud Service provider, deployed its network to get significantly higher mileage.

CTC provides a wide range of IT services for business customers in Japan. The company's Cloud Platform Group recently launched its innovative ElasticCUVIC shared private cloud service, which helps customers reduce infrastructure cost and management complexity. With large numbers of VMs, CTC wanted to simplify its data center architecture and IT management while optimizing scalability. The challenge was to deliver high-performance, easy-to-manage cloud services at scale.

The company evaluated several storage networking solutions and turned to Cisco for Fibre Channel over Ethernet (FCoE) solutions, which greatly simplify the infrastructure and management. CTC built its two newest data centers in Yokohama and Kobe with ultra-high performance and flexibility in mind. CTC implemented an End-to-End FCoE architecture using Cisco Nexus 7000 Series Switches, Cisco UCS servers, and FCoE connections between the switches, servers, and FCoE storage arrays.


With the converged FCoE architecture, ElasticCUVIC is enabling CTC customers to gain … Read More »


How to get more SAN mileage out of UCS FI?

October 15, 2013 at 12:20 pm PST

 

Image Credit: Wikispeed.org

Mileage (miles per gallon) is one of the most important criteria when buying an automobile, and once you have bought one, it is highly desirable to hit the maximum advertised mileage without significantly changing your driving habits or routes (highway vs. city mpg). Well, I have not been able to achieve that yet, so being a geek, I focused my attention on a different form of mileage (throughput per switch port) that interests me at work. So in this blog I will explore a way to get more SAN mileage from the Cisco UCS FI (Fabric Interconnect) without significantly affecting the SAN admin's day-to-day operations.

Context:

Just a bit of background before we delve into the details: the I/O fabric between the UCS FI and the UCS Blade Server Chassis is a converged fabric running FCoE. The use of FCoE within the UCS fabric is completely transparent to the host operating system, and any Fibre Channel block storage traffic traverses this fabric as FCoE traffic. So the more than 20,000 UCS customers using block storage are already using FCoE at the access layer of the network.

Choices:

Now, the key question is which technology, FC or FCoE, to use northbound on the FI uplink ports to connect to an upstream core switch for SAN connectivity. So, what are the uplink options? The FI has unified ports, and the choice is to use the same uplink port as either 8G FC or 10G FCoE. (Note that when using the FCoE uplink it is not a requirement to use a converged link; one can still use a dedicated FCoE link for carrying pure SAN traffic.)

Observations:

1) Bandwidth for core links: This is a very important aspect for the core part of the network. It is interesting to note that 10G FCoE provides almost 50% more throughput than 8G FC. This is because FC uses a different bit encoding and clock rate than Ethernet: 8G FC runs 8b/10b encoding at 8.5 Gbaud and so yields about 6.8G of throughput, while 10G Ethernet runs 64b/66b and yields close to 10G of throughput (after 1-2% Ethernet frame overhead), as the quick calculation below shows.
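
The encoding arithmetic behind that comparison is easy to reproduce. The quick calculation below uses the standard serial rates and encodings (8G FC runs 8b/10b at 8.5 Gbaud, 10G Ethernet runs 64b/66b at 10.3125 Gbaud) and applies the roughly 2% frame overhead mentioned above to the Ethernet side.

```python
# Effective data rates behind the "10G FCoE vs 8G FC" comparison above.

fc8_baud = 8.5            # Gbaud serial rate of 8G Fibre Channel
fc8_throughput = fc8_baud * 8 / 10          # 8b/10b encoding -> 6.8 Gbps

eth10_baud = 10.3125      # Gbaud serial rate of 10G Ethernet
eth10_raw = eth10_baud * 64 / 66            # 64b/66b encoding -> ~10.0 Gbps
fcoe_throughput = eth10_raw * 0.98          # ~2% FCoE/Ethernet frame overhead

print(f"8G FC effective:    {fc8_throughput:.1f} Gbps")
print(f"10G FCoE effective: {fcoe_throughput:.1f} Gbps")
print(f"FCoE advantage:     {fcoe_throughput / fc8_throughput - 1:.0%}")
```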


2) Consistent management model: FCoE is FC technology with the same management and security model, so it will be a seamless transition for a SAN admin to move from FC to FCoE with minimal change in day-to-day operations. Moreover, this FCoE link carries dedicated SAN traffic without requiring any convergence with LAN traffic. To add to that, if the UCS FI is running in NPV mode, then technically the FCoE link between the UCS FI and the upstream SAN switch does not constitute a multi-hop FCoE design, because the UCS FI does not consume a domain ID, and the bulk of the SAN configuration, such as zoning, needs to happen only on the core SAN switch, thus maintaining the same consistent SAN operational model as with FC.

3) Investment protection with multiprotocol flexibility: By choosing an FCoE uplink from the converged access layer, one can still continue to use the upstream MDS core SAN director switch as is, providing connectivity to existing FC storage arrays. Note that the Cisco MDS 9000 SAN directors offer multiprotocol flexibility, so one can interconnect FCoE SANs on the server side with FC SANs on the storage side.

And, we have a winner… Read More »


Introducing MDS 9710 Multilayer Director and MDS 9250i Multiservice Switch – Raising the Bar for Storage Networks

The data center landscape has changed dramatically in several dimensions. Server virtualization is almost a de facto standard, with a big increase in VM density, and there is a move toward a world of many clouds. Then there is the massive data growth: some studies show that data is doubling every two years, while there is an increased adoption of solid-state drives (SSDs). All of these megatrends demand new solutions in the SAN market. To meet these needs, Cisco is introducing the next generation of storage network innovations with the new MDS 9710 Multilayer Director and the new MDS 9250i Multiservice Switch. These new multiprotocol, services-rich MDS innovations redefine storage networking with superior performance, reliability, and flexibility!

We are, once again, demonstrating Cisco’s extraordinary capability to bring to market innovations that meet our customer needs today and tomorrow.  

For example, with the new MDS solutions we are announcing 16 Gigabit Fibre Channel (FC) and 10 Gigabit Fibre Channel over Ethernet (FCoE) support. But guess what? This is just a couple of the many innovations we are introducing. In other words, we bring 16 Gigabit FC, and beyond, to our customers:

A NEW BENCHMARK FOR PERFORMANCE

We design our solutions with future requirements in mind. We want to create long term value for our customers and investment protection moving forward.

The switching fabric in the MDS 9710 is one example of this design philosophy. The MDS 9710 chassis can accommodate up to six fabric cards delivering:

  • 1.536 Tbps per slot for Fibre Channel, for 24 Tbps of capacity per chassis
  • Only 3 fabric cards required to support full 16G line-rate capacity
  • Support for up to 384 line-rate 16G FC or 10G FCoE ports
  • Room for growth to higher throughput in the future, without forklift upgrades

This is more than three times the bandwidth of any director in the market today, providing our customers with superior investment protection for any future needs!
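
A quick back-of-the-envelope check ties these figures together. The only assumption not stated in the bullets is 48 ports per line card, which is consistent with the 384 line-rate ports per chassis quoted above.

```python
# Sanity check of the fabric bandwidth bullets above.
# Assumption (not stated in the bullets): 48 ports per line card,
# consistent with 384 line-rate ports per chassis.

per_slot_full = 1536.0                     # Gbps per slot with all 6 fabric cards
per_fabric_per_slot = per_slot_full / 6    # 256 Gbps contributed by each fabric card

ports_per_card = 48
fc16_need = ports_per_card * 16            # 768 Gbps for 48 line-rate 16G ports

cards_needed = fc16_need / per_fabric_per_slot
print(f"Each fabric card: {per_fabric_per_slot:.0f} Gbps per slot")
print(f"Fabric cards for full 16G line rate: {cards_needed:.0f}")                # 3
print(f"Headroom with all 6 cards: {per_slot_full / fc16_need:.0f}x line rate")  # 2x
```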

Read More »
