Data centers are undergoing a major transition to meet higher performance, scalability, and resiliency requirements with fewer resources, a smaller footprint, and simplified designs. These rigorous requirements, coupled with major data center trends such as virtualization, data center consolidation, and data growth, are putting tremendous strain on existing infrastructure and adding complexity. The MDS 9710 is designed to surpass these requirements for the decade ahead, without a forklift upgrade.
The MDS 9700 provides unprecedented:
Performance - 24 Tbps switching capacity
Reliability - redundancy for every critical component in the chassis, including the fabric cards
Flexibility - in speed, protocol, and data center architecture
In addition to these unique capabilities, the MDS 9710 provides customers with a rich feature set and investment protection.
In this series of blogs I plan to focus on the design requirements of the next-generation data center and how the MDS 9710 addresses them. Each post will review one aspect of those requirements. Today, let us look at performance. Many customers ask how the MDS 9710 delivers the highest performance available today. The performance an application actually delivers depends on the design of the entire fabric, not on any single component.
The data center landscape has changed dramatically in several dimensions. Server virtualization is almost a de facto standard, with a big increase in VM density, and there is a move toward a world of many clouds. Then there is the massive data growth: some studies show that data is doubling every two years, while adoption of solid-state drives (SSDs) keeps increasing. All of these megatrends demand new solutions in the SAN market. To meet these needs, Cisco is introducing the next generation of storage networking innovations with the new MDS 9710 Multilayer Director and the new MDS 9250i Multiservice Switch. These new multi-protocol, services-rich MDS innovations redefine storage networking with superior performance, reliability, and flexibility!
We are, once again, demonstrating Cisco’s extraordinary capability to bring to market innovations that meet our customer needs today and tomorrow.
For example, with the new MDS solutions, we are announcing 16 Gigabit Fibre Channel (FC) and 10 Gigabit Fibre Channel over Ethernet (FCoE) support. But guess what? These are just a couple of the many innovations we are introducing. In other words, we bring 16 Gigabit FC and beyond to our customers:
A NEW BENCHMARK FOR PERFORMANCE
We design our solutions with future requirements in mind. We want to create long term value for our customers and investment protection moving forward.
The switching fabric in the MDS 9710 is one example of this design philosophy. The MDS 9710 chassis can accommodate up to six fabric cards delivering:
1.536 Tbps per slot for Fibre Channel – 24 Tbps per chassis capacity
Only 3 fabric cards are required to support full 16G line rate capacity
Supports up to 384 Line Rate 16G FC or 10G FCoE ports
So there is room to grow to higher throughput in the future, without forklift upgrades
This is more than three times the bandwidth of any director in the market today, providing our customers with superior investment protection for any future needs!
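The per-slot, per-chassis, and fabric-card figures above fit together with simple arithmetic. Here is a short sanity-check sketch; the 8 line-card slots, 48-port 16G line cards, and roughly 256 Gbps of per-slot bandwidth per fabric card are assumptions inferred from the quoted numbers, not figures stated in this post:

```python
# Sanity-check of the MDS 9710 bandwidth figures quoted above.
# Assumed (not stated in the post): 8 line-card slots, 48 ports per
# 16G FC line card, ~256 Gbps per slot contributed by each fabric card.

LINE_CARD_SLOTS = 8
PORTS_PER_CARD = 48
FC_GBPS = 16
FABRIC_CARDS = 6
GBPS_PER_FABRIC_PER_SLOT = 256

# Per-slot fabric bandwidth with all six fabric cards installed.
per_slot_gbps = FABRIC_CARDS * GBPS_PER_FABRIC_PER_SLOT
assert per_slot_gbps == 1536            # the quoted 1.536 Tbps per slot

# Line-rate demand of a fully populated 16G card (one direction) and
# the minimum number of fabric cards needed to carry it.
demand_gbps = PORTS_PER_CARD * FC_GBPS                    # 768 Gbps
fabrics_needed = -(-demand_gbps // GBPS_PER_FABRIC_PER_SLOT)  # ceiling
assert fabrics_needed == 3              # only 3 fabric cards for line rate

# Chassis capacity, counted full duplex as director specs usually are:
# 1.536 Tbps x 8 slots x 2 directions ~= the quoted 24 Tbps.
chassis_tbps = per_slot_gbps * LINE_CARD_SLOTS * 2 / 1000
assert abs(chassis_tbps - 24.576) < 1e-9

# Maximum port count: 8 slots x 48 ports.
assert LINE_CARD_SLOTS * PORTS_PER_CARD == 384

print(f"per slot: {per_slot_gbps} Gbps, chassis: {chassis_tbps:.1f} Tbps")
```

Note how the numbers also explain the growth headroom: three fabric cards already carry 16G line rate, so the remaining three slots are pure headroom for faster line cards later.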
As noted above, server virtualization, the move to many clouds, massive data growth, and rising SSD adoption are reshaping our customers' data centers. Several of our customers are also consolidating their data centers or forming mega data centers. All of these megatrends bring increasing challenges for the storage administrator, because the storage network is becoming ever more critical: it is a strategic asset of the data center.
Take a look at this short video with Richard Darnielle (Director of Product Management for MDS Product lines) and me. Richard shares his insights on the mega trends that will shape the next-generation storage networks.
Guess what? Once again, Cisco is here to help you on your journey to address these megatrends by raising the bar for storage networks. How, you ask?
Cloud computing is part of the journey to deliver IT as a Service which enables IT to change from a cost center to a business strategic partner. Forrester Research recently published a report that concluded, “Cloud computing is ready for the enterprise… but many enterprises aren’t ready for the cloud.”1 Yet Cloud deployments are happening – and I mean all types of Clouds – Private, Public and Hybrid. In other words, we have entered the World of Many Clouds.
The network touches everything and is a key building block for agile, scalable, virtualized, and cloud-based data centers. Yesterday, I introduced our new Nexus 6000 Series and new 40 GE extensions to the Nexus 5500 and 2000 Series. Today, I would like to introduce the very first services module for the Nexus 7000 Series.
The evolution of the application environment is creating new demands on IT and on the data center. Broad adoption of scale-out application architectures (e.g., big data), workload virtualization, and cloud deployments is demanding greater scalability across the fabric. The increase in east/west (server-to-server) traffic, along with the growing adoption of 10GbE in the server access layer, is driving higher bandwidth requirements on the upstream links.
Following up on the introduction of 40GE/100GE on the Nexus 7000 Series, today we unveil the new Nexus 6000 Series, expanding Cisco's Unified Fabric data center switching portfolio to provide greater deployment flexibility through higher density and scalability in an energy-efficient form factor.
The Cisco Nexus 6000 Series is the industry's highest-density, full-featured Layer 2/Layer 3 40 Gigabit fixed data center switch with Ethernet and Fibre Channel over Ethernet (FCoE), an industry first! In addition to high scalability, the Nexus 6000 Series offers operational efficiency, superior visibility, and agility.
Some say the "Nexus 6000 Series is a red carpet platform that will turn heads." We agree! It's because of …