Superior Platforms, Scale, and Operational Simplicity
Data center trends like virtualization, solid-state drives, data center consolidation, and data explosion are putting a tremendous amount of strain on the infrastructure. These challenges need a targeted and multifaceted approach: a holistic solution to the problems rather than point products for each unique problem. Data centers require improvements in performance, flexibility, scalability, reliability, and ease of management. To address this, Cisco revamped the MDS product line, a journey we started last year when we introduced the 9710 and 9250i.
9710 – Director-class switch with 3x the performance of any other director: 384 ports of line-rate 16G FC, with the highest reliability and flexibility.
9250i – Services appliance supporting 10G FCIP, 16G FC, and 10G FCoE, in addition to I/O Acceleration and Data Mobility Migration, in a compact 2RU form factor.
The product line has been a great success, with the steepest ramp yet and amazing customer feedback. Building on that success, we have added new members to the product family and extended the innovation to allow simpler management and scalable deployments.
a) Three New Products
- MDS 9148S – Industry’s most versatile and affordable 1RU switch, with high performance, ease of deployment, and enterprise-class features
- MDS 9706 – Unprecedented investment protection with high performance, reliability and multi-protocol flexibility
- High-density line-rate 10G FCoE card – lets customers adopt high-density FCoE incrementally and non-disruptively on their existing FC footprint, without forklift upgrades
b) New Scalable Deployment Options
- Much Higher Scalability for SAN Infrastructures.
- Dynamic FCoE over FabricPath
- Data migration enhancements for speed, scale and resiliency
c) New Management Features
- Hardware based FC Congestion Detection and Recovery
- Integration with Industry leading Platforms
- End to End Visibility
- Switch Health Score
With the addition of these new members, Cisco not only has best-of-breed products but also the broadest product portfolio. This allows customers to design their SAN precisely to their needs, from small departmental SANs to the largest enterprises, and from traditional LAN/SAN networks to fully converged fabrics and everything in between.
Attend the Webinar on August 12th 8:00 PST to learn more : Register Now
Let’s look at the capabilities of each product in a little more detail.
Cisco MDS 9148S: High-Performance, Easy to Deploy, Enterprise-class Fabric Switch
Versatile: The 9148S pay-as-you-grow model allows customers to start from a small base and grow from 12 ports to 24, 36, and finally 48 without any rip and replace, and to move from 2/4/8G to 16G FC speeds. It is not only the most affordable switch shipping today across all possible configurations; with 2x the range of ports, it allows unparalleled scalability for future growth.
Ease of use: Power On Auto Provisioning (POAP) allows the 9148 and 9148S to automate switch setup: everything from obtaining a DHCP address to downloading and applying the software image and the final configuration is done automatically. A quick-configuration wizard makes manual setup easy as well, and the switch shares the same NX-OS as the rest of the MDS and Nexus products. POAP is important for large-scale data centers where the 9148S is used as a Top-of-Rack (ToR) switch and distributed throughout the data center; it saves customers from going box to box with a serial cable to program each switch individually, and it allows rapid, error-free, and consistent provisioning.
Enterprise-class switch: It offers rich enterprise features such as non-disruptive software upgrades, 32 virtual SANs (VSANs), Inter-VSAN Routing (IVR), QoS, PortChannels, N-Port ID Virtualization (NPIV), N-Port Virtualization (NPV), and comprehensive security, in addition to redundant power supplies and fans. It is the first switch in the industry to allow hardware-based slow-drain detection and recovery, and it has back-to-front airflow.
Customer use case: Customers will use the 9148S to design small SAN environments such as departmental SANs. Larger enterprises will use the 9148S as a ToR switch for ease of cabling and ease of management. In addition, the 9148S will be used for BC/DR and remote locations. The pay-as-you-grow model is very attractive because it lets customers grow the port count from 12 to 48, without any price penalty, as their network demands grow.
Cisco MDS 9706: Extending MDS 9710 Director Qualities to a Smaller Form Factor
It is the highest-performance compact director in the industry, providing 3x the bandwidth of any other compact director. Not only does it provide 192 ports of line-rate performance at 16G, it is designed to deliver line-rate performance at 32G FC and 40G FCoE when those line cards are introduced, without forklift upgrades and using the same type of fabric cards. With six fabric cards it provides 1.5 Tbps of bandwidth per slot.
In addition, this is the industry’s first class of directors to offer redundancy on all critical components, including fabric cards, along with a smaller failure domain, Forward Error Correction, multi-point CRC checks, and predictable, consistent performance for both latency and throughput.
Small to medium enterprises will use the 9706 as a middle-of-row or end-of-row switch; its line-rate 16G performance also allows it to be used for target connectivity in addition to host connectivity. It will be used in both edge-core and edge-core-edge designs, as well as in pod-like deployments, where the 9RU form factor and 192 line-rate 16G ports are very attractive.
Some of the key specifications of the switch:
- 1.5 Tbps per-slot switching capability: 192 line-rate 16G FC ports today, with 100% headroom to grow to 32G FC without a forklift upgrade
- Industry’s highest reliability: N+1 fabric redundancy, smaller failure domains, Forward Error Correction, CRC checks at multiple points, and In-Service Software Upgrades; the crossbar design with central arbitration and Virtual Output Queuing ensures customers get not only the highest availability but also predictable, consistent throughput independent of the traffic profile
- Multiprotocol flexibility: support for both FC and FCoE line cards, with 2/4/8/10/16G FC and 10G FCoE today and the performance to support 32G FC and 40G FCoE on the same footprint
Industry’s Highest-Density FCoE Module on a FC Director
With 48 ports, this module has the highest port density and greatest flexibility in the industry. Cisco customers can now orchestrate FC, FCoE, and mixed solutions without restrictions. The FCoE line card gives customers the ability to design FC solutions and incrementally deploy FCoE without forklift upgrades, while meeting the same features, reliability, and availability afforded by FC.
In addition to hardware, we have added extensive capabilities to enable deployments from small size to cloud scale.
To support large scale-out and scale-up deployment models, we have increased the scalability limits of the SAN infrastructure. These industry-leading scalability numbers give Cisco customers unprecedented future-proofing, whether they scale out or scale up. Finally, Data Mobility Migration now has 2x the speed, 8x the scale, and higher resiliency.
Simplifying SAN Management
In addition to enhanced capabilities in Cisco tools, the MDS family is integrated with industry-leading platforms to provide faster configuration, such as automated zoning. Examples include UCS Director, EMC ViPR, Microsoft System Center VMM, and IBM PowerVC.
To address complexity in the data center, Cisco is focused on SAN management simplification. First and foremost, top of mind for customers is slow drain: a slow-draining device in the network can choke the entire fabric. These conditions are transient and extremely difficult to isolate, debug, and fix. To detect and recover from them, Cisco introduced software-based Slow Drain Detection and Recovery in the previous generation of devices. With the new products, slow-drain detection and recovery runs in hardware, rather than waiting for software to come around polling individual ports every 100 ms, which is a lifetime in the data center. As the table below shows, hardware-based slow-drain detection is 100 times faster, and recovery happens on the order of nanoseconds rather than 100 ms.
Recovery action latency (start and stop):
- MDS 9500 and MDS 9148 (software-based detection): on the order of 100 ms
- MDS 9700, MDS 9250i, and MDS 9148S (hardware-based detection): on the order of nanoseconds
For more information, read this whitepaper.
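To put the 100 ms software polling interval in perspective, here is a back-of-the-envelope sketch; the 1600 MB/s figure is the approximate one-direction data rate of a 16G FC link, used here as an assumption:

```python
# Traffic that can pass a port during one 100 ms software polling interval
# at 16G FC, before a software-based detector even looks at the port.
THROUGHPUT_MB_PER_S = 1600   # approximate 16GFC data rate, one direction
POLL_INTERVAL_S = 0.100      # software polling period
FC_MAX_FRAME_BYTES = 2148    # maximum Fibre Channel frame size

data_mb = THROUGHPUT_MB_PER_S * POLL_INTERVAL_S          # 160 MB per interval
frames = int(data_mb * 1_000_000 / FC_MAX_FRAME_BYTES)   # tens of thousands of frames
print(data_mb, frames)
```

Tens of thousands of full-size frames can transit a single port between software polls, which is why moving detection into hardware matters.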
In addition, Cisco Data Center Network Manager (DCNM) provides end-to-end visibility from hosts (virtual or physical) through switches (MDS or Nexus) into the storage arrays, independent of protocol. DCNM is a single pane of glass into the data center for both SAN and LAN.
Host Path Redundancy Analysis checks the network every 24 hours (or at a customer-designated interval) to verify that dual end-to-end paths exist from host to target. It checks for port-down conditions, VSAN mismatches, VSAN segmentation, and LUN mismatches, and it makes sure the two ports are not on the same line card. An analysis that used to take months is now completed automatically every 24 hours, reducing risk and time to repair. Furthermore, administrators are not surprised by an outage because they have complete visibility into both paths; keeping both paths up mitigates silent failures and avoids outages if one SAN fails.
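DCNM’s internal implementation is not public, but the kind of check it performs can be sketched as follows; the path records and field names here are hypothetical:

```python
# Hedged sketch of a dual-path redundancy check in the spirit of DCNM's
# Host Path Redundancy Analysis. Path records and field names are made up.
def paths_are_redundant(paths):
    """True if at least two usable host-to-target paths exist that do not
    land on the same line card of the same switch."""
    usable = [p for p in paths
              if p["port_state"] == "up" and not p["vsan_mismatch"]]
    if len(usable) < 2:
        return False
    # Two paths being up is not enough: they must use different line cards.
    line_cards = {(p["switch"], p["line_card"]) for p in usable}
    return len(line_cards) >= 2

paths = [
    {"switch": "mds-a", "line_card": 1, "port_state": "up", "vsan_mismatch": False},
    {"switch": "mds-a", "line_card": 1, "port_state": "up", "vsan_mismatch": False},
]
print(paths_are_redundant(paths))  # False: both paths share one line card
```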
Switch Health Score is another unique capability of DCNM, used to track switch health over time. It allows customers to quickly determine the level of risk, isolate and fix the alerts that lower the health score, and track the health of the SAN over time.
As I said at the start of this discussion, data centers need a holistic approach to their challenges. Customers need not only higher performance, investment protection, lower opex and capex, and reliability, but also ease of management and a tightly integrated end-to-end solution. The solutions and capabilities described here solve the challenges data centers face, not only today but for years to come. We introduced the first MDS products in 2002 and have delivered many industry-first innovations since; a few examples are enumerated below. We will continue to innovate in this space for the next decade.
Sr. Product Manager, DCBU
“The best time to plant a tree was 20 years ago. The second best time is now”
Tags: 16 Gigabit, 16G FC, 16Gb, 16Gb Fibre Channel, 192 Port, 9148S, 9706, 9710, architecture, availability, best practices, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, MDS design, nexus, NX-OS, reliability, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization
Note: This is the second of a three-part series on Next Generation Data Center Design with MDS 9700; learn how customers can deploy scalable SAN networks that allow them to Scale Up or Scale Out in a non-disruptive way. [Part 1 | Part 3]
EMC World was wonderful. It was gratifying to meet industry professionals, listen in on great presentations, and watch demos of the key business-enabling technologies that Cisco, EMC, and others have brought to fruition. It is fascinating to see the data center transition from a cost center to a strategic business driver. The same repeated all over again at Cisco Live: more than 25,000 attendees, hundreds of demos and sessions, and a lot of interesting customer meetings where MDS continued to resonate. We were excited about the MDS hardware on display on the show floor, the interesting multiprotocol demo, and a lot of interesting SAN sessions.
Outside these events, we recently held a webinar on how the Cisco MDS 9710 enables high-performance data center design, with customer case studies. You can listen to it here.
So let’s continue our discussion. There is no doubt that when it comes to high-performance SAN switches, there is nothing comparable to the Cisco MDS 9710. Another component paramount to good data center design is high availability. Massive virtualization, DC consolidation, and the ability to deploy more and more applications on powerful multi-core CPUs have increased the risk profile within the data center, and these trends require a renewed focus on availability. The MDS 9710 is leading the innovation there again. Hardware design and architecture must guarantee high availability, but it is not just about hardware: it is a holistic approach spanning hardware, software, management, and the right architecture. Let me give just a few examples of the first three pillars of high reliability and availability.
The MDS 9710 is the only director in the industry that provides hardware redundancy on all critical components of the switch, including fabric cards. Cisco director switches provide not only CRC checks but also the ability to drop corrupted frames; without that ability, the network infrastructure exposes end devices to corrupted frames. Being able to drop CRC-errored frames and quickly isolate failing links, both outside and inside the director, provides data integrity and fault resiliency. VSANs provide fault isolation, PortChannels provide smaller failure domains, and DCNM provides a rich feature set for higher availability and redundancy. All of these are but a subset of the features that provide high resiliency and reliability.
We are proud of the 9500 family and of the strong foundation of reliability and availability we stand on, and we have taken that to a completely new level with the 9710. For any design within the data center, high availability must go hand in hand with consistent performance; one without the other doesn’t make sense. The right design and architecture within the DC are as important as the components that power the connectivity. As an example, Cisco recommends that customers distribute the ISL ports of a PortChannel across multiple line cards and multiple ASICs. This spreads the failure domain so that an ASIC or even a line-card failure will not impact PortChannel connectivity between the switches, and there is no need to re-initiate all host logins. You can read the white paper on the next-generation Cisco MDS here. As part of writing that white paper, ESG tested the fabric-card redundancy (page 9) in addition to other features of the platform. Remember that a chain is only as strong as its weakest link.
The most important aspect of all of this is for customers to be educated: ask the right questions, and have in-depth discussions about how to achieve higher availability and consistent performance. Most importantly, selecting the right equipment, the right architecture, and best practices means no surprises.
We will continue our discussion with the flexibility aspect of the MDS 9710.
“We are what we repeatedly do. Excellence, then, is not an act, but a habit.” (Aristotle)
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, availability, best practices, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, MDS design, nexus, NX-OS, reliability, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization
Note: This is the first of a three-part series on Next Generation Data Center Design with MDS 9700; learn how customers can deploy scalable SAN networks that allow them to Scale Up or Scale Out in a non-disruptive way. [Part 2 | Part 3]
Data centers are undergoing a major transition to meet higher performance, scalability, and resiliency requirements with fewer resources, smaller footprint, and simplified designs. These rigorous requirements coupled with major data center trends, such as virtualization, data center consolidation and data growth, are putting a tremendous amount of strain on the existing infrastructure and adding complexity. MDS 9710 is designed to surpass these requirements without a forklift upgrade for the decade ahead.
MDS 9700 provides unprecedented
- Performance – 24 Tbps Switching capacity
- Reliability – Redundancy for every critical component in the chassis including Fabric Card
- Flexibility – Speed, Protocol, DC Architecture
In addition to these unique capabilities, the MDS 9710 provides a rich feature set and investment protection to customers.
In this series of blogs I plan to focus on the design requirements of the next-generation DC with the MDS 9710, reviewing one aspect of those requirements in each post. Let us look at performance today. A lot of customers ask how the MDS 9710 delivers the highest performance today. The performance that an application delivers depends…
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, nexus, NX-OS, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization
Extensible Messaging and Presence Protocol (XMPP) is an open standard protocol based on XML (Extensible Markup Language). XMPP is designed to transport instant messages (IM) between entities and to detect online presence. It supports authentication of IM applications and secure transport of messages over SSL/TLS. In XMPP, entities can be bots, physical users, servers, devices, or components. It is a powerful tool with great potential for system administrators to add to their toolbox because:
- XMPP is powerful
- XMPP with Python is only 12 lines of code – trust me, it’s easy!
- XMPP only requires a single query for multiple nodes
- Status message can be used to track host presence
The Power of XMPP
For those of you who are not familiar with XMPP, it supports not only one-to-one messaging between entities but also multi-party messaging (which enables an entity to join a chat room and exchange messages with several participants). Messages can be text embedded in XML format, but XML can also be used to send control messages between entities, as we will see with the presence stanza in a bit.
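As a concrete illustration of that XML framing, here is how a minimal one-to-one message stanza could be assembled with Python’s standard library; the JIDs are made-up examples, not real accounts:

```python
# Build a bare-bones XMPP <message> stanza as it would appear on the wire.
# The sender and recipient JIDs are hypothetical.
import xml.etree.ElementTree as ET

def build_message_stanza(sender, recipient, text):
    msg = ET.Element("message", attrib={
        "from": sender,
        "to": recipient,
        "type": "chat",   # one-to-one chat; "groupchat" is used for rooms
    })
    ET.SubElement(msg, "body").text = text
    return ET.tostring(msg, encoding="unicode")

stanza = build_message_stanza("leaf0@dcnm.example.com",
                              "admin@dcnm.example.com",
                              "link up on Eth1/1")
print(stanza)
```

A real client wraps stanzas like this in an authenticated, TLS-protected stream, which is what XMPP libraries (and the NX-OS client) handle for you.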
XMPP is widely used: Google uses it (for its Hangouts application, formerly Google Chat), and so do Yahoo and MSN. At Cisco, we use Cisco Jabber extensively to communicate internally. The XMPP client function is now integrated in the Cisco Nexus 5000 series with release 5.2(1)N1(7) and in the Nexus 6000 series with release 7.0(0)N1(1). XMPP is an integral part of the single-console access for Dynamic Fabric Automation (DFA), a powerful framework described in my previous blog.
The new Data Center Network Manager (DCNM) 7.0(1) is delivered as an OVA file that can be deployed quickly on an existing VMware-enabled server. Although DCNM comes with many features that simplify deployment of the data center fabric, we can pick and choose any service we want to use independently, which is great since DCNM comes with Cisco Jabber XCP and is license-free. If you already have an XMPP service installed (like Openfire or ejabberd), that is not a problem: everything discussed here is valid on any standard XMPP implementation.
On NX-OS devices, the XMPP feature is activated by configuring ‘feature fabric access’ and is part of the Enhanced L2 license (ENHANCED_LAYER2_PKG). Once activated, the switch becomes an XMPP client that needs to be registered on the server. To register it, XMPP requires fully qualified domain names (FQDNs) to identify the domain server. If the switch does not have access to a DNS service, I recommend using the switch management network for messaging and a static host-to-IP address mapping in the switch configuration.
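On the switch side, that feature activation and static mapping could look like the following sketch; the FQDN and IP address are hypothetical, so check the NX-OS configuration guide for your release:

```text
! Hedged sketch of the switch-side setup described above.
! "dcnm-ova.example.com" and 192.0.2.10 are made-up example values.
configure terminal
  feature fabric access                      ! enable the XMPP client (Enhanced L2 license)
  ip host dcnm-ova.example.com 192.0.2.10   ! static FQDN-to-IP mapping when no DNS is reachable
```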
The switch uses its hostname to log in to the XMPP service. If your XMPP server does not support auto-registration, you will need to register the switch and the rooms in the XMPP database beforehand. The DCNM OVA requires users and groups to be created via the CLI; an example of this user creation is:
[root@dcnm-ova ~]# appmgr add_user xmpp -u leaf0 -p cisco123
[root@dcnm-ova ~]# appmgr add_user xmpp -u leaf1 -p cisco123
User added.
Tags: Cisco Data Center Fabric, Cisco Nexus, DCNM, instant messaging (IM), NX-OS, open standard protocol, XML, XMPP, xmpp with python
What is the new Nexus 5600?
We at Cisco are really excited to introduce the new Cisco Nexus 5600 platform! It is the third generation of the industry’s leading data center server-access Nexus 5000 series of switches. The Cisco Nexus 5600 is the successor to the industry’s most widely adopted Cisco Nexus 5500 series (with over 20,000 customers and 25 million ports shipped) and maintains all existing Nexus 5500 features, such as LAN/SAN convergence, Fabric Extenders (FEX), and FabricPath.
The new Nexus 5600 was unveiled at Cisco Live Milan in January 2014 to quite a bit of interest.
Nuts and Bolts
We are introducing two models under the 5600 platform:
Cisco Nexus 5672UP – A 1 RU 10/40G Ethernet switch offering wire-speed performance for up to 48 10G Ethernet ports (16 of which are Unified Ports) and 6 true 40G ports.
Cisco Nexus 56128P – A 2 RU 10/40G Ethernet switch offering wire-speed performance for up to 96 10G Ethernet ports (48 of which are Unified Ports) and 8 true 40G ports.
In addition to the existing features of the Nexus 5000, the 5600 platform brings new capabilities such as true 40 GE support, VXLAN bridging and routing, and the Cisco Dynamic Fabric Automation (DFA) innovation. With a latency of about 1 µs, the 5600 platform is ideal for applications that need low latency. For those of you who need network programmability, the Nexus 5600 supports Cisco onePK and OpenFlow.
Why these new features matter
Extensibility with VXLAN support
The Cisco Nexus 5600, with its VXLAN support, is very well suited for multi-tenant cloud deployments. In large-scale, multi-tenant clouds there is a need for VMs to migrate across layer-3 boundaries, and traditional VLANs support only about 4000 segments, which is insufficient when deploying thousands of VMs. Migrating across layer-3 boundaries also introduces the complexities of layer-3 routers. VXLAN was developed to solve both the scalability and the migration issues. For more details on VXLAN, watch the video: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/vidoe_fundamentals_vxlan.html
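The scale gap comes down to header width: the VLAN tag carries a 12-bit segment ID, while the VXLAN header carries a 24-bit VXLAN Network Identifier (VNI):

```python
# Segment ID space: 12-bit VLAN ID vs. 24-bit VXLAN VNI.
VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

max_vlans = 2 ** VLAN_ID_BITS   # 4096 (a few values are reserved in practice)
max_vnis = 2 ** VXLAN_VNI_BITS  # 16,777,216 possible segments
print(max_vlans, max_vnis)      # 4096 16777216
```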
Ease of management with Cisco Dynamic Fabric Automation (DFA)
Our customers are also faced with complex, manual network configurations and have a hard time keeping up with application requirements. To solve these challenges, Cisco has developed an architecture called Dynamic Fabric Automation (DFA), which simplifies management and automation: automatic device and fabric configuration, automatic VM deployment and migration, and seamless integration of bare-metal and virtualized resources in the data center. With DFA implemented in both hardware and software, the Cisco Nexus 5600 platform is ideal for multi-tenant and mixed (physical and virtual) cloud infrastructure.
For more information on DFA, please visit: http://www.cisco.com/en/US/solutions/ns340/ns517/ns224/ns945/dynamic_fabric_automation.html
True 40G support
The difference between a 40G port and a true 40G port is that a true 40G port can carry an entire 40G flow, whereas a normal “40G” port is really four 10G ports bundled via EtherChannel. With true 40G you therefore get the full 40G of bandwidth for a single flow. The Cisco Nexus 5600 platform switches have true 40G ports, which can service full 40G flows.
The Big Picture
Cisco has one of the most comprehensive portfolios for data center and cloud networking, and the Nexus 5600 platform is but one piece of it. You may want to read this excellent blog, which explains Cisco’s data center and cloud networking portfolio.
Tags: Cisco DFA, Cisco Nexus 5600, DCNM, Nexus 5000, Nexus 5600, Nexus 56128P, Nexus 5672UP, Nexus 6000, NX-OS, switch, Unified Fabric, Unified Ports, virtualization, VXLAN