
Why Upgrade to MDS 9700

The MDS 9500 family has supported customers for more than a decade, carrying them through FC speed transitions from 1G to 2G, 4G, 8G, and 8G Advanced without forklift upgrades. But as we look to the future, the MDS 9700 makes more sense for a lot of data center designs. The top four reasons for customers to upgrade are:

  1. End of Support Milestones
  2. Storage Consolidation
  3. Improved Capabilities
  4. Foundation for Future Growth

So let's look at each in some detail.

  1. End of Support Milestones

MDS 4G parts reach End of Support on Feb 28, 2015. The impacted part numbers are DS-X9112, DS-X9124, and DS-X9148. You can move to MDS 9500 Advanced 8G cards or to an MDS 9700 based design. A few advantages the MDS 9700 offers over the other existing options are:

a. Investment Protection -- Any new data center design based on the MDS 9700 will have a much longer life than one based on the MDS 9500 product family, avoiding EOL concerns or upgrades in the near future. An MDS 9700 based design thus provides strong investment protection and ensures the architecture remains relevant for evolving data center needs for more than a decade.

b. EOL Planning -- With an MDS 9700 based design you control when to add additional blades. With the MDS 9500 you will have to either fill up the chassis within six months (End of Life announcement to End of Sale) or leave the slots empty forever after the End of Sale date.

c. Simplified Design -- The MDS 9700 allows a single SKU, a single software version, and a consistent design across the whole fabric, which simplifies management. The MDS 9700's massive performance allows for consolidation, reducing footprint and management burden.

d. Rich Feature Set -- Finally, as we will see later, the MDS 9700 provides a host of features and capabilities above and beyond the MDS 9500, and that enhancement list will continue to grow.

[Figure: Tech Refresh Example]

  2. Storage Consolidation

The MDS 9700 provides unprecedented consolidation compared to the existing solutions in the industry. As an example, with the MDS 9710 customers can use the 16G line rate ports to support massively virtualized workloads and consolidate their server install base. Likewise, with the 9148S as a top of rack switch and the MDS 9700 at the core, you can design massively scalable networks that support consistent latency and 16G throughput independent of the number of links and the traffic profile, allowing customers to scale up or scale out much more easily than with legacy designs or any other architecture in the industry.
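To make the edge-to-core sizing concrete, here is a minimal sketch (Python; the port counts and uplink split are illustrative assumptions, not a recommended layout) of the oversubscription calculation you would run for a 9148S top of rack plus MDS 9700 core design:

    # Edge-to-core oversubscription for a ToR/core SAN design.
    # Port counts, speeds, and the uplink split are illustrative assumptions.

    def oversubscription(host_ports, host_gbps, uplinks, uplink_gbps):
        """Ratio of host-facing bandwidth to core-facing bandwidth on an edge switch."""
        return (host_ports * host_gbps) / (uplinks * uplink_gbps)

    # Example: a 48-port 16G edge switch with 42 host ports and 6 core uplinks.
    ratio = oversubscription(host_ports=42, host_gbps=16, uplinks=6, uplink_gbps=16)
    print(f"edge-to-core oversubscription: {ratio:.1f}:1")  # 7.0:1

Keeping this ratio in check as you add edge switches is what lets the design scale out without hot spots.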

Moreover, as shown in the figure above, for customers with MDS 9500 based designs the MDS 9710 offers a higher number of line rate ports in a smaller footprint and a much more economical way to design SANs. It also enables consolidation with higher performance as well as much higher availability.

[Figure: Small SANs to Large SAN Design]

 

  3. Improved Capabilities

The MDS 9700 provides enhanced capabilities above and beyond the MDS 9500, and many more will be added in the future. Some examples that are top of mind are detailed below.

Availability: An MDS 9700 based design improves reliability through enhancements on many fronts, as well as by simplifying the overall architecture and management.

    • The MDS 9710 introduced a host of features to improve reliability, like the industry’s first N+1 fabric redundancy, smaller failure domains, and hardware based slow drain detection and recovery (illustrated conceptually in the sketch after this list).
    • It’s well understood that the reliability of any network comes from proper design, regular maintenance, and support, so it is imperative that the data center runs recommended releases and supported hardware. An outage involving unsupported hardware or software is exponentially more catastrophic, because fixing it means new procurement and live insertion with no change management window. The cost of a data center outage is extremely high, so it is important to keep the fabric upgraded, on the latest release, with all components supported. For new designs it therefore makes sense to build on the latest MDS 9700 directors rather than, for example, MDS 9513 Gen-2 line cards, which fall off support on Feb 28, 2015. Running a mix of hardware and software versions also complicates maintenance and upkeep, with a direct impact on both network availability and operational complexity.
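To show conceptually what slow drain detection looks for, here is a minimal sketch (Python; the counter names, poll model, and threshold are hypothetical illustrations, not the actual hardware or NX-OS implementation) that flags ports whose transmit credit starvation keeps growing:

    # Conceptual slow-drain detection: flag ports whose tx-credit-unavailable
    # time keeps rising between polls. Names and threshold are hypothetical.

    SLOW_DRAIN_THRESHOLD_MS = 100  # credit-starved ms per poll interval (assumed)

    def find_slow_drain_ports(prev_counters, curr_counters):
        """Return (port, delta) pairs whose starvation grew past the threshold."""
        suspects = []
        for port, curr_ms in curr_counters.items():
            delta = curr_ms - prev_counters.get(port, 0)
            if delta > SLOW_DRAIN_THRESHOLD_MS:
                suspects.append((port, delta))
        return suspects

    prev = {"fc1/1": 10, "fc1/2": 5}
    curr = {"fc1/1": 15, "fc1/2": 450}  # fc1/2 starved for 445 ms this interval
    for port, delta in find_slow_drain_ports(prev, curr):
        print(f"{port}: credit-unavailable grew {delta} ms; slow drain suspect")

The point of doing this in hardware is that detection and recovery happen at wire speed, before a single misbehaving device backs up traffic across the fabric.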

Throughput:

With massive amounts of virtualization, the user impact of any downtime, or even performance degradation, is much higher. Similarly, with data center consolidation and higher speeds available in edge to core connectivity, more and more host edge ports are connected through the same core switches, so a higher number of apps depend on consistent end to end performance for a reliable user experience. The MDS 9700 provides the industry's highest performance, with 24 Tbps of switching capability. The director class switch is based on a crossbar architecture with central arbitration and virtual output queuing, which ensures consistent line rate 16G throughput independent of the traffic profile, with all 384 ports operating at 16G speeds, and without crutches like local switching (much akin to emulating independent fixed fabric switches within a director), oversubscription (which can cause intermittent performance issues), or bandwidth allocation.
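A quick back-of-the-envelope check of that line rate claim (a sketch; the only input not taken from the text above is counting full duplex as a factor of two):

    # Bandwidth needed for 384 line-rate 16G ports vs. the quoted 24 Tbps capacity.
    ports, port_gbps = 384, 16
    needed_tbps = ports * port_gbps * 2 / 1000  # x2 for full duplex
    print(f"needed: {needed_tbps:.1f} Tbps of 24 Tbps")  # 12.3 Tbps, ample headroom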

Latency:

MDS directors are store and forward switches. This is needed to ensure that corrupted frames do not traverse the network and that end devices don’t waste precious CPU cycles dealing with corrupted traffic; the additional latency is acceptable because it protects end devices and preserves the integrity of the whole fabric. And since all ports are line rate, customers don’t have to use local switching, which again adds a small amount of latency but yields a flexible, scalable design that is resilient and doesn’t break down in the future. These two basic design requirements result in a latency number that is slightly higher, but one that scales, guarantees predictable performance under any traffic profile, and provides much higher fabric resiliency.

Consistent Latency: For MDS directors, latency is the same for a single 16G flow as when there are 384 16G flows going through the system; the crossbar-based switch design, central arbitration, and virtual output queuing guarantee that. Latency that varies from a few microseconds to a much higher number is extremely dangerous, so the first thing to verify is that the director provides consistent and predictable latency.

End to End Latency: The performance of any application or solution depends on end to end latency. Focusing on the SAN fabric alone is myopic, as the major portion of latency is contributed by the end devices. As an example, a spinning target's latency is on the order of milliseconds; a few microseconds in the fabric is orders of magnitude less and hence not even observable. With SSDs, latency is on the order of 100 to 200 us. Assuming 150 us, the contribution of the SAN fabric in an edge-core design is still less than 10%. The majority (90%) of the latency lives in the end devices, so saving a couple of microseconds in the SAN fabric will hardly impact overall application performance, while the architectural advantages of CRC-based error drops and a scalable fabric design deliver reliable operations.
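That fabric-share arithmetic is easy to reproduce; a minimal sketch using the same illustrative numbers (150 us for the SSD, a few microseconds per director hop, both assumptions carried over from the text above):

    # Fabric share of end-to-end latency, using the illustrative numbers above.
    ssd_us = 150        # assumed SSD service time
    fabric_us = 2 * 5   # two director hops (edge + core) at ~5 us each, assumed
    total_us = ssd_us + fabric_us
    print(f"fabric share: {fabric_us / total_us:.0%}")  # ~6%, well under 10%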

Scalability:

For larger enterprises scalability has been a challenge due to the massive amount of host virtualization. As more and more VMs log into the fabric, the requirement on the fabric to support more FLOGIs, zones, and domains keeps increasing. The MDS 9700 has the industry's highest scalability numbers, as it is powered by a supervisor with 4 times the memory and compute capability of its predecessor. This translates to higher scalability today while providing room for future growth.

[Figure: MDS 9700 Capabilities]

  4. Foundation for Future Growth

The MDS 9700 provides a strong foundation to meet the performance and scalability needs of the data center, and its massive switching capability, compute, and memory will cover your needs for more than a decade.

    • It will allow you to move to 32G FC speeds without a forklift upgrade or a change of fabric card type; rather, you will need 3 more of the same fabric cards to get line rate throughput through all 384 ports on the MDS 9710 (and 192 on the MDS 9706), as sketched after this list.
    • The MDS 9700 allows customers to deploy a 10G FCoE solution today and later upgrade to 40G FCoE, again without a forklift upgrade.
    • The MDS 9700 is again unique in that customers can mix and match FC and FCoE line cards any way they want, without limitations or constraints.
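The fabric module math behind the first bullet can be sketched as follows (Python; the per-module, per-slot bandwidth is an assumption chosen to be consistent with the "3 more cards" statement above, not a data sheet figure, so check the MDS 9700 data sheet for the actual value):

    # Fabric modules needed for line rate: a sketch with an assumed
    # per-module, per-slot bandwidth (not a data sheet value).
    import math

    PER_MODULE_SLOT_GBPS = 256  # assumed bandwidth each fabric module adds per slot

    def fabric_modules_needed(ports_per_slot, port_gbps):
        return math.ceil(ports_per_slot * port_gbps / PER_MODULE_SLOT_GBPS)

    print(fabric_modules_needed(48, 16))  # 3 modules for a 48-port 16G line card
    print(fabric_modules_needed(48, 32))  # 6 modules at 32G, i.e. 3 more of the same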

Most importantly, customers don’t have to make an FC vs. FCoE decision. Whether you want to continue with FC and have plans for 32G FC or beyond, or you are looking to converge two networks into a single network tomorrow or a few years down the road, the MDS 9700 will provide consistent capabilities in both architectures.

[Figure: Foundation for Future Growth]

In summary, SAN directors are a critical element of any data center. Going back in time, the basic reason for having a separate SAN was to provide unprecedented performance, reliability, and high availability. Data center architecture has to keep up with the requirements of a new generation of applications, virtualization of even the highest performance apps like databases, new design requirements introduced by solutions like VDI, ever increasing solid state drive usage, and device proliferation. At the same time, as networks grow increasingly complex, the basic necessity is to simplify configuration, provisioning, resource management, and upkeep. These are exactly the design paradigms the MDS 9700 was built to solve, more elegantly than any existing solution.

Although I am biased in saying so, it seems you have voted with your acceptance. Please see some more details here.

 

 

Live as if you were to die tomorrow. Learn as if you were to live forever.
Mahatma Gandhi

 

 


Innovate More with Confidence in Your Cloud

Guest post from Dan Swart

Dan Swart is a Senior Manager in Cisco Technical Services Product Management, leading the team responsible for Enterprise and Data Center Solution Support services. Along with that, Dan has been heavily involved in Data Center Alliance programs and Converged Infrastructures. Dan has Bachelor of Science degrees in Zoology and Electrical Engineering from North Carolina State University.

In my last blog post, Complexity and Control in the Cloud, I covered some basic considerations as you navigate vendors and solutions when planning your enterprise cloud.

Unsurprisingly, when Cisco is talking to customers about their private cloud needs and our data center solutions, customers very quickly sound this panic button …



Video Demo: The Power of ACI Physical Network Visibility in an SDN Overlay Environment

December 22, 2014 at 5:00 am PST

[Note: Register today for our upcoming live ACI webcast: “Is Your Data Center Ready for the Application Economy”, January 13, 2015, 9 AM PT, Noon ET, featuring ACI customers and several key ACI technology partners.]

At the most recent Gartner Data Center Conference in Las Vegas, after some insightful discussions with customers and analysts, we came up with a great demo idea and proof point that highlights a key feature in our Application Centric Infrastructure (ACI) platform. This particular demo centers on the unique visibility of the ACI Fabric to faults in the underlying physical network.

Joe Onisick, Principal Engineer in the ACI team at Cisco, compares this ability in ACI to SDN technologies that employ only virtual overlay networks in the following video. With overlay networks, such as a VXLAN tunnel, the resulting virtual network (and all the management and analytics tools) has a much harder time isolating faults within the physical infrastructure. The overlay is designed to “tunnel” through the physical network, simplifying and obscuring the physical topology and issues with any specific network node. Before going much further, I’ll let Joe provide the details in this quick, 3 minute video:



Cisco UCS Delivers the Highest TPC-H Result for Non-Clustered Systems at the 1000-GB Scale Factor with Microsoft SQL Server

The Cisco UCS® C460 M4 Rack Server continues its tradition of industry leadership with the new announcement of the best non-clustered TPC-H benchmark result at the 1000-GB scale factor, in concert with Microsoft SQL Server 2014 Enterprise Edition.

The Cisco UCS® C460 M4 Rack Server captured the number-one spot on the TPC-H benchmark at the 1000-GB scale factor with a price/performance ratio of $0.97 USD per QphH@1000GB and demonstrated 588,831 queries per hour (QphH@1000GB), beating results from Dell, Fujitsu, and IBM.

The TPC-H benchmark evaluates a composite performance metric (QphH@size) and a price-to-performance metric ($/QphH@size) that measure the performance of decision-support systems by running sets of queries against a standard database under controlled conditions. For the benchmark, the server was equipped with 1.5 TB of memory and four 2.8-GHz Intel Xeon processor E7-4890 v2 CPUs. The system ran Microsoft SQL Server 2014 Enterprise Edition and Windows. Check out the Performance Brief for additional information on the benchmark configuration. The detailed official benchmark disclosure report is available at the TPC Results Highlights website.

Some of the key highlights of Cisco’s TPC-H Benchmark results are:

  • The Cisco UCS® C460 M4 Rack Server delivered the highest TPC-H result ever reported for non-clustered systems at the 1000-GB scale factor.
  • High performance for Microsoft SQL Server 2014: Cisco's is the fastest server at the 1000-GB scale factor running Microsoft SQL Server.
  • As illustrated in the graph below, the Cisco result beats the top Fujitsu, Dell, and IBM results for the 1000-GB scale factor by 80, 31, and 13 percent, respectively, and Cisco's price/performance ratio is 29 percent less than IBM's (see the sketch after this list).
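For reference, the percentage margins in the last bullet can be turned back into approximate competitor scores; a sketch (the derived QphH values are illustrative back-calculations, not figures quoted from the TPC disclosures):

    # Back-derive approximate competitor QphH@1000GB from Cisco's result and
    # the stated margins. Derived values are illustrative, not from disclosures.
    cisco_qphh = 588_831
    for vendor, margin in [("Fujitsu", 0.80), ("Dell", 0.31), ("IBM", 0.13)]:
        competitor = cisco_qphh / (1 + margin)
        print(f"{vendor}: ~{competitor:,.0f} QphH ({margin:.0%} behind Cisco)")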

 

[Figure: C460 TPC-H Results]
It is interesting to note that although all vendors have access to the same Intel processors, only Cisco UCS unleashes their power to deliver high performance to applications through the power of unification. The unique, fabric-centric architecture of Cisco UCS integrates the Intel Xeon processors into a system with a better balance of resources that brings processor power to life. For additional information on Cisco UCS and Cisco UCS Integrated Infrastructure solutions, please visit the Cisco Unified Computing & Servers web page.

Disclosure

The Transaction Processing Performance Council (TPC) is a nonprofit corporation founded to define transaction processing and database benchmarks, and to disseminate objective and verifiable performance data to the industry. TPC membership includes major hardware and software companies. TPC-H, QphH, and $/QphH are trademarks of the Transaction Processing Performance Council (TPC). The performance results described in this document are derived from detailed benchmark results available as of December 15, 2014, at http://www.tpc.org/tpch/default.asp.


Cisco UCS Mini Wins infoTECH Spotlight Award

December 18, 2014 at 1:13 pm PST

Technology Marketing Corporation (TMC) announced the winners of the 2014 infoTECH Spotlight Data Center Excellence Awards today. Cisco is honored that UCS Mini is one of the recipients! To quote from the TMC press release:

“The 2014 infoTECH Spotlight Data Center Excellence Award recognizes the most innovative and enterprising data center vendors who offer infrastructure or software, servers or cooling systems, cabling or management applications.”

