Cisco Blogs


Cisco Blog > Data Center and Cloud

#EngineersUnplugged S7|Ep1: SAP HANA 101

December 9, 2014 at 9:19 am PST

In this episode of Engineers Unplugged, Tony Harvey (@tonyknowspower) and Craig Sullivan (@craigsullivan70) discuss the role of storage in SAP HANA. How does big data impact you? Watch and learn.

 

https://www.youtube.com/watch?v=TmZwq1UCo8w

Thinking about unicorns.

This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

Go behind the scenes by liking Engineers Unplugged on Facebook.


#EngineersUnplugged S6|Ep14: Policy-Based Management and VVols

October 29, 2014 at 9:59 am PST

In this episode of Engineers Unplugged, Nick Howell (@datacenterdude) and Gabriel Chapman (@bacon_is_king) discuss policy-based management and VVols. This is a storage deep dive for the modern age; don't miss it!

Not only has storage evolved, so have the unicorns.

The evolution of the storage unicorns.


This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

Go behind the scenes by liking Engineers Unplugged on Facebook.


#EngineersUnplugged S6|Ep13: #OpenStack and Cinder

October 22, 2014 at 12:31 pm PST

In this week's episode of Engineers Unplugged, John Griffith (@jdg_8) and Kenneth Hui (@hui_kenneth) discuss Cinder, the service that abstracts and delivers block storage inside OpenStack. Great info with practical applications in this second episode of our OpenStack series, leading up to the OpenStack Summit in Paris.

And let there be whiteboards and unicorns!

Cinder + OpenStack + Unicorns (courtesy of John Griffith and Kenneth Hui!)


**Want to be Internet Famous? Act now! Join us for our next shoot: NetApp Insight. Tweet me @CommsNinja!**

This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

Go behind the scenes by liking Engineers Unplugged on Facebook.


MDS 9700 Scale Out and Scale Up

This is the final part of the High Performance Data Center Design series. We will look at how high performance, high availability, and flexibility allow customers to scale up or scale out over time without any disruption to the existing infrastructure. MDS 9710 capabilities are field proven, with wide adoption and a steep ramp within the first year of introduction; some of the MDS 9710 customer use cases are detailed here. Furthermore, over the last 12 years Cisco has not only established itself as a strong player in the SAN space with many industry-first innovations such as VSAN, IVR, FCoE, and Unified Ports, but has also earned the leading market share in SAN.

Before we look at some architecture examples, let's start with the basic tenets any director-class switch should support when it comes to scalability and future customer needs:

  • The design should be flexible enough to scale up (increase performance) or scale out (add more ports)
  • The process should not disrupt the current installation through recabling, performance impact, or downtime
  • Design principles such as oversubscription ratio, latency, and throughput predictability (for example, from host edge to core) should not be compromised at either the port level or the fabric level

Let's take a scale-out example, where a customer wants to add more 16G ports down the road. For this example I have used a core-edge design with 4 edge MDS 9710s and 2 core MDS 9710s. There are 768 hosts at 8Gbps and 640 hosts at 16Gbps connected to the 4 edge MDS 9710s, for a total of roughly 16 Tbps of edge connectivity. With an 8:1 oversubscription ratio from edge to core, the design requires 2 Tbps of edge-to-core connectivity. The 2 core systems connect to the edge switches and to the targets with 128 ports running at 16Gbps in each direction. The picture below shows the connectivity.
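To make the arithmetic concrete, here is a minimal sketch of the sizing math in Python (the helper and its names are purely illustrative, not a Cisco tool):

```python
import math

def edge_to_core_sizing(host_ports, oversub_ratio, isl_speed_gbps=16):
    """host_ports: list of (port_count, speed_gbps) tuples at the host edge."""
    edge_gbps = sum(count * speed for count, speed in host_ports)
    core_gbps = edge_gbps / oversub_ratio          # bandwidth needed edge-to-core
    isls = math.ceil(core_gbps / isl_speed_gbps)   # link count at the given ISL speed
    return edge_gbps, core_gbps, isls

# Day-1 numbers from the example: 768 hosts at 8G plus 640 hosts at 16G, 8:1 oversubscription.
edge, core, isls = edge_to_core_sizing([(768, 8), (640, 16)], oversub_ratio=8)
print(f"{edge / 1000:.1f} Tbps edge, {core / 1000:.1f} Tbps to core, {isls} x 16G links")
# -> 16.4 Tbps edge, 2.0 Tbps to core, 128 x 16G links
```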

Edge Core Design Day 1

Down the road the data center requires 188 more ports running at 16G. These 188 ports are added to a new edge director (or to open slots in the existing directors), which is then connected to the core switches with 24 additional edge-to-core links; the same is repeated with 24 additional 16G target ports. The fact that this scale-out is not disruptive to the existing infrastructure is extremely important. In any of the scale-out or scale-up cases there is minimal impact, if any, on the existing chassis layout, data path, cabling, throughput, or latency. For example, if the customer does not want to string additional cables between the core and edge directors, they can upgrade to higher-speed cards (32G FC or 40G FCoE with BiDi) and double the bandwidth on the existing cable plant.
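The incremental step works out the same way; a quick illustrative check of the numbers above:

```python
import math

new_edge_gbps = 188 * 16              # 3,008 Gbps of new host bandwidth
new_core_gbps = new_edge_gbps / 8     # 376 Gbps more edge-to-core at 8:1
print(math.ceil(new_core_gbps / 16))  # -> 24 additional 16G links
```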

Edge Core Design Scale UP

Let's look at another example, where the customer wants to scale up (i.e., increase the performance of the connections). Let's use an edge-core-edge design this time. There are 6,144 hosts running at 8Gbps distributed over 10 edge MDS 9710s, resulting in a total of roughly 49 Tbps of edge bandwidth. Let's assume this data center uses an oversubscription ratio of 16:1 from the edge into the core, which works out to about 3 Tbps of edge-to-core bandwidth. To satisfy that requirement the administrator designed the DC with 2 core switches of 192 ports each, providing 3 Tbps toward the host edge and 3 Tbps toward the target edge. Let's assume that at initial design the customer connected 768 storage ports running at 8G.
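Again, a quick illustrative sketch of the day-1 math (reading the 192-port figure as the 16G links facing the host edge):

```python
edge_gbps = 6144 * 8        # 49,152 Gbps, roughly 49 Tbps at the host edge
core_gbps = edge_gbps / 16  # 3,072 Gbps, roughly 3 Tbps into the core at 16:1
links_16g = core_gbps / 16  # 192 x 16G host-edge-facing core links
print(edge_gbps, core_gbps, links_16g)
```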

Edge Core Design Day1

 

A few years down the road the customer may want to add an additional 6,144 8G ports and keep the same oversubscription ratios. This has to be implemented in a non-disruptive manner, without any performance degradation on the existing infrastructure (in either throughput or latency), and without constraints on protocol, optics, or connectivity. In this scenario the host edge connectivity doubles to roughly 98 Tbps, and the required edge-to-core bandwidth increases to about 6 Tbps. The data center admin has multiple options for providing that extra core bandwidth: add more 16G ports (192 more, to be precise), or preserve the cabling and use 32G connectivity for the host-edge-to-core and core-to-target-edge links on the same chassis. The admin could just as easily use 40G FCoE at that time to meet the bandwidth needs in the core of the network without any forklift upgrade.

Edge Core Edge Design Scale Out
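The two options can be sanity-checked the same way (a sketch assuming the 192 day-1 core-facing 16G links from above):

```python
day1_links, day1_speed = 192, 16           # day 1: 192 x 16G core-facing links
needed_gbps = 2 * day1_links * day1_speed  # doubled edge needs ~6,144 Gbps into the core

extra_16g_links = needed_gbps // day1_speed - day1_links  # option 1: 192 more 16G links
same_cabling_at_32g = day1_links * 32                     # option 2: 6,144 Gbps on existing cabling
print(extra_16g_links, same_cabling_at_32g)
```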

Alternatively, the customer may want to upgrade the hosts to 16G connectivity while keeping the same oversubscription ratios. With 16G connectivity the host edge bandwidth again increases to roughly 98 Tbps, and the data center administrator has the same flexibility regarding protocol, cabling, and speeds.
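The scale-up variant lands on the same core requirement (illustrative arithmetic):

```python
edge_gbps = 6144 * 16       # 98,304 Gbps, roughly 98 Tbps once hosts move from 8G to 16G
core_gbps = edge_gbps / 16  # 6,144 Gbps, roughly 6 Tbps into the core, same options as above
print(edge_gbps, core_gbps)
```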

Edge Core Edge Example 1 ScaleUP

With either option the disruption is minimal. In real life there will be a mix of requirements on the same fabric, some scale-out and some scale-up, and in those circumstances data center admins have the same flexibility and options. With a chassis life of more than a decade, customers can upgrade to higher speeds when they need to, without disruption and with maximum flexibility. The figure below shows how easily customers can scale up or scale out.

 

Edge Core Edge Design Scale Out Scale Up

 

As these examples show, the Cisco MDS solution gives customers the ability to scale up or scale out in a flexible, non-disruptive way.

“Good design doesn’t date. Bad design does.”
Paul Rand

 


Next Generation Data Center Design With MDS 9700 – Part III

This week has been exciting: I had the opportunity to sit at a round table with some of Cisco's largest customers for an open-ended architecture discussion of their take on the past, present, and future. More on that some other time; let's pick up the last critical aspect of High Performance Data Center design, namely flexibility. Customers need flexibility to adapt to changing requirements over time, as well as to support the diverse requirements of their users. Flexibility is not just about protocol, although protocol is a very important aspect; it is also about making sure customers have the choice to design, grow, and adapt their DC according to their needs. For example, if customers want to take advantage of the time-to-market benefits and ubiquity of Ethernet, they can do so by adopting FCoE.

Flexibility

Moreover, flexibility has to be complemented by seamless integration, so that customers can not only mix and match architectures, protocols, and speeds, but also evolve from one to another over time with minimal disruption and without forklift upgrades. Investment protection of more than a decade on Cisco director switches allows customers to move to higher speeds or adopt new protocols using the existing chassis and fabric cards. Finally, any solution should allow scalability over time with minimal disruption and a common management model. As an example, on the MDS 9710 or MDS 9706 customers can choose 2/4/8G FC, 4/8/16G FC, 10G FC, or 10G FCoE at each hop.

Multiprotocol Innovation

Let's review each aspect of flexibility one at a time.

 

Architecture:

The Cisco SAN product family is designed for architectural flexibility, from the smallest customers to the largest and everything in between. Customers can grow from 12 to 48 16G ports on a single MDS 9148S. They can grow from 48 16G line-rate ports to 192 16G line-rate ports on the MDS 9706, and up to 384 ports on the MDS 9710. Finally, seamless FC and FCoE capability allows customers to use these directors as edge or core switches. With industry-leading scalability numbers, customers can scale up or scale out as their needs dictate. Two examples show how customers can use director-class switches (9513, 9506, 9710, or 9706) in End of Row designs. Similarly, customers can build Top of Rack designs using the Nexus fixed family or the MDS 9148S.

Examples of Edge Core Designs with MDS ToR and EoR

If they want to continue with FC for the foreseeable future, or have a sizable FC infrastructure they want to leverage (while keeping the option to move to FCoE), then MDS serves their needs. Similarly, they can support edge-core designs, edge-core-edge designs, or even collapsed cores if so desired.

 

FC Edge Core and Edge Core Edge

 

If customers need a converged switch, then the Nexus 2K, 5K, and 6K provide the flexibility to collapse two networks into one and simplify management, as shown in the picture below.

FEX and Nexus Edge Design Options

Speeds

Customers can mix and match FC speeds (2G/4G/8G and 4G/8G/16G) on the latest MDS 9148S and the MDS 9700 product family. With all the major optics supported, customers can pick and choose optics for anything from the shortest runs to long-distance CWDM and DWDM solutions, in addition to SW, LW, and ER choices. The MDS 9700 also supports 10GE optics carrying 10G FC traffic, making it easy to implement 10G DWDM solutions over ubiquitous 10GE circuits.

Protocol

FC is the dominant protocol within the DC, but at the same time many customers are adopting FCoE to improve ROI, simplify the network, or simply to gain higher speeds and agility. Whatever the needs and timeline, the MDS solution allows customers to adopt FCoE today or down the road, without forklift upgrades on the existing MDS 9700 platforms, while leveraging the existing FC install base.

FCoE Flexibility

The diagram above shows how customers can collapse the LAN and SAN networks at the edge into one network. The advantages of FEX include reduced TCO and simplified operations: the parent switch provides a single point of management and policy enforcement, and plug-and-play management includes auto-configuration.

As another example of making the transition less disruptive for customers, Cisco supports BiDi optics on the Nexus product family. These allow customers to use the same OM2, OM3, and OM4 cable plant for 40G FCoE connectivity, so they don't have to rip and replace their cabling.

BiDi Option

Customers who are not ready to converge their networks, but want faster time to market, higher performance, and Ethernet economies of scale, can run separate LAN and SAN networks and use FCoE for the dedicated SAN.

Evolution path from FC to FCoE

Coupled with the broad Cisco product portfolio, this gives customers maximum flexibility to tune the architecture precisely to their needs. The portfolio is tightly integrated: all the SAN switches run the same NX-OS, and DCNM provides seamless manageability across LAN, SAN, and converged infrastructure, all the way to the Fabric Interconnects on UCS.

Broad Product Portfolio

To quickly capture the last three blogs, these are the unique characteristics of the MDS 9700 that enable high-performance, scalable data center design:

  • 24 Tbps switching capacity and line-rate 16G FC ports, with no oversubscription, local switching, or bandwidth allocation
  • Redundancy for every critical component in the chassis, including the fabric cards
  • Data resiliency with CRC checks and Forward Error Correction, multiple levels of CRC checks, and smaller failure domains

In the next few days let's put this all together and see how customers can deploy scalable networks that allow them to scale up or scale out in a non-disruptive way.

To learn more about the MDS 9148S, please join us for a webinar.

“In business, words are words; explanations are explanations, promises are promises, but only performance is reality.”

Harold S. Geneen
