Cisco is again a Premier Sponsor of the OpenStack Summit, November 3-7 at Le Palais des Congrès in Paris. Here’s a summary of Cisco-sponsored activities for your schedule.
Premier Breakout Session: “A World of Many (OpenStack) Clouds”
Wed. 05 Nov; 13:50 – 14:30
Cisco VP and Cloud CTO, Lew Tucker, will talk about how Cisco is working with leading service providers and enterprise customers to enable a world of interconnected clouds. Find out how Cisco is delivering greater automation, programmability, and openness for IT infrastructure, to support the next generation of virtualization and cloud.
Cisco Expo Booth, Location #C3
Stop by and pick up a special OpenStack@Cisco gift while supplies last. Cisco specialists in services, sales and product development will be available to chat and answer any questions.
Mon. 03 Nov: 8:15 – 9:30 and 11:15 – 19:30
Tues. 04 Nov: 10:45 – 18:00
Wed. 05 Nov: 9:00 – 16:30
See demonstrations of:
- OpenStack Networking Using Cisco CSR and Nexus
- Cisco UCS Integrated Infrastructure with Red Hat OpenStack Platform
- Group-Based Policy for Cloud Deployment
- Cisco UCS Bare-Metal-as-a-Service Cloud
Find out more about Metacloud, which officially became a part of Cisco on 17 September. Metacloud offers OpenStack clouds as a service, giving customers a choice of hosted or hybrid architectures that operate like a public cloud from inside an organization’s own data center.
Breakout: Group Based Policy Extension for Networking
Mon. 03 Nov; 16:20 – 17:00
Sumit Naiksatam, Principal Engineer, Cisco
Breakout: Deploying and Auto-Scaling Applications on OpenStack with Heat
Tues. 04 Nov; 11:15 – 11:55
Daneyon Hansen, Software Engineer, Cisco
Panel Discussion: OpenStack Design Guide
Tues. 04 Nov; 14:00 – 14:40
Featuring: Maish Saidel-Keesing, Platform Architect, Cisco Video Technologies
Panel Discussion: Tips and Tools for Building a Successful OpenStack Group
Tues. 04 Nov; 14:50 – 15:30
Featuring Shannon McFarland, Principal Engineer and Mark T. Voelker, Technical Lead; Cisco
Breakout: Using Ceilometer Data to Detect Fraud in the OpenStack Cluster
Wed. 05 Nov; 9:50 – 10:30
Debojyoti Dutta, with Marc Solanas Tarre, Principal Engineers, Cisco
Breakout: Under the Hood with Nova, Libvirt and KVM (Part Two)
Wed. 05 Nov; 9:50 – 10:30
Rafi Khardalian, CTO, Metacloud/Cisco
Breakout: Scaling OpenStack Services: The Pre-TripleO Service Cloud
Wed. 05 Nov; 16:30 – 17:10
Kevin Bringard, with Richard Maynard; Technical Leads, Cisco
Evening Reception with Red Hat
Wed. 05 Nov; 20:00 – 2:00
Each attendee who completes the Red Hat and Cisco Booth Rally Challenge (instructions onsite) will receive a ticket for the Evening Reception held at Faust, an entertainment venue at the foot of the Invalides Esplanade, underneath the Alexandre III Bridge. Shuttle transportation will be available, and food and drinks will be served. This is an awesome location and might very well be the highlight of the week.
Tags: ACI, Cisco, InterCloud, lew tucker, Neutron, nexus, OpenStack, Paris, UCS
This is the final part of the High Performance Data Center Design series. We will look at how high performance, high availability, and flexibility allow customers to scale up or scale out over time without any disruption to the existing infrastructure. MDS 9710 capabilities are field-proven, with wide adoption and a steep ramp within the first year of its introduction. Some of the customer use cases for the MDS 9710 are detailed here. Furthermore, with industry-first innovations such as VSAN, IVR, FCoE, and Unified Ports introduced over the last 12 years, Cisco has not only established itself as a strong player in the SAN space but also holds the leading market share in SAN.
Before we look at some architecture examples, let’s start with the basic tenets any director-class switch should support when it comes to scalability and future customer needs:
The design should be flexible enough to scale up (increase performance) or scale out (add more ports)
The process should not disrupt the current installation, whether through recabling, performance impact, or downtime
Design principles such as oversubscription ratio, latency, and throughput predictability (for example, from host edge to core) shouldn’t be compromised at either the port level or the fabric level
Let’s take a scale-out example, where a customer wants to add more 16G ports down the road. For this example I have used a core-edge design with four edge MDS 9710s and two core MDS 9710s. There are 768 hosts at 8 Gbps and 640 hosts at 16 Gbps connected to the four edge MDS 9710s, for a total of roughly 16 Tbps of host connectivity. With an 8:1 oversubscription ratio from edge to core, the design requires 2 Tbps of edge-to-core connectivity. The two core systems connect to the edge switches and to the targets using 128 ports running at 16 Gbps in each direction. The picture below shows the connectivity.
Down the road, the data center requires 188 more ports running at 16G. These 188 ports are added on a new edge director (or on open slots in the existing directors), which is then connected to the core switches with 24 additional edge-to-core links; this is matched with 24 additional 16G target ports. The fact that this scale-out is not disruptive to the existing infrastructure is extremely important. In any of the scale-out or scale-up cases there is minimal impact, if any, on the existing chassis layout, data path, cabling, throughput, or latency. For example, if a customer doesn’t want to string additional cables between the core and edge directors, they can upgrade to higher-speed cards (32G FC, or 40G FCoE with BiDi optics) and double the bandwidth on the existing cable plant.
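As a quick sanity check on this math, here is a minimal back-of-the-envelope sketch in Python. It is illustrative only, not a Cisco sizing tool; the helper names are mine, and the port counts and speeds are simply the figures from the example above.

```python
import math

def edge_bandwidth_gbps(port_groups):
    """Total host-facing bandwidth from (port_count, speed_gbps) pairs."""
    return sum(count * speed for count, speed in port_groups)

def isl_ports_needed(edge_gbps, oversub_ratio, isl_speed_gbps):
    """Whole number of ISL ports needed to honor the oversubscription ratio."""
    return math.ceil(edge_gbps / oversub_ratio / isl_speed_gbps)

# Initial design: 768 hosts at 8G plus 640 hosts at 16G across the four edge directors.
edge = edge_bandwidth_gbps([(768, 8), (640, 16)])   # 16384 Gbps, roughly 16 Tbps
print(isl_ports_needed(edge, 8, 16))                # 128 core-facing ports at 16G = 2 Tbps

# Expansion: 188 new 16G host ports on the added edge director.
print(isl_ports_needed(188 * 16, 8, 16))            # 24 additional edge-to-core links
```

The same calculation also accounts for the 24 additional 16G target ports, since the core-to-target side follows the identical ratio.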
Let’s look at another example, where a customer wants to scale up (that is, increase the performance of the connections). We’ll use an edge-core-edge design this time. There are 6,144 hosts running at 8 Gbps distributed over 10 edge MDS 9710s, for a total of roughly 49 Tbps of edge bandwidth. Let’s assume this data center uses an oversubscription ratio of 16:1 from the edge into the core; to satisfy that requirement, the administrator designed it with two core switches providing 192 16G ports, about 3 Tbps, between them. Assume that in the initial design the customer connected 768 storage ports running at 8G.
A few years down the road, the customer may want to add 6,144 more 8G ports while keeping the same oversubscription ratio. This has to happen in a non-disruptive manner, without any performance degradation on the existing infrastructure (in either throughput or latency) and without constraints on protocol, optics, or connectivity. In this scenario the host edge bandwidth doubles to roughly 98 Tbps, so the required edge-to-core bandwidth grows to 6 Tbps. The data center admin has multiple options for getting there: add more 16G ports (192 more, to be precise), preserve the cabling and use 32G connectivity for the host-edge-to-core and core-to-target-edge links on the same chassis, or just as easily use 40G FCoE to meet the bandwidth needs in the core of the network, all without any forklift upgrade.
Alternatively, the customer may want to upgrade the hosts to 16G connectivity while following the same oversubscription ratio. With 16G hosts, the edge bandwidth again rises to roughly 98 Tbps, and the data center administrator has the same flexibility regarding protocol, cabling, and speeds.
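Here is the same style of quick check for the scale-up case. Again, this is just illustrative arithmetic reproducing the figures above under the stated 16:1 ratio, not a sizing tool.

```python
import math

def core_gbps(host_ports, host_speed_gbps, oversub_ratio):
    """Edge-to-core bandwidth implied by the host edge and the oversubscription ratio."""
    return host_ports * host_speed_gbps / oversub_ratio

initial    = core_gbps(6144, 8, 16)    # 3072 Gbps: about 3 Tbps of core bandwidth
scaled_out = core_gbps(12288, 8, 16)   # 6144 Gbps: about 6 Tbps after adding 6,144 hosts
scaled_up  = core_gbps(6144, 16, 16)   # also about 6 Tbps if hosts move from 8G to 16G

# Extra 16G core ports needed if the admin adds ports rather than raising link speeds.
print(math.ceil((scaled_out - initial) / 16))   # 192
```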
For either option the disruption is minimal. In real life there will be a mix of requirements on the same fabric, some scale-out and some scale-up; in those circumstances data center admins have the same flexibility and options. With a chassis life of more than a decade, customers can upgrade to higher speeds when they need to, without disruption and with maximum flexibility. The figure below shows how easily customers can scale up or scale out.
As these examples show, the Cisco MDS solution gives customers the ability to scale up or scale out in a flexible, non-disruptive way.
“Good design doesn’t date. Bad design does.”
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, availability, best practices, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, MDS design, nexus, NX-OS, reliability, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization
Data centers are becoming increasingly smart, intelligent, and elastic. With advances in cloud and virtualization technologies, customers demand dynamic workload management and efficient, optimal use of their resources. At the same time, the configuration and administration of data center solutions is complex, and is only becoming more so.
With these requirements and architectures in mind, we have built an industry-first solution called Remote Integrated Service Engine (RISE). RISE is a technology that simplifies the provisioning and out-of-box management of service appliances such as load balancers, firewalls, and network analysis modules. It makes data center and campus networks dynamic, flexible, and easy to configure and maintain.
RISE can dynamically provision network resources for any type of service appliance, in both physical and virtual form factors. External appliances can now operate as integrated service modules with the Nexus series of switches without burning a slot in the switch. This technology provides robust application delivery capabilities that accelerate application performance manifold.
RISE is supported on Nexus 7000 Series switches with services such as Citrix NetScaler MPX, VPX, and SDX and Cisco Prime NAM, with many more in the pipeline.
Advantages & Features
- Simplified out-of-box experience: reduces the administrator’s manual configuration steps from 30 to 8
- Supported with Citrix NetScaler MPX, SDX, VPX, and Nexus 1KV with VPX
- Supported with Cisco Prime Network Analysis Module
- Automatic Policy-Based Routing: eliminates the need for SNAT or manual PBR
- Direct and indirect attach modes of integration
- Show module support for RISE
- Attach module support for RISE
- Auto Attach: zero-touch configuration of RISE
- Health monitoring of the appliance
- Appliance HA and vPC supported
- Nexus 5K/6K support (EFT available)
- IPv6 support (EFT available)
- DCNM support
- Order-of-magnitude OPEX savings: reduced configuration effort and ease of deployment
- Order-of-magnitude CAPEX savings: wiring, power, rack space, and cost
For more information or to schedule an EFT or POC, contact us at firstname.lastname@example.org.
RISE press release in the Wall Street Journal: http://online.wsj.com/article/PR-CO-20140408-905573.html
RISE At A Glance white paper: http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/at-a-glance-c45-731306.pdf
RISE Video at Interop: https://www.youtube.com/watch?v=1HQkew4EE2g
Cisco RISE page: www.cisco.com/go/rise
Gartner blog on RISE: “Cisco and Citrix RISE to the Occasion”: http://blogs.gartner.com/andrew-lerner/2014/03/31/cisco-and-citrix-rise-to-the-adc-occasion/
Tags: 7000, Cisco, Cisco Nexus Switches, Cisco Prime NAM, Citrix NetScaler, Citrix NetScaler VPX, cloud, data center, innovation, nexus, Nexus 7000, partner, RISE, virtualization
Note: This is the third of a three-part series on Next Generation Data Center Design with MDS 9700; learn how customers can deploy scalable SAN networks that allow them to Scale Up or Scale Out in a non-disruptive way. [Part 1 | Part 2]
This week has been exciting: I had the opportunity to sit at a round table with some of Cisco’s largest customers for an open-ended architecture discussion and their take on the past, present, and future. More on that some other time; let’s pick up the last critical aspect of high-performance data center design, namely flexibility. Customers need flexibility to adapt to changing requirements over time, as well as to support the diverse requirements of their users. Flexibility is not just about protocol, although protocol is a very important aspect; it is also about making sure customers have the choice to design, grow, and adapt their data center according to their needs. For example, if customers want to exploit the time-to-market advantage and ubiquity of Ethernet, they can adopt FCoE.
Moreover, flexibility has to be complemented by seamless integration, where customers can not only mix and match architectures, protocols, and speeds, but also evolve from one to another over time with minimal disruption and without forklift upgrades. Investment protection of more than a decade on Cisco director switches allows customers to move to higher speeds or adopt new protocols using the existing chassis and fabric cards. Finally, any solution should allow scaling over time with minimal disruption and a common management model. For example, on the MDS 9710 or MDS 9706, customers can choose 2/4/8G FC, 4/8/16G FC, 10G FC, or 10G FCoE at each hop.
Let’s review each aspect of flexibility in turn.
The Cisco SAN product family is designed for architectural flexibility, from the smallest to the largest customers and everything in between. Customers can grow from 12 16G ports to 48 ports on a single MDS 9148S. They can grow from 48 16G line-rate ports to 192 on the MDS 9706, and up to 384 line-rate ports on the MDS 9710. Finally, seamless FC and FCoE capability allows customers to use these directors as edge or core switches. With industry-leading scalability numbers, customers can scale up or scale out as their needs dictate. Two examples show how customers can use director-class switches (9513, 9506, 9710, or 9706) in end-of-row designs. Similarly, customers can build top-of-rack designs using the fixed-configuration Nexus family or the MDS 9148S.
If they want to continue with FC for the foreseeable future, or have a sizable FC infrastructure they want to leverage (while keeping the option to move to FCoE), then MDS serves their needs. Similarly, they can build edge-core designs, edge-core-edge designs, or even collapsed cores if so desired.
If customers need a converged switch, then the Nexus 2K, 5K, and 6K provide the flexibility to collapse the two networks and simplify management, as shown in the picture below.
Customers can mix and match FC speeds (2/4/8G and 4/8/16G) on the latest MDS 9148S and the MDS 9700 product family. With all the major optics supported, customers can pick and choose optics for everything from the shortest distances to long-distance CWDM and DWDM solutions, in addition to SW, LW, and ER choices. The MDS 9700 also supports 10GE optics carrying 10G FC traffic, for ease of implementing 10G DWDM solutions over ubiquitous 10GE circuits.
FC is the dominant protocol in the data center, but at the same time many customers are adopting FCoE to improve ROI, simplify the network, or simply to gain higher speeds and agility. Irrespective of the needs and timeline, the MDS solution allows customers to adopt FCoE today or down the road, without forklift upgrades on existing MDS 9700 platforms and while leveraging their existing FC install base.
The diagram above shows how customers can collapse the LAN and SAN networks at the edge into one network. The advantages of FEX include reduced TCO and simplified operations: the parent switch provides a single point of management and policy enforcement, and plug-and-play management includes auto-configuration.
As another example of making transitions less disruptive for customers, Cisco supports BiDi optics on the Nexus product family. These allow customers to use the same OM2, OM3, and OM4 cable plant for 40G FCoE connectivity, without having to rip and replace the cabling.
Customers who are not ready to converge their networks but want faster time to market, higher performance, and Ethernet economies of scale can run separate LAN and SAN networks and use FCoE for the dedicated SAN.
The breadth of the Cisco product portfolio means customers have maximum flexibility to tune the architecture precisely to their needs. The portfolio is also tightly integrated: all the SAN switches run the same NX-OS, and DCNM provides seamless manageability across LAN, SAN, and converged infrastructure, all the way to the Fabric Interconnects on UCS.
From the last three blogs, let’s quickly capture the unique characteristics of the MDS 9700 that allow for high-performance, scalable data center design:
24 Tbps switching capacity and line-rate 16G FC ports, with no oversubscription, local switching, or bandwidth allocation.
Redundancy for every critical component in the chassis, including the fabric cards. Data resiliency with multiple levels of CRC checks and Forward Error Correction, giving smaller failure domains.
In the next few days, let’s put this all together and see how customers can deploy scalable networks that allow them to scale up or scale out in a non-disruptive way.
To learn more about the MDS 9148S, please join us for a webinar.
“In business, words are words; explanations are explanations, promises are promises, but only performance is reality.”
Harold S. Geneen
Tags: 16 Gigabit, 16G FC, 16Gb, 16Gb Fibre Channel, 192 Port, 9148S, 9706, 9710, architecture, availability, best practices, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, MDS design, nexus, NX-OS, reliability, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization
As the breadth and depth of the ACI solution continues to grow, so does customer interest. Many customers who have invested in, and continue to invest in, the Nexus 2000-7000 switches find the ACI vision very compelling. So, this leads to a logical question regarding how an existing Nexus 2000-7000 fabric will integrate with an ACI fabric.
In short, customers can leverage current Nexus products and add ACI capabilities to their data centers in an incremental manner. Integrating ACI into an existing Nexus environment will not require replacement of existing Nexus switches. The benefits of ACI policy can be extended to apps on both physical and virtual servers within the existing Nexus fabric. This can be achieved as follows (double click on the graphic below to launch the 3+ minute presentation):
In this scenario, the existing Nexus fabric serves as an optimized transport for an ACI overlay solution. However, this solution is very different from other industry overlay solutions: the ACI overlay provides integrated, embedded support for both physical and virtual servers and allows use of existing L4-7 infrastructure, all while providing the automation of the ACI policy model.
If you’d like to learn more, there is a summary, as well as a white paper available. There is also this video whiteboard session that covers a subset of the elements mentioned above.
Tags: ACI, cloud, data center, network, networking, nexus, technology