
Cisco and OpenStack: Juno Release – Part 1

The next stable OpenStack release, code-named “Juno,” is slated for release on October 16, 2014. From improving live upgrades in Nova to enabling easier migration from Nova Network to Neutron, the OpenStack Juno release will address operational challenges in addition to providing many new features and enhancements across all projects.

As indicated in the latest Stackalytics contributor statistics, Cisco contributed to seven different OpenStack projects during the Juno development cycle, including Neutron, Cinder, Nova, Horizon, and Ceilometer. This is up from five projects in the Icehouse release. Cisco also ranks first in the number of completed blueprints in Neutron.

In this blog post, I’ll focus on the Neutron contributions, which make up the major share of Cisco’s contributions in Juno.


Cisco OpenStack Team-led Neutron Community Contributions

An important blueprint that Cisco collaborated on and implemented with the community adds Router Advertisement Daemon (radvd) support for IPv6. With this support, multiple IPv6 configuration modes, including SLAAC and DHCPv6 (both stateful and stateless), are now possible in Neutron. The implementation runs a radvd process in the router namespace to handle IPv6 automatic address configuration.
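As a rough illustration of how this surfaces through the Neutron API, the sketch below creates an IPv6 subnet in SLAAC mode using python-neutronclient. It assumes Juno-era attribute names (ipv6_ra_mode, ipv6_address_mode); the credentials, endpoint, and network ID are placeholders, not values from this post. Setting the two attributes to dhcpv6-stateful or dhcpv6-stateless selects the other modes mentioned above.

```python
# Minimal sketch (placeholder values): an IPv6 subnet in SLAAC mode, so the
# L3 agent spawns radvd in the router namespace to advertise the prefix.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

subnet = neutron.create_subnet({
    'subnet': {
        'network_id': 'NET_ID',        # existing tenant network (placeholder)
        'ip_version': 6,
        'cidr': '2001:db8:1234::/64',
        'ipv6_ra_mode': 'slaac',       # RAs sent by radvd in the router namespace
        'ipv6_address_mode': 'slaac',  # addresses derived via SLAAC
    }
})
print(subnet['subnet']['id'])
```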

To support the distributed routing model introduced by the Distributed Virtual Router (DVR), a Firewall as a Service (FWaaS) blueprint implementation handles firewalling of North-South traffic with DVR. The fix ensures that firewall rules are installed in the appropriate namespaces across the Network and Compute nodes to support a perimeter firewall (North-South). Firewalling of East-West traffic with DVR will be handled in the next development cycle as a distributed firewall use case.
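For context, the firewall itself is defined through the FWaaS API; the sketch below is a minimal, hypothetical example using python-neutronclient and the FWaaS v1 resources (rule, policy, firewall), with all names and values as placeholders. With DVR enabled, the agents then install the resulting rules in the appropriate router namespaces for North-South traffic, as described above.

```python
# Minimal sketch (placeholder values): a perimeter firewall via the FWaaS v1
# API. A rule is grouped into a policy, and the policy is bound to a firewall
# resource; the L3/FWaaS agents then program the router namespaces.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

rule = neutron.create_firewall_rule({
    'firewall_rule': {'name': 'allow-https',
                      'protocol': 'tcp',
                      'destination_port': '443',
                      'action': 'allow'}})['firewall_rule']

policy = neutron.create_firewall_policy({
    'firewall_policy': {'name': 'perimeter-policy',
                        'firewall_rules': [rule['id']]}})['firewall_policy']

fw = neutron.create_firewall({
    'firewall': {'name': 'perimeter-fw',
                 'firewall_policy_id': policy['id']}})
print(fw['firewall']['id'])
```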

Additional capabilities were contributed to the ML2 and services framework to enable better plugin and vendor driver integration, including several blueprint implementations.

Cisco device-specific contributions in Neutron

Cisco added an Application Policy Infrastructure Controller (APIC) ML2 mechanism driver (MD) and a Layer 3 service plugin in the Juno development cycle. The APIC ML2 MD translates Neutron API calls into APIC data model specific requests and achieves tenant Layer 2 isolation through Endpoint Groups (EPGs).

The APIC MD supports dynamic topology discovery using LLDP, which reduces the configuration burden in Neutron and ensures that data stays in sync between Neutron and APIC. Additionally, the Layer 3 APIC service plugin enables configuration of internal and external subnet gateways on routers, using Contracts to enable communication between EPGs as well as to provide external connectivity. The APIC ML2 MD and service plugin have also been made available for the OpenStack Icehouse release. An installation and operation guide for the driver and plugin is available here.

An enterprise-class virtual networking solution using the Cisco Nexus 1000V is enabled in OpenStack with its own core plugin. In addition to providing host-based overlays using VXLAN (in both unicast and multicast modes), it provides Network Profile and Policy Profile extensions for virtual machine policy provisioning.

The Nexus 1000V plugin added support for accepting REST API responses in JSON format from the Virtual Supervisor Module (VSM), as well as a control for enabling Policy Profile visibility across tenants. More information on its features and how it integrates with OpenStack is provided here.

As an alternative to the default Layer 3 service implementations in Neutron, a Cisco router service plugin is now available that delivers Layer 3 services using the Cisco Cloud Services Router (CSR) 1000V.

The Cisco router service plugin introduces the notion of a “hosting device” to bind a Neutron router to a device that implements the router configuration. This provides the flexibility to add virtual as well as physical devices seamlessly into the framework for configuring services. Additionally, a Layer 3+ “configuration agent” is available upstream that interacts with the service plugin and is responsible for configuring the device for routing and advanced services. The configuration agent is multi-service capable, supports configuration of hardware- or software-based Layer 3 service devices via device drivers, and also provides device health monitoring statistics.

The VPN as a Service (VPNaaS) driver using the CSR1000v has been available since the Icehouse release, as a proof-of-concept implementation. The Juno release enhances the CSR1000v VPN driver such that it can be used in a more dynamic, semi-automated manner to establish IPSec site-to-site connections, and paves the way for a fully integrated and dynamic implementation with the Layer 3 router plugin planned for the Kilo development cycle.
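To make the workflow concrete, here is a minimal, hypothetical sketch of driving VPNaaS through python-neutronclient. Whether the reference implementation or the CSR 1000v driver realizes the connection depends on the service provider configured on the Neutron server; the router and subnet IDs, peer addresses, and pre-shared key below are placeholders.

```python
# Minimal sketch (placeholder values): an IPsec site-to-site connection
# built through the Neutron VPNaaS API. The configured service driver on the
# Neutron server decides which backend (reference or CSR 1000v) realizes it.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

ike = neutron.create_ikepolicy(
    {'ikepolicy': {'name': 'ike-pol'}})['ikepolicy']
ipsec = neutron.create_ipsecpolicy(
    {'ipsecpolicy': {'name': 'ipsec-pol'}})['ipsecpolicy']

vpn = neutron.create_vpnservice(
    {'vpnservice': {'name': 'site-vpn',
                    'router_id': 'ROUTER_ID',    # placeholder
                    'subnet_id': 'SUBNET_ID'}})['vpnservice']

conn = neutron.create_ipsec_site_connection(
    {'ipsec_site_connection': {'name': 'to-branch',
                               'vpnservice_id': vpn['id'],
                               'ikepolicy_id': ike['id'],
                               'ipsecpolicy_id': ipsec['id'],
                               'peer_address': '203.0.113.10',
                               'peer_id': '203.0.113.10',
                               'peer_cidrs': ['192.168.10.0/24'],
                               'psk': 'shared-secret'}})
print(conn['ipsec_site_connection']['id'])
```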

Summary

The OpenStack team at Cisco has led, implemented, and successfully merged upstream numerous blueprints for the Neutron Juno release. Some have been critical for the community, while others enable customers to better integrate Cisco networking solutions with OpenStack Networking.

Stay tuned for more information on other project contributions in Juno and on Cisco-led sessions at the Kilo Summit in Paris!

You can also download OpenStack Cisco Validated Designs, white papers, and more at www.cisco.com/go/openstack.


MDS 9700 Scale Out and Scale Up

This is the final part of the High Performance Data Center Design series. We will look at how high performance, high availability, and flexibility allow customers to scale up or scale out over time without any disruption to the existing infrastructure. MDS 9710 capabilities are field proven, with wide adoption and a steep ramp within the first year of its introduction. Some of the customer use cases for the MDS 9710 are detailed here. Furthermore, with industry-first innovations such as VSAN, IVR, FCoE, and Unified Ports introduced over the last 12 years, Cisco has not only established itself as a strong player in the SAN space but also holds the leading market share in SAN.

Before we look at some architecture examples, let’s start with the basic tenets any director-class switch should support when it comes to scalability and future customer needs:

  • The design should be flexible enough to scale up (increase performance) or scale out (add more ports)
  • The process should not disrupt the current installation through recabling, performance impact, or downtime
  • Design principles such as oversubscription ratio, latency, and throughput predictability (for example, from host edge to core) shouldn’t be compromised at the port level or the fabric level

Let’s take a scale-out example, where the customer wants to add more 16G ports down the road. For this example I have used a core-edge design with 4 edge MDS 9710s and 2 core MDS 9710s. There are 768 hosts at 8 Gbps and 640 hosts running at 16 Gbps connected to the 4 edge MDS 9710s, for a total of roughly 16 Tbps of host connectivity. With an 8:1 oversubscription ratio from edge to core, the design requires 2 Tbps of edge-to-core connectivity. The 2 core systems are connected to the edge and to the targets using 128 ports running at 16 Gbps in each direction. The picture below shows the connectivity.

Edge Core Design Day 1
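The sizing above is easy to sanity-check. The short sketch below simply recomputes the host edge bandwidth and the number of 16G edge-to-core links at the stated 8:1 oversubscription; it is an illustrative back-of-the-envelope check, not a design tool.

```python
import math

host_edge = 768 * 8 + 640 * 16            # 16,384 Gbps ~= 16 Tbps of host bandwidth
edge_to_core = host_edge / 8              # 2,048 Gbps ~= 2 Tbps at 8:1 oversubscription
isl_16g = math.ceil(edge_to_core / 16)    # 16G links needed to carry it
print(host_edge, edge_to_core, isl_16g)   # 16384 2048.0 128
```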

Down the road, the data center requires 188 more ports running at 16G. These 188 ports are added to a new edge director (or to open slots in the existing directors), which is then connected to the core switches with 24 additional edge-to-core connections. This is matched with 24 additional 16G target ports. The fact that this expansion is not disruptive to the existing infrastructure is extremely important. In any of the scale-out or scale-up cases there is minimal impact, if any, on the existing chassis layout, data path, cabling, throughput, or latency. As an example, if the customer doesn’t want to string additional cables between the core and edge directors, they can upgrade to higher-speed cards (32G FC or 40G FCoE with BiDi) and get double the bandwidth on the existing cable plant.

Edge Core Design Scale UP
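The incremental numbers for this growth step check out the same way (again purely illustrative):

```python
import math

added_edge = 188 * 16                            # 3,008 Gbps of new host bandwidth
added_isl_16g = math.ceil(added_edge / 8 / 16)   # extra 16G ISLs at 8:1 oversubscription
print(added_edge, added_isl_16g)                 # 3008 24
```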

Let’s look at another example, where the customer wants to scale up (i.e., increase the performance of the connections). Let’s use an edge-core-edge design for this example. There are 6,144 hosts running at 8 Gbps distributed over 10 edge MDS 9710s, resulting in a total of roughly 49 Tbps of host edge bandwidth. Let’s assume that this data center uses an oversubscription ratio of 16:1 from the edge into the core. To satisfy that requirement, the administrator designed the data center with 2 core switches of 192 ports each, running at about 3 Tbps. Let’s assume that at initial design the customer connected 768 storage ports running at 8G.

Edge Core Design Day1

 

A few years down the road, the customer may want to add an additional 6,144 8G ports and keep the same oversubscription ratios. This has to be implemented in a non-disruptive manner, without any performance degradation on the existing infrastructure (either in throughput or in latency) and without any constraints regarding protocol, optics, or connectivity. In this scenario the host edge connectivity doubles, the host edge bandwidth increases to roughly 98 Tbps, and the required core bandwidth grows to about 6 Tbps. The data center admin has multiple options for providing that additional core bandwidth: add more 16G ports (192 more, to be precise), or preserve the cabling and use 32G connectivity for the host-edge-to-core and core-to-target-edge links on the same chassis. The admin can just as easily use 40G FCoE at that time to meet the bandwidth needs in the core of the network without any forklift upgrade.

Edge Core Edge Design Scale Out
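A quick, illustrative recalculation of the edge-core-edge numbers, before and after the host count doubles:

```python
day1_edge = 6144 * 8                       # 49,152 Gbps ~= 49 Tbps of host bandwidth
day1_core = day1_edge / 16                 # ~3 Tbps edge-to-core at 16:1

grown_edge = 2 * 6144 * 8                  # ~98 Tbps after doubling the 8G hosts
grown_core = grown_edge / 16               # ~6 Tbps edge-to-core
extra_16g_ports = (grown_core - day1_core) / 16   # additional 16G core ports needed
print(day1_core, grown_core, extra_16g_ports)     # 3072.0 6144.0 192.0
```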

Alternatively, the customer may want to upgrade the hosts to 16G connectivity and keep the same oversubscription ratios. With 16G connectivity the host edge bandwidth again increases to roughly 98 Tbps, and the data center administrator has the same flexibility regarding protocol, cabling, and speeds.

Edge Core Edge Example 1 ScaleUP

For either option the disruption is minimal. In real life there will be a mix of requirements on the same fabric, some scale-out and some scale-up. In those circumstances data center admins have the same flexibility and options. With a chassis life of more than a decade, customers can upgrade to higher speeds when they need to, without disruption and with maximum flexibility. The figure below shows how easily customers can scale up or scale out.

 

Edge Core Edge Design Scale Out Scale Up

 

As these examples show, the Cisco MDS solution gives customers the ability to scale up or scale out in a flexible, non-disruptive way.

“Good design doesn’t date. Bad design does.”
Paul Rand

 


UCS M-Series System Link Technology: The converged infrastructure story.

It almost feels like this blog entry should start with “Once upon a time…”, because it captures the journey of a young, emerging technology and the powerful infrastructure tool it has become. The Cisco UCS journey starts with the tale of Unified Fabric and the Converged Network Adapter (CNA).

Most people think of Unified Fabric as the ability to put both Fibre Channel and Ethernet on the same wire between the server and the Fabric Interconnect or upstream FCoE switches. That is part of the story, but that part is as simple as putting a Fibre Channel frame inside an Ethernet frame. What is the magic that makes this happen at the server level? Doesn’t FCoE imply that the operating system itself would have to know how to present a Fibre Channel device in software and then encapsulate and send the frame across the Ethernet port? Possibly, but that would require OS FCoE software support, which would add CPU overhead and require end users to qualify new software drivers and compare their performance against existing hardware FC HBAs.

For UCS, the success of converged infrastructure was due in great part to the very first Converged Network Adapters that were released. These adapters presented existing PCIe Fibre Channel and Ethernet endpoints to the operating system, requiring no new drivers or new qualification from the perspective of the operating system and users. However, at the heart of these adapters was a Cisco ASIC that provided two key functions:

1.)  Present the physical functions for existing PCIe devices to the operating system without the penalty of PCIe switching.

2.)  Encapsulate Fibre Channel frames into Ethernet frames as they are sent to the northbound switch.

Converged Network Adapter

It is the second function that we often focus on, because that’s the cool networking portion that many of us at Cisco like to talk about. But how exactly do we convince the operating system that it is communicating with a dual-port Intel Ethernet NIC and a dual-port 4Gb QLogic Fibre Channel HBA? These are the exact same drivers used for the actual Intel and QLogic cards, so there has to be some magic there, right?

Well, yes and no. Let’s start with the no. Presenting different physical functions (PCIe endpoints) on a physical PCIe card is nothing new; it’s as simple as putting a PCIe switch between the bus and the endpoints. But like all switching technologies, a PCIe switch incurs latency, and it cannot encapsulate an FC frame into an Ethernet frame. So that’s where the magic comes into play. The original Converged Network Adapter contained a Cisco ASIC that sits on the PCIe bus between the Intel and QLogic physical functions. From the operating system’s perspective the ASIC “looks” like a PCIe switch providing direct access to the Ethernet and Fibre Channel endpoints, but in reality it can move I/O in and out of the physical functions without incurring the latency of a switch. The ASIC also provides a mechanism for encapsulating the FC frames into a specific Ethernet frame type to provide FCoE connectivity upstream.

The pure beauty of this ASIC is that we have evolved it from the CNA to the Virtual Interface Card (VIC). Traditional CNAs have a limited number of Ethernet and FC ports available to the system (two of each), based on the chipsets installed on the card. The Cisco VIC instead allows a variety of vNICs and vHBAs to be created on the card. The VIC not only virtualizes the PCIe switch, it virtualizes the I/O endpoint.

Cisco Virtual Interface Card

So in essence, what we have created with the Cisco ASIC that drives the VIC is a device that provides a standard PCIe mechanism to present an end device directly to the operating system. This ASIC also provides a hardware mechanism designed to receive native I/O from the operating system and to encapsulate and translate it where necessary without any OS stack dependencies, for example encapsulating native Fibre Channel into Ethernet.

At the heart of the UCS M-Series servers is System Link Technology. It is this specific component that provides the compute nodes with access to the shared I/O resources in the chassis. System Link Technology is the third generation of the technology behind the VIC and the fourth generation of Unified Fabric technology within the construct of Unified Computing. The key function of System Link Technology is the creation of a new PCIe physical function called the SCSI NIC (sNIC), which presents a virtual storage controller to the operating system and maps drive resources to a specific service profile within Cisco UCS.

System Link Technology

It is this innovative technology that provides a mechanism for each compute node within UCS M-Series to have its own specific virtual drive carved out of the available physical drives within the chassis. This is accomplished using standard PCIe, not MR-IOV, so it does not require the operating system to have any special knowledge of a change in the PCIe frame format.

For a more detailed look at System Link Technology in the M-Series, check out the following white paper.

The important thing to remember is that the hardware infrastructure is only part of the overall architectural design for UCS M-Series. The other key component of UCS is the ability to manage the virtual instantiations of the system components. In the next segment on UCS M-Series, Mahesh will discuss how UCS Manager rounds out the architectural design.


#EngineersUnplugged S6|Ep11: #UCSGrandSlam from Admin View

October 1, 2014 at 10:35 am PST

In this week’s episode of Engineers Unplugged, Roving Reporter Lauren Malhoit (@malhoit) talks to Adam Eckerle (@eck79) about #UCSGrandSlam and UCS as a platform. Great view of the tech from the admin perspective: “UCS is built from the ground up with virtualization in mind.” Great practical episode for anyone exploring UCS!

Much like UCS, the Unicorn Challenge is a platform for creativity.

**Want to be Internet Famous? Join us for our next shoot: VMworld Barcelona. Tweet me @CommsNinja!**

This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

Go behind the scenes by liking Engineers Unplugged on Facebook.


Automated PBR and Route Health Injection with RISE

RISE is an innovative architecture that logically integrates an external service appliance, such as a Citrix NetScaler or the Cisco Prime NAM, so that it appears and operates as a service module within a Nexus 7000 Series switch.
RISE integration with the Citrix NetScaler provides features such as Route Health Injection (RHI) and Automated PBR (APBR), which make it easy to configure the redirection of client and server traffic to the load balancer.

 

Automated Policy Based Routing (APBR)
Existing solutions for steering server return traffic back to the load balancer are Source NAT and PBR. Source NAT causes the applications (servers) to lose visibility of the client IP, burns through the IP address pool reserved for the Source NAT configuration, and requires manual configuration. Policy Based Routing (PBR) requires complex initial configuration from the user (susceptible to human error) as well as configuration updates whenever a server is added or removed, which becomes cumbersome as the number of network devices and servers/VIPs grows.
  • Auto PBR eliminates the need for Source NAT or manual PBR configuration in a one-arm load balancer design
  • It preserves client IP visibility for applications/servers without the need for manual PBR
  • The APBR feature allows the NetScaler to program policies on the N7K server-facing interfaces to redirect return traffic to the NetScaler appliance set up in one-arm mode
  • The NetScaler passes information about the real servers to the N7K via the RISE channel, and a policy is applied on the N7K interface through which each real server can best be reached
  • Since it is desirable to change the source IP to the VIP for the return traffic, the APBR policies redirect traffic to the NetScaler IP without modifying the packet
  • The NetScaler appliance then directs the packet to the client, changing the source IP to the VIP
Please reach out to nxos-rise@cisco.com for more information on RISE features.
Resources

RISE At A Glance white paper: http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/at-a-glance-c45-731306.pdf

RISE announcement blog: http://blogs.cisco.com/datacenter/rise

RISE Video at Interop: https://www.youtube.com/watch?v=1HQkew4EE2g

Cisco RISE page: www.cisco.com/go/rise

 
