Cisco Blogs


Cisco Blog > Data Center and Cloud

New Cisco APIC Software allows stretched ACI Fabric across long distances

In the world of Cisco ACI, there is never a shortage of excitement and action. Today, we are pleased to bring to your attention news about the latest Cisco APIC software release. If you wonder what’s hot off the press in APIC software release 1.0(3f) for Nexus 9000 series switches in ACI mode, there are quite a few new features.

The Stretched Fabric feature captures the headlines. For quite some time now, customers have been asking for an ACI fabric that can stretch across data centers and over long distances. The new software allows the leaf and spine switches that make up a fabric to be located up to 30 km apart. It also removes the requirement that every leaf switch connect to every spine switch. Let us take a closer look at the stretched fabric feature.

ACI Stretched Fabric Topology

Stretched ACI fabric is a single fabric: a partially meshed design that connects ACI leaf and spine switches distributed across multiple locations. Typically, an ACI fabric implementation is a single site where a full mesh design connects each leaf switch to each spine switch in the fabric. This yields the best throughput and convergence. In multi-site scenarios, full mesh connectivity may not be possible or may be too costly. Multiple sites, buildings, and rooms can span distances that are not serviceable by enough fiber connections, or connecting each leaf switch to each spine switch across the sites may be too costly. The diagram below illustrates the stretched fabric architecture.

Transit Leaf Switch Guidelines

Transit leaf refers to the leaf switches that provide connectivity between the two sites. Transit leaf switches connect to spine switches on both sites. There are no special requirements and no additional configurations required for transit leaf switches.

Provision Transit and Border Leaf Functions on Separate Switches

The key benefits of stretched fabric include workload portability and VM mobility. The stretched ACI fabric behaves the same way as a regular ACI fabric, supporting full VMM integration. For example, one VMware vCenter operates across the stretched ACI fabric sites. The ESXi hosts from both sites are managed by the same vCenter and Distributed Virtual Switch (DVS), which are stretched between the two sites.

The ACI switch and APIC  software recover from various failure scenarios. Check out the failover scenario analysis for details.

Additional resources

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/release/notes/aci_nxos_rn_1103.html

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_kb-aci-stretched-fabric.html

www.cisco.com/go/aci

 


MP-BGP eVPN control plane for VXLAN – SDN is growing up

As developers, we are all proud parents of our products. Much like our own children, we see them born, we care for and feed them, and we watch them carefully through their unstable early years (we do not go out much). They become more stable over time, and then something happens: they grow up and need to interact with others. This could describe some of the early customer experiences with first-generation SDN LAN Emulation technologies.

Cisco Systems’ introduction and support of the Multi-Protocol BGP eVPN control plane for VXLAN (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) is an indication that the SDN industry is growing up: leveraging standards-track protocols and enabling SDN to scale and interact with others. This is far more significant to the SDN industry than one can read in a single press release, and we will expand on its relevance in this blog.

Let’s start with some basic understanding and a bit of SDN history.

Encapsulating traffic into overlays or tunnels is not a new technology; it has been supported for many years. RFC 1701 (https://tools.ietf.org/html/rfc1701) describes GRE encapsulation and was written in 1994. Anyone who uses a VPN also uses encapsulation, such as IPsec, so nothing new there. What is new are the SDN controller applications: how they enable logical network functions and support centralized automation of the infrastructure for data center networks. I will not go into all of the use cases for SDN overlays, as you can find those readily by speaking to your vendor or searching the web.
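To make the encapsulation concrete, here is a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348, the encapsulation discussed throughout this post. The VNI value is an arbitrary example; a real VTEP would prepend outer Ethernet/IP/UDP headers (UDP destination port 4789) in front of this.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    The flags byte 0x08 sets the I bit, marking the 24-bit VNI field
    as valid; all reserved fields are zero.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit value")
    # flags (1 B) + reserved (3 B) + VNI (3 B) + reserved (1 B)
    return struct.pack("!B3s3sB", 0x08, b"\x00\x00\x00",
                       vni.to_bytes(3, "big"), 0)

hdr = vxlan_header(5000)
assert len(hdr) == 8
assert hdr[0] == 0x08
assert int.from_bytes(hdr[4:7], "big") == 5000
```

Everything an SDN controller does on top of this (address learning, tunnel setup, policy) is about deciding which VTEP this header gets sent to; the encapsulation itself is this simple.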

There are multiple controller architectures available for SDN. I will simply characterize them in three buckets and two additional qualifiers: OpenFlow, Integrated, and Decoupled are the three buckets; SDN LAN Emulation and Policy-based are the two qualifiers. Much of the confusion for customers today is that vendors are still debating, and attempting to monetize, “their” method of SDN.

There are key distinctions between the two qualifiers, SDN LAN Emulation and Policy-based. SDN LAN Emulation controllers reproduce properties of the layer 2 and layer 3 networks in the overlay, including address learning and distribution, leveraging x86 servers to emulate LAN functions; the overlay termination end points map logical network destinations to physical next hops in the overlay.

Policy-based controllers use fewer x86 servers by mapping policy at the physical or virtual switch. They benefit from the integration of the overlay into vSwitches, merchant, and custom ASIC switches in an open and cooperative manner, which eliminates the need for LAN Emulation x86 components and provides more scale with far fewer components than SDN LAN Emulation models.

Five to six years ago, the SDN industry started with controller applications providing a software function described as the ability to reproduce network functions from the physical network in a logical network, and to overlay that logical network on top of the physical infrastructure. I refer to this reproduction as SDN LAN Emulation, as it has similarities to ATM LAN Emulation (https://www.broadband-forum.org/ftp/pub/approved-specs/af-lane-0021.000.pdf).

This is similar to how virtualization evolved in compute: it started with software only, followed by Intel introducing VT and AMD introducing AMD-V, because virtualization worked better when it cooperated with hardware. In the early days of SDN LAN Emulation controllers, none of the overlay or gateway functions existed in hardware; today, 70 to 90 percent of the SDN LAN Emulation controller use cases are supported in every merchant ASIC from Broadcom.

SDN controllers perform three basic operations: they run the SDN application, often described as a distributed computing application; they expose a northbound API for orchestration; and they expose a southbound API for programming physical and virtual overlay termination end points. The overlay termination end points are referred to here as VTEPs (VXLAN Tunnel End Points), as VXLAN is the most common encapsulation and end point discussed for SDN today.

This is a basic but fair characterization of SDN controllers, irrespective of whether they are integrated, decoupled, LAN Emulation, or Policy-based. Integrated controllers provide the SDN controller running the application, the northbound API, the southbound API, and the VTEP. Decoupled controllers do all of the items mentioned above, but they are meant to support the integration of separate components from third-party vendors in each of the aforementioned categories.

Examples of integrated controllers are VMware NSX and Cisco ACI. In each of these implementations, the SDN controller application, the northbound and southbound APIs, and either a physical or virtual VTEP are provided by the same vendor.

VMware NSX is an SDN LAN Emulation controller that integrates with the NSX vSwitch VTEP provided by VMware for vSphere. Today VMware has a multi-hypervisor product that enables the NSX multi-hypervisor controller, with a VMware-supplied version of Open vSwitch, to speak with Xen and KVM hypervisors (you must get VMware’s version of OVS). VMware tightly controls the vSwitch APIs for VTEPs in the vSphere kernel, unlike Red Hat, XenServer, and Microsoft. VMware leverages the informational RFC OVSDB (https://tools.ietf.org/html/rfc7047) to integrate with some vSwitches and third-party hardware VTEPs.

Cisco Systems Application Centric Infrastructure (ACI) is a policy-based controller architecture with the Application Policy Infrastructure Controller (APIC), northbound and southbound APIs, and physical and virtual VTEPs. Cisco works with open hypervisor vSwitches such as OVS from Xen and KVM, Hyper-V, VMware VDS, VMware VSS, the Cisco Systems Application Virtual Switch (AVS), and the Cisco Nexus 1000v, as well as third-party hardware VTEP vendors and virtual and physical layer 4-7 appliance vendors. Each integrates the OpFlex control protocol (outside of VMware-provided vSwitches; http://tools.ietf.org/id/draft-smith-opflex-01.txt) as a southbound API and distributed control system leveraging a declarative policy model. The northbound and southbound APIs are fully published by Cisco with ACI. Cisco-provided VTEPs, both physical and virtual, also support or integrate directly with Multi-Protocol BGP EVPN as a control plane for VXLAN.

Multi-Protocol BGP EVPN as a control plane for VXLAN is a standards-track, distributed control plane offering a significant shift in customers’ ability to build and interconnect SDN overlay networks, while removing the need to run or configure multicast routing in the physical network.

A little more background is required to understand why Multi-protocol BGP eVPN as a control plane for VXLAN is so significant, so please bear with me a few more paragraphs, as the point is coming.

Various SDN controllers, including VMware NSX, leverage the informational RFC OVSDB (https://tools.ietf.org/html/rfc7047). OVSDB is a management protocol supporting programmability between an SDN controller and a vSwitch or hardware VTEP, providing configuration such as termination of tunnels in an overlay network. The OVSDB VTEP.5 schema is shown below:

VTEP.5 Schema – http://openvswitch.org/docs/vtep.5.pdf
Table — Purpose
Global — Top-level configuration
Manager — OVSDB management connection
Physical_Switch — A physical switch
Physical_Port — A port within a physical switch
Logical_Binding_Stats — Statistics for a VLAN on a physical port bound to a logical network
Logical_Switch — A layer−2 domain
Ucast_Macs_Local — Unicast MACs (local)
Ucast_Macs_Remote — Unicast MACs (remote)
Mcast_Macs_Local — Multicast MACs (local)
Mcast_Macs_Remote — Multicast MACs (remote)
Logical_Router — A logical L3 router.
Physical_Locator_Set — Physical_Locator_Set configuration
Physical_Locator — Physical_Locator configuration
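In practice, a controller drives these tables over JSON-RPC. Below is a minimal Python sketch of the kind of OVSDB (RFC 7047) "transact" message an SDN controller might send to push a remote MAC into a VTEP's Ucast_Macs_Remote table; the MAC and UUID values are illustrative placeholders, not from any real deployment.

```python
import json

def remote_mac_insert(mac: str, logical_switch_uuid: str, locator_uuid: str) -> str:
    """Build an OVSDB JSON-RPC 'transact' message (RFC 7047) that inserts
    one row into the Ucast_Macs_Remote table of the hardware_vtep database,
    telling a VTEP where to reach a remote unicast MAC in the overlay."""
    op = {
        "op": "insert",
        "table": "Ucast_Macs_Remote",
        "row": {
            "MAC": mac,
            # Column values that reference other tables are written as
            # ["uuid", <uuid>] pairs in OVSDB's JSON notation.
            "logical_switch": ["uuid", logical_switch_uuid],
            "locator": ["uuid", locator_uuid],
        },
    }
    # "hardware_vtep" is the database name defined by the VTEP schema.
    return json.dumps({"method": "transact", "params": ["hardware_vtep", op], "id": 0})

msg = json.loads(remote_mac_insert("00:11:22:33:44:55",
                                   "ls-uuid-placeholder",
                                   "loc-uuid-placeholder"))
assert msg["method"] == "transact"
assert msg["params"][1]["table"] == "Ucast_Macs_Remote"
```

Every remote MAC a workload needs to reach requires the controller to originate a message like this, which is why the controller's x86 capacity sits directly in the address-distribution path.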

Looking at the table above, you quickly realize it represents a limited set of options that will require more interaction between the SDN controller and the VTEP being configured via OVSDB than what is defined in the spec. There are multiple elements in this table, but the primary purpose is to carry layer 2 reachability information in the overlay and communicate it between the controllers and VTEPs. An SDN LAN Emulation controller leveraging OVSDB is involved in address learning and distribution of addresses to the VTEPs. This means that the data path depends on the capacity of the x86 platforms running the controller software and their ability to learn and distribute addresses to the VTEPs, and the VTEPs need to be tightly coupled to the OVSDB spec, leveraging an imperative model. Any feature must be conceptualized in the SDN LAN Emulation environment and then mapped to the data path at the VTEPs doing the forwarding or gateway functions.

This is a major friction point in large SDN installations because the controller dictates the feature velocity and scale, the VTEP features must be tightly aligned with this model, and any feature changes are limited by the development of this specification, which is an informational draft. VTEPs are primarily ToRs and vSwitches. Any other configuration or innovation must be handled through vendor integration outside of the specification and coordinated across platforms (features such as VTEP or gateway HA, link management, and others). This is where the marketing of “open” meets the reality of vendor dependence and integration.

Vendors that exclusively support OVSDB as the management protocol and schema for third-party hardware VTEPs and for integrating with vSwitches are limited by the scale, openness, and integration implications of this model. Remember, however, that the basic function of OVSDB is to carry layer 2 reachability information in the overlay.

What happens if you want to extend your layer 2 and layer 3 information across a data center interconnect, to WAN routers, or across overlay networks that may have other SDN controllers, leveraging a standards-based protocol?

Enter the MP-BGP EVPN control plane for VXLAN.

The MP-BGP EVPN control plane for VXLAN leverages an industry standards-track control protocol, enabling multi-vendor interoperability and the following key benefits:
  • Control plane learning for end host Layer-2 and Layer-3 reachability information to build more robust and scalable VXLAN overlay networks.
  • Leverages more than a decade of MP-BGP VPN technology to support scalable multi-tenant VXLAN overlay networks.
  • EVPN address family carries both Layer 2 and Layer 3 reachability information. This provides integrated bridging and routing in VXLAN overlay networks.
  • Minimizes network flooding through protocol-driven host MAC/IP route distribution and ARP suppression on the local VTEPs.
  • Provides optimal forwarding for east-west and north-south traffic with the distributed anycast gateway function.
  • Provides VTEP peer discovery and authentication, which mitigates the risk of rogue VTEPs in the VXLAN overlay network.
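As a concrete illustration, a minimal NX-OS-style configuration sketch for enabling the MP-BGP EVPN control plane on a Nexus 9000 VTEP might look like the following. The VLAN, VNI, AS number, interface names, and neighbor address are placeholders; consult the NX-OS VXLAN EVPN configuration guide for a complete, supported configuration.

```
feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn

vlan 100
  vn-segment 10100          ! map VLAN 100 to VNI 10100

interface nve1
  no shutdown
  source-interface loopback0
  host-reachability protocol bgp   ! learn hosts via BGP EVPN, not flood-and-learn
  member vni 10100
    suppress-arp                   ! protocol-driven ARP suppression (benefit above)
    ingress-replication protocol bgp

router bgp 65001
  neighbor 10.0.0.2 remote-as 65001
    address-family l2vpn evpn
      send-community extended

evpn
  vni 10100 l2
    rd auto
    route-target import auto
    route-target export auto
```

Note that `host-reachability protocol bgp` and `ingress-replication protocol bgp` are what remove the need for multicast routing in the underlay, one of the benefits called out earlier.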
Now you are no longer limited to one controller, one vSwitch, and one SDN domain. Leveraging the MP-BGP EVPN control plane for VXLAN enables independent exchanges of layer 2 and layer 3 reachability information across overlays, VXLAN gateways, and DC or WAN devices, and dramatically improves scale, because MP-BGP EVPN is a distributed control plane not limited by the scale implications, or the lock-in control and development, of one schema. Cisco Nexus 9000 with NX-OS, Cisco ACI, and vSwitches all integrate with or directly support the MP-BGP EVPN control plane for VXLAN, expanding the open choices customers have for SDN from Cisco.

So what should you be asking from your vendors? Every VTEP in your network should have the ability to integrate with or support the MP-BGP EVPN control plane for VXLAN, and it should be in every RFP. You should ensure each API is fully published, without third-party vendors being restricted from accessing or integrating with these APIs; this includes vSwitches inside the hypervisor, top-of-rack switches, and layer 4-7 appliances.

In the transformation of traditional IT models toward DevOps and cloud operations, vendors’ willingness to cooperate varies over time. Leveraging standards-track protocols such as the MP-BGP EVPN control plane for VXLAN, and keeping the APIs fully published, ensures that customers are no longer trapped by one vendor’s implementation and can drive their own integration or automation by calling the URI objects delivered through open and published RESTful APIs.
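To ground the point about published RESTful APIs and URI-addressable objects, here is a small Python sketch in the style of Cisco's published APIC REST API, where every managed object is addressable by its distinguished name (DN) under /api/mo/. The hostname, tenant name, and credentials are placeholders, and the helpers below are illustrative, not part of any SDK.

```python
import json

def apic_login_payload(username: str, password: str) -> str:
    """Build the JSON body for APIC's aaaLogin REST call.

    Credentials here are placeholders; a real client would POST this to
    https://<apic>/api/aaaLogin.json and keep the returned session token.
    """
    return json.dumps(
        {"aaaUser": {"attributes": {"name": username, "pwd": password}}}
    )

def object_uri(apic_host: str, dn: str) -> str:
    """Return the REST URI for a managed object, addressed by its DN."""
    return f"https://{apic_host}/api/mo/{dn}.json"

assert "aaaUser" in apic_login_payload("admin", "placeholder-password")
assert object_uri("apic1", "uni/tn-Example") == \
    "https://apic1/api/mo/uni/tn-Example.json"
```

Because the object model and these URIs are published, any orchestration tool, not just the vendor's own, can automate against the controller, which is exactly the lock-in escape hatch the paragraph above describes.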

A Better Way to Private Cloud

Organizations are realizing that without a formal, comprehensive cloud strategy, line-of-business and application architects will continue to sidestep their internal IT organizations and procure solutions on their own; this industry phenomenon, known as “rogue IT,” happens out of necessity. While it helps solve the immediate problem, it brings with it a host of complications: compliance, governance, financial, security, and more.

A formal cloud strategy helps ease cloud service adoption, drives standardization of services and increases the value of IT to the business.  For all these reasons, cloud is now considered a core element in many enterprise IT portfolios.

Yet for all the strategizing and trial steps taken by organizations, only a small number of companies have implemented true private clouds. This is because automation is challenging, though not as challenging as maintaining the manual and siloed methods used to manage the data center today.

People need deeper knowledge about automation. They have to understand the types of automation available. They want clear insight into the short-term and future impact of automation decisions made today so that they can create the right strategy for their business and select the appropriate automation methods and technologies to support their strategy.  Without the right tools and approaches to automation adoption, most organizations experience pain and chaos.

Increase your automation knowledge by attending this upcoming live webcast featuring Dave Bartoletti of Forrester, together with automation and cloud experts from Cisco.

Webcast Title:   A Better Way to Private Cloud

Date:     Tuesday, March 10, 2015

Time: 11 am Eastern/8 am Pacific

Register

What you will learn:

  • How a pragmatic, stepwise adoption of automation accelerates adoption of cloud services within your organization
  • How solutions engineered for hybrid-ready private cloud enable your organization to capture new revenue opportunities with on-demand delivery of applications and their supporting infrastructure
  • How Cisco ONE Enterprise Cloud Suite offers you cloud strategy, automation, and management options

Developers, end users, and customers expect continuous delivery, and automation is the crucial element to making this happen. Join us for this live webcast and hear how Cisco can help your business soar and take advantage of new business opportunities.

The number of attendees is limited so register today.

 


Cisco Voted Top Platinum Sponsor – JDE Summit 2015

The JD Edwards Summit brings together over 700 business partners from around the world for a week of product updates, industry discussions, and intense training. A number of vendors were asked to present solutions specific to this customer base. Cisco and Nimble Storage presented the UCS Mini-based SmartStack solution, whose entry configuration is street priced at $79,500 and supports up to 1,400 concurrent users. Read More »


Getting Started with Cisco Intercloud Fabric and Hybrid Cloud

When I hear “hybrid” I think about cars: those gas-and-electric cars that can switch to whichever power source is needed when it is needed most or makes the most economic sense. The switch is not, or at least should not be, noticeable. Having been in a hybrid car, I’ve experienced the switchover; the interesting thing is that the car itself does not change. The controls are the same; the car steers and moves the same way it did before. I don’t have to learn anything new or change the way I drive to continue to use the car.

That’s the way hybrid cloud should be, whether I’m using the private cloud in my enterprise or IT-managed provider clouds. Whether the workload is completely in the provider cloud, split between the provider cloud and private cloud, or completely in the private cloud, it really should make no difference to the workload… or to me.

How true are those scenarios, though? As soon as part of the workload is in the provider cloud, things need to change. Application admins and network admins surely have already been enlisted to figure out how the workload applications can function in the provider cloud and still interact with the private cloud. What services does the workload need? How does workload security work? How does workload routing work? How does the hybrid cloud environment impact the workload? How many different cloud provider APIs will need to be learned? These are only a few of the considerations; there can be many more.

Read More »
