
IT Business Leaders Open Up at ONUG

This week, May 13-14, ONUG, or the Open Networking User Group, will meet at Columbia University’s Alfred Lerner Hall in New York City, NY.


ONUG is the leading user-driven community of IT Business Leaders, CTOs, and network architects, including those implementing SDN, who are focused on leveraging the power of their engineering and procurement to influence the pace and deployment of open networking solutions.


If you are planning on attending, I’d like to provide you with a quick overview of the activities Cisco will be participating in at the Open Networking User Group.

On conference day 1, May 13, the SD-WAN and Virtual Network Overlay Working Groups will present their top ten findings and share their work.

Check out the SD-WAN Working Group Update with Cisco speaker, Steve Wood, Principal Engineer, Enterprise Routing, from 10:00-10:45 am.

Then, during the Technology Showcase Break, meet Sumanth Kakaraparthi, Product Manager, Enterprise Routing, and Bill Reilly, Technical Marketing Engineer, Enterprise Routing, who will deliver an IWAN/SD-WAN demo at the Cisco demo station.

Next, attend the Virtual Networks/Overlays Working Group Update with Cisco speaker, Mike Cohen, Director of Product Management, Insieme Networks, on May 13 from 12:00-12:45 pm.

Following these updates will be a luncheon presentation, “Faster WAN Delivery: Software Defined WAN-as-a-Service,” on May 13 from 1:30-2:30 pm, delivered by Cisco speaker Jeff Reed, VP, Enterprise Infrastructure and Solutions Group. Jeff will be joined by partner speakers Jeff Gray, Glue Networks CEO, and Matt Cook, Forsythe Sr. Director – Network & Workspace Solutions.

From 4:05-5:00 pm, there will be a lively debate on “Closed vs. Open Source Software” moderated by Ernest Lefner, Bank of America, between Charles Giancarlo, Silver Lake, taking the pro-closed position, and Lew Tucker, Cisco VP/CTO for OpenStack, taking the pro-open position. You can carry on the debates yourselves afterwards at the Cocktail Reception from 5:00-7:00 pm.

The next day, on May 14 from 2:45-3:45 pm, there will be a Town Hall Meeting with leaders from Facebook, Ansible, Nuage, vArmour, and our own Mike Dvorkin, Cisco Distinguished Engineer, Insieme Networks, who will all speak on “Will the DevOps Model Deliver in the Enterprise?”

Finally, that evening join us at a Cisco Sponsored After Party from 5:00 – 9:00 pm.

For Further Information

Cisco Intelligent WAN

ONUG Blog – VXLAN Comes of Age with BGP-EVPN

MP-BGP eVPN control plane for VXLAN – SDN is growing up

Cisco Border Gateway Protocol Control Plane for Virtual Extensible LAN

VXLAN Network with MP-BGP EVPN Control Plane

Follow ONUG

LinkedIn Groups Open-Networking-User-Group

Twitter ONUG

OpenNetworkingUserGroup.com


A Summary of Cisco VXLAN Control Planes: Multicast, Unicast, MP-BGP EVPN

With the adoption of overlay networks as the standard deployment model for multi-tenant networks, Layer-2-over-Layer-3 protocols have become favorites among network engineers. One of the Layer-2-over-Layer-3 (or Layer-2-over-UDP) protocols adopted by the industry is VXLAN. Now, as with any other overlay network protocol, its scalability is tied to how well it can handle Broadcast, Unknown unicast, and Multicast (BUM) traffic. That is where the evolution of the VXLAN control plane comes into play.

The standard does not define a “standard” control plane for VXLAN; there are several drafts describing the use of different control planes. The most commonly used VXLAN control plane is multicast. It is implemented and supported by multiple vendors, and it is even natively supported in server operating systems such as the Linux kernel.
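
For example, the multicast-based flood-and-learn mode can be configured directly against the Linux kernel. Here is a minimal sketch using the pyroute2 library; the interface name, VNI, and group address are illustrative, and keyword argument names may vary slightly between pyroute2 versions:

    from pyroute2 import IPRoute

    ipr = IPRoute()
    underlay = ipr.link_lookup(ifname="eth0")[0]   # underlay-facing NIC

    # Create a VXLAN interface that floods BUM traffic to a multicast
    # group in the underlay and learns remote MAC-to-VTEP bindings
    # from the packets it receives back.
    ipr.link(
        "add",
        ifname="vxlan42",
        kind="vxlan",
        vxlan_id=42,               # the VNI
        vxlan_group="239.1.1.1",   # multicast group used for BUM traffic
        vxlan_link=underlay,
        vxlan_port=4789,           # IANA-assigned VXLAN UDP port
    )
    idx = ipr.link_lookup(ifname="vxlan42")[0]
    ipr.link("set", index=idx, state="up")

This mirrors the equivalent iproute2 command: ip link add vxlan42 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789.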

This post tries to summarize the three (3) control planes currently supported by some of the Cisco NX-OS and IOS-XE platforms. My focus is on the Nexus 7k, Nexus 9k, Nexus 1k, and CSR1000v.

Each control plane may have caveats of its own, but those are not covered in this blog entry. Let’s start with some VXLAN definitions:

(1) VXLAN Tunnel Endpoint (VTEP): Maps tenants’ end devices to VXLAN segments and performs VXLAN encapsulation/de-encapsulation.
(2) Virtual Network Identifier (VNI): Identifies a VXLAN segment. It has up to 2^24 IDs, theoretically giving us 16,777,216 segments (valid VNI values are 4096 to 16,777,215). Each segment can transport 802.1Q-encapsulated packets, theoretically giving us 2^12 or 4096 VLANs over a single VNI.
(3) Network Virtualization Endpoint or Network Virtualization Edge (NVE): The overlay interface configured on Cisco devices to define a VTEP.
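
To make the encapsulation concrete, here is a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348, which a VTEP prepends (together with outer UDP/IP/Ethernet headers) to the original Layer-2 frame; the helper name and example VNI are illustrative:

    import struct

    VXLAN_UDP_PORT = 4789   # IANA-assigned destination port (RFC 7348)
    VNI_FLAG = 0x08         # "I" flag: the VNI field is valid

    def vxlan_header(vni: int) -> bytes:
        """Build the 8-byte VXLAN header: 1 byte of flags, 3 reserved
        bytes, a 3-byte VNI, and 1 more reserved byte."""
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI must fit in 24 bits")
        return struct.pack("!II", VNI_FLAG << 24, vni << 8)

    hdr = vxlan_header(10000)   # example VNI
    assert len(hdr) == 8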

VXLAN with Multicast Control Plane

Read More »


VXLAN/EVPN: Standards-Based Overlay with Control-Plane

Given the tremendous interest in VXLAN with the MP-BGP-based EVPN Control-Plane (EVPN for short) at Cisco Live in Milan, I decided to write a “short” technology brief blog post on this topic.

VXLAN (IETF RFC 7348) was designed to solve specific problems that Classical Ethernet has faced for a few decades. By introducing an abstraction through encapsulation, VXLAN has become the de facto standard overlay of choice in the industry. Chief among the advantages provided by VXLAN are the extension of today’s limited VLAN space and the increased scalability it provides for Layer-2 Domains.

Extended Namespace – The available VLAN space from the IEEE 802.1Q encapsulation perspective is limited to a 12-bit field, which provides 4096 VLANs or segments. By encapsulating the original Ethernet frame with a VXLAN header, the newly introduced addressing field offers 24 bits, thereby providing a much larger namespace of up to 16 million Virtual Network Identifiers (VNIs) or segments.

While the VXLAN VNI allows unique identification of a large number of tenant segments, which is especially useful in high-scale multi-tenant deployments, it does not by itself sufficiently address the problems and requirements of large Layer-2 Domains. Nevertheless, significant improvements have been achieved in the following areas:

  • No dependency on Spanning-Tree protocol by leveraging Layer-3 routing protocols
  • Layer-3 routing with Equal Cost Multi-Path (ECMP) allows all available links to be used
  • Scalability, convergence, and resiliency of a Layer-3 network
  • Isolation of Broadcast and Failure Domains

IETF RFC7348 – VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks

Scalable Layer-2 Domains

The abstraction provided by a VXLAN-like overlay does not inherently change the Flood & Learn behavior inherited from Ethernet. In typical VXLAN deployments, BUM (Broadcast, Unknown unicast, and Multicast) traffic is forwarded via Layer-3 multicast in the underlay, which in turn aids the learning process so that subsequent traffic need not be subjected to this “flood” semantic. A control-plane is required to minimize the flood behavior and proactively distribute End-Host information to participating entities (typically called VXLAN Tunnel Endpoints, aka VTEPs) in the same segment – learning.
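
A toy Python model of that flood-and-learn logic at a single VTEP may help; the data structures and names are illustrative, not any platform’s implementation:

    # Inner MAC address -> IP address of the remote VTEP it lives behind.
    mac_table: dict[str, str] = {}
    BUM_GROUP = "239.1.1.1"   # underlay multicast group for this segment

    def on_receive(inner_src_mac: str, outer_src_vtep: str) -> None:
        # Learn from received traffic: remember which VTEP the
        # source MAC sits behind.
        mac_table[inner_src_mac] = outer_src_vtep

    def next_hop(inner_dst_mac: str) -> str:
        # Known unicast is sent point-to-point; unknown unicast,
        # broadcast, and multicast are flooded to the group.
        return mac_table.get(inner_dst_mac, BUM_GROUP)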

Control-plane protocols are mostly employed in the Layer-3 routing space, where predominantly IP prefix information is exchanged. Over the past years, some of the well-known routing protocols have been extended to also learn and exchange Layer-2 MAC addresses. An early adoption of MAC addresses in a routing protocol was Cisco’s OTV (Overlay Transport Virtualization), which employs IS-IS to significantly reduce flooding across Data Center Interconnects (DCI).

Multi-Protocol BGP (MP-BGP) introduced a new Network Layer Reachability Information (NLRI) to carry both Layer-2 MAC and Layer-3 IP information at the same time. With the combined set of MAC and IP information available for forwarding decisions, optimized routing and switching within a network becomes feasible, and the need to flood in order to learn is minimized or even eliminated. This extension that allows BGP to transport Layer-2 MAC and Layer-3 IP information is called EVPN – Ethernet Virtual Private Network.
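
Conceptually, an EVPN Route Type 2 (MAC/IP Advertisement) lets every VTEP fill its tables from BGP updates rather than from flooded traffic. A rough Python sketch of the idea, with the field set simplified from the EVPN specification:

    from dataclasses import dataclass

    @dataclass
    class EvpnMacIpRoute:
        """Simplified EVPN Route Type 2 (MAC/IP Advertisement)."""
        mac: str          # Layer-2 reachability
        ip: str | None    # optional Layer-3 reachability for the same host
        vni: int          # VXLAN segment the host belongs to
        next_hop: str     # VTEP behind which the host lives

    mac_table: dict[str, str] = {}        # MAC -> remote VTEP
    arp_suppression: dict[str, str] = {}  # IP  -> MAC, answered locally

    def on_bgp_update(route: EvpnMacIpRoute) -> None:
        # Control-plane learning: no data-plane flooding required.
        mac_table[route.mac] = route.next_hop
        if route.ip is not None:
            arp_suppression[route.ip] = route.mac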

EVPN is documented in a series of IETF drafts.

Integrated Routing and Bridging (IRB) – VXLAN-EVPN offers significant advantages in Overlay networking by optimizing forwarding decisions within the network based on Layer-2 MAC as well as Layer-3 IP information. The decision to forward via routing or switching can be made as close as possible to the End-Host, on any given Leaf/ToR (Top-of-Rack) Switch. The Leaf Switch provides the Distributed Anycast Gateway for routing, which is completely stateless and does not require the exchange of protocol signaling for election or failover decisions. The reachability information available within the BGP control-plane is sufficient to provide the gateway service. The Distributed Anycast Gateway also provides the integrated routing and bridging (IRB) decision at the Leaf Switch, and it can be extended across a significant number of nodes. All the Leaf Switches host active default gateways for their respective configured subnets; the well-known active/standby semantic of First-Hop Redundancy Protocols (FHRP) no longer applies.
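
The IRB decision at the leaf can be pictured as a single check against the gateway’s virtual MAC, which is configured identically on every leaf. A conceptual Python sketch with illustrative values:

    # The same gateway virtual MAC is configured on every leaf, so a
    # host keeps its ARP entry unchanged when it moves between leaves.
    ANYCAST_GW_MAC = "00:00:de:ad:be:ef"   # illustrative value

    def irb_decision(dst_mac: str) -> str:
        if dst_mac == ANYCAST_GW_MAC:
            return "route"    # Layer-3 lookup against EVPN-learned host routes
        return "bridge"       # Layer-2 lookup in the EVPN-populated MAC table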

Summary – The advantages provided by a VXLAN-EVPN solution are briefly summarized as follows:

  • Standards-based Overlay (VXLAN) with a Standards-based Control-Plane (BGP)
  • Layer-2 MAC and Layer-3 IP information distribution by Control-Plane (BGP)
  • Forwarding decision based on Control-Plane (minimizes flooding)
  • Integrated Routing/Bridging (IRB) for Optimized Forwarding in the Overlay
  • Leverages Layer-3 ECMP – all links forwarding – in the Underlay
  • Significantly larger Name-Space in the Overlay (16M segments)
  • Integration of Physical and Virtual Networks with Hybrid Overlays
  • Facilitates Software-Defined Networking (SDN)

Simply formulated, VXLAN-EVPN provides a standards-based Overlay that supports Segmentation, Host Mobility, and High Scale.
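
Host mobility, in particular, becomes a pure control-plane event. A toy Python illustration of the sequence-number mechanism EVPN uses for MAC moves (per the EVPN MAC Mobility extended community; names and structures simplified):

    # When a host moves, the new leaf re-advertises its MAC with a
    # higher sequence number; every VTEP switches to the new next-hop.
    best = {"mac": "aa:bb:cc:dd:ee:ff", "next_hop": "10.0.0.1", "seq": 0}

    def on_mac_move(update: dict) -> None:
        global best
        if update["mac"] == best["mac"] and update["seq"] > best["seq"]:
            best = update   # traffic now forwards to the new VTEP

    on_mac_move({"mac": "aa:bb:cc:dd:ee:ff", "next_hop": "10.0.0.2", "seq": 1})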

VXLAN-EVPN is available on the Nexus 9300 (NX-OS 7.0), with the Nexus 7000/7700 (F3 line cards) to follow in the upcoming major release. Additional Data Center switching platforms, such as the Nexus 5600, will follow shortly after.

A detailed whitepaper on this topic is available on Cisco.com. In addition, VXLAN-EVPN was featured in several Cisco Live! sessions.

Do you have an appetite for more? Post a comment, tweet about it, and keep the conversation going… Thanks for reading, and Happy Networking!


Announcing Cisco Nexus 1000V for VMware vSphere 6 Release

The Cisco Nexus 1000V has been supported on the VMware vSphere hypervisor from the 4.0 release (August 2009) up to the current vSphere release, 5.5 update 2. We are happy to announce that the Nexus 1000V will continue to be supported in the latest vSphere 6 release, which VMware recently announced. Customers who are currently running the Nexus 1000V will be able to upgrade to the vSphere 6 release, and new vSphere 6 customers will have the Nexus 1000V among their choices for virtual networking.

Cisco is fully committed to supporting the Nexus 1000V product for our 10,000+ Advanced Edition customers and the thousands more using the Essential Edition software in all future releases of VMware vSphere. Cisco has a significant virtual switching R&D investment, with hundreds of engineers dedicated to the Nexus 1000V platform. The Nexus 1000V has been the industry’s leading virtual switching platform, with innovations such as VXLAN (the industry’s first shipping VXLAN platform) and the distributed zone firewall (via the Virtual Security Gateway, released in January 2011).

The Nexus 1000V also continues to be the industry’s only multi-hypervisor virtual switching solution that delivers enterprise-class functionality and features across vSphere, Hyper-V, and KVM.

In the last major release of the Nexus 1000V for vSphere, version 3.1 (August 2014), we added significant scaling and security features, and we have continued to provide subsequent updates (December 2014), with the next release planned for March 2015. The recently released capabilities include:

  • Increased scale per Nexus 1000V:
    • 250 hosts
    • 10,000 virtual ports
    • 1,000 virtual ports per host
    • 6,000 VXLAN segments with the ability to scale out via BGP
  • Increased security and visibility:
    • Seamless security policy from campus and WAN to data center with Cisco TrustSec tagging/enforcement capabilities
    • Distributed port-security for scalable anti-spoofing deployment
    • Enhanced L2 security and loop prevention with BPDU Guard
    • Protection against broadcast storms and/or attacks with storm control
    • Scalable flow accounting and statistics with Distributed NetFlow
  • Ease of management via Virtual Switch Update Manager (VSUM) – a vSphere web-client plug-in

One of the common questions coming from our customers is whether VMware is still reselling the Nexus 1000V and providing support for it through the VMware support organization.

As of February 2, 2015, VMware no longer offers the Nexus 1000V through VMware sales or sells support for the Nexus 1000V through the VMware support organization. We want to reiterate that this has NO IMPACT on the availability of the Nexus 1000V running in a vSphere environment or on the associated support from Cisco. Cisco will continue to sell the Nexus 1000V and offer support contracts. Cisco encourages customers who are currently using VMware support for the Nexus 1000V to migrate their support contracts to Cisco by contacting their local Cisco sales team, which can aid in this transition.

For questions or help, please reach out to nexus1000vinfo@cisco.com.


Video Demo: The Power of ACI Physical Network Visibility in an SDN Overlay Environment

[Note: Register today for our upcoming live ACI webcast: “Is Your Data Center Ready for the Application Economy”, January 13, 2015, 9 AM PT, Noon ET, featuring ACI customers and several key ACI technology partners.]

At the most recent Gartner Data Center Conference in Las Vegas, after some insightful discussions with customers and analysts, we came up with a great demo idea and proof point that highlights a key feature of our Application Centric Infrastructure (ACI) platform. This particular demo centers on the ACI Fabric’s unique visibility into faults in the underlying physical network.

In the video below, Joe Onisick, Principal Engineer on the ACI team at Cisco, compares this ability in ACI to SDN technologies that employ only virtual overlay networks. With overlay networks, such as a VXLAN tunnel, the resulting virtual network (and all the management and analytics tools) has a much harder time isolating faults within the physical infrastructure. The overlay is designed to “tunnel” through the physical network, simplifying but also obscuring the physical topology and issues with any specific network node. Before going much further, I’ll let Joe provide the details in this quick, 3-minute video:

Read More »
