John Chambers with Satya Nadella at ACI Launch
From the beginning, Microsoft has been a strategic partner with Cisco in the development of our Application Centric Infrastructure (ACI) technologies and solutions. In fact, Cisco CEO John Chambers shared the stage with Microsoft's Satya Nadella at the ACI launch several months ago in New York City.
In the data center, ACI is a holistic architecture with centralized automation and policy-driven application profiles. ACI delivers software flexibility with the scalability of hardware performance. Traditionally, IT approaches took a siloed operational view, with no common operational model between the application, network, security, and cloud teams. With ACI, a common network operational model delivers IT application agility, simplified operations across teams, assured network and application performance, and scale. Read More »
Tags: #CiscoACI, ACI, Cisco Nexus, Cisco UCS, Exchange, Hyper-V, Microsoft, Microsoft Sharepoint
Given the tremendous interest in VXLAN with the MP-BGP based EVPN Control-Plane (EVPN for short) at Cisco Live in Milan, I decided to write a "short" technology brief blog post on this topic.
VXLAN (IETF RFC7348) was designed to solve specific problems that Classical Ethernet has faced for decades. By introducing an abstraction through encapsulation, VXLAN has become the de-facto standard overlay of choice in the industry. Chief among the advantages VXLAN provides are the extension of today's limited VLAN space and improved scalability for Layer-2 domains.
Extended Namespace – The available VLAN space in the IEEE 802.1Q encapsulation is limited to a 12-bit field, which provides 4096 VLANs or segments. By encapsulating the original Ethernet frame with a VXLAN header, the newly introduced addressing field offers 24 bits, providing a much larger namespace of up to 16 million Virtual Network Identifiers (VNIs) or segments.
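The namespace arithmetic above, and the 8-byte VXLAN header that carries the VNI, can be sketched in a few lines of Python (an illustrative sketch of the RFC 7348 header layout, not production code):

```python
import struct

# Pack the 8-byte VXLAN header defined in RFC 7348.
# Flags byte 0x08 sets the I-bit (VNI field is valid); the 24-bit VNI
# occupies bytes 4-6, followed by a reserved byte.
def vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!B3xI", 0x08, vni << 8)

print(2**12)   # 802.1Q VLAN space: 4096 segments
print(2**24)   # VXLAN VNI space: 16,777,216 segments
print(vxlan_header(5000).hex())
```

The 24-bit field is what turns 4096 possible segments into roughly 16 million.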
While the VXLAN VNI allows unique identification of a large number of tenant segments, which is especially useful in high-scale multi-tenant deployments, it does not by itself address all the problems and requirements of large Layer-2 domains. It does, however, deliver significant improvements in the following areas:
- No dependency on Spanning-Tree protocol by leveraging Layer-3 routing protocols
- Layer-3 routing with Equal Cost Multi-Path (ECMP) allows all available links to be used
- Scalability, convergence, and resiliency of a Layer-3 network
- Isolation of Broadcast and Failure Domains
IETF RFC7348 – VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks
Scalable Layer-2 Domains
The abstraction provided by a VXLAN-like overlay does not inherently change the Flood & Learn behavior introduced by Ethernet. In typical VXLAN deployments, BUM (Broadcast, Unknown Unicast, and Multicast) traffic is forwarded via Layer-3 multicast in the underlay, which in turn aids the learning process so that subsequent traffic need not be subjected to this "flood" semantic. A control-plane is required to minimize this flood behavior by proactively distributing End-Host information to the participating entities (VXLAN Tunnel End Points, or VTEPs) in the same segment.
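The flood-and-learn behavior described above can be illustrated with a toy sketch (assumed, simplified logic; not an actual VTEP implementation) of one VTEP's MAC table for a single VNI:

```python
# Toy flood-and-learn sketch: unknown destinations are flooded (here,
# to an assumed underlay multicast group); source MACs are learned
# against the remote VTEP IP so later frames can be unicast-encapsulated.
mac_table = {}  # (vni, mac) -> remote VTEP IP

def forward(vni, src_mac, dst_mac, ingress_vtep):
    mac_table[(vni, src_mac)] = ingress_vtep   # learn on every frame
    dest = mac_table.get((vni, dst_mac))
    return dest if dest else "flood via underlay multicast group"

print(forward(5000, "aa:aa", "bb:bb", "10.0.0.1"))  # unknown -> flood
print(forward(5000, "bb:bb", "aa:aa", "10.0.0.2"))  # learned -> 10.0.0.1
```

A control-plane such as EVPN distributes these (MAC, VTEP) bindings proactively, so the "flood" branch is rarely, if ever, taken.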
Control-plane protocols are mostly employed in the Layer-3 routing space, where predominantly IP prefix information is exchanged. In recent years, some well-known routing protocols have been extended to also learn and exchange Layer-2 MAC addresses. An early adoption of MAC address distribution in a routing protocol was Cisco's OTV (Overlay Transport Virtualization), which employed IS-IS to significantly reduce flooding across Data Center Interconnects (DCI).
Multi-Protocol BGP (MP-BGP) introduced a new Network Layer Reachability Information (NLRI) to carry both Layer-2 MAC and Layer-3 IP information at the same time. With the combined set of MAC and IP information available for forwarding decisions, optimized routing and switching within a network becomes feasible, and the need to flood in order to learn is minimized or even eliminated. This extension that allows BGP to transport Layer-2 MAC and Layer-3 IP information is called EVPN – Ethernet Virtual Private Network.
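Conceptually, the combined MAC-plus-IP information EVPN distributes per end host looks roughly like the following record (field names are illustrative, not the BGP wire format; this mirrors the EVPN MAC/IP Advertisement route):

```python
from dataclasses import dataclass

# Illustrative model of the data carried in an EVPN MAC/IP Advertisement
# (Route Type 2): BGP distributes this so every VTEP learns both the
# L2 MAC and L3 IP of an end host without data-plane flooding.
@dataclass(frozen=True)
class MacIpRoute:
    rd: str          # route distinguisher of the advertising VTEP
    mac: str         # Layer-2 MAC address of the end host
    ip: str          # Layer-3 IP address of the end host
    l2_vni: int      # VXLAN segment used for bridging
    l3_vni: int      # VRF segment used for routing
    next_hop: str    # VTEP IP used as the VXLAN tunnel destination

route = MacIpRoute("10.1.1.1:5000", "00:50:56:aa:bb:cc",
                   "192.168.10.20", 5000, 50000, "10.1.1.1")
print(route.next_hop)
```

Because each route carries both addresses, a receiving switch can choose to bridge (MAC) or route (IP) toward the same next-hop VTEP.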
EVPN is documented in a set of IETF drafts.
Integrated Routing and Bridging (IRB) – VXLAN-EVPN offers significant advantages in overlay networking by optimizing forwarding decisions within the network based on Layer-2 MAC as well as Layer-3 IP information. The decision to forward via routing or switching can be made as close as possible to the End-Host, on any given Leaf/ToR (Top-of-Rack) switch. The Leaf switch provides the Distributed Anycast Gateway for routing, which operates statelessly and does not require the exchange of protocol signaling for election or failover decisions. The reachability information available within the BGP control-plane is sufficient to provide the gateway service. The Distributed Anycast Gateway also provides the integrated routing and bridging (IRB) decision at the Leaf switch, which can be extended across a significant number of nodes. All Leaf switches host active default gateways for their respective configured subnets; the well-known active/standby semantic of First-Hop Redundancy Protocols (FHRP) no longer applies.
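The IRB decision at a leaf switch can be sketched as follows (assumed, simplified logic for illustration; real switches make this decision in hardware):

```python
# Sketch of the IRB decision described above: traffic addressed to the
# distributed anycast gateway MAC is routed using BGP-learned host
# routes; anything else in the local segment is bridged. Every leaf
# uses the same gateway MAC, so no FHRP election or failover signaling
# is needed.
ANYCAST_GW_MAC = "00:00:de:ad:be:ef"     # identical on every leaf

def irb_decision(dst_mac, dst_ip, host_routes):
    if dst_mac == ANYCAST_GW_MAC:
        return f"route via {host_routes.get(dst_ip, 'default')}"
    return "bridge within L2 VNI"

routes = {"192.168.20.10": "VTEP 10.1.1.2"}
print(irb_decision(ANYCAST_GW_MAC, "192.168.20.10", routes))   # routed
print(irb_decision("00:50:56:aa:bb:cc", "192.168.10.5", routes))  # bridged
```

Because every leaf holds the same gateway identity and the same BGP-learned host routes, a host can move between leaves without any gateway failover event.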
Summary – The advantages provided by a VXLAN-EVPN solution are briefly summarized as follows:
- Standards based Overlay (VXLAN) with Standards based Control-Plane (BGP)
- Layer-2 MAC and Layer-3 IP information distribution by Control-Plane (BGP)
- Forwarding decision based on Control-Plane (minimizes flooding)
- Integrated Routing/Bridging (IRB) for Optimized Forwarding in the Overlay
- Leverages Layer-3 ECMP – all links forwarding – in the Underlay
- Significantly larger Name-Space in the Overlay (16M segments)
- Integration of Physical and Virtual Networks with Hybrid Overlays
- Facilitates Software-Defined Networking (SDN)
Simply formulated, VXLAN-EVPN provides a standards-based Overlay that supports Segmentation, Host Mobility, and High Scale.
VXLAN-EVPN is available on Nexus 9300 (NX-OS 7.0) with Nexus 7000/7700 (F3 linecards) to follow in the upcoming major release. Additional Data Center Switching platforms, like the Nexus 5600, will follow shortly after.
A detailed whitepaper on this topic is available on Cisco.com. In addition, VXLAN-EVPN was featured during the following Cisco Live! Sessions.
Do you have an appetite for more? Post a comment, tweet about it, and keep the conversation going … Thanks for reading and Happy Networking!
Tags: #CLEUR, Cisco, cisco live, Cisco Nexus, Cisco Nexus 9000, data center, EVPN, ietf, network, nexus, rfc7348, SDN, VXLAN
If you are involved in designing, supporting, or managing a data center, you will undoubtedly rely on technical support services from one or more vendors. In any data center there is always the risk of a hardware failure or of being impacted by a software defect. While relatively rare, hardware does occasionally fail, and you need technical support in place to deal with such problems. You may have invested in a few extra switches as backup, and you may also have failover mechanisms in place. Almost certainly you will have a support contract with your Cisco partner or with Cisco, so you have break/fix expertise on tap for when something goes wrong. This is critical support for your business, no debate from me.
Engineer Under Stress!
Now, arguably the most important resource you have in your data center is not so much individual switches, routers, or servers. It's your engineers, those who design and support your data center. If they have a problem, where and how do they get help? Who helps them when they are stretched? When business pressures mount? Of course, their colleagues and managers can and will help. Where, however, can they tap into additional sources of expertise so that they can become even more productive for you? This is where Cisco Optimization Services come in – including our award-winning Cisco Network Optimization Service (or "NOS" for short), Collaboration Optimization Service, and the one I'm involved with, Cisco Data Center Optimization Services.
Read More »
Tags: ACI, architecture, Cisco Nexus, Cisco UCS, cisco_services, data_center, OpenStack, optimization, SDN
The Cisco Nexus 1000V has been supported in the VMware vSphere hypervisor from the 4.0 release (August 2009) up to the current vSphere release, 5.5 update 2. We are happy to announce that the Nexus 1000V will continue to be supported in the latest vSphere 6 release, which VMware recently announced. Customers currently running the Nexus 1000V will be able to upgrade to vSphere 6, and new vSphere 6 customers will have the Nexus 1000V as one of their choices for virtual networking.
Cisco is fully committed to supporting the Nexus 1000V product for our 10,000+ Advanced Edition customers and the thousands more using the Essential Edition software in all future releases of VMware vSphere. Cisco has a significant virtual switching R&D investment, with hundreds of engineers dedicated to the Nexus 1000V platform. The Nexus 1000V has been the industry's leading virtual switching platform, with innovations including VXLAN (the industry's first shipping VXLAN platform) and a distributed zone firewall (via the Virtual Security Gateway, released in January 2011).
The Nexus 1000V also continues to be the industry’s only multi-hypervisor virtual switching solution that delivers enterprise class functionality and features across vSphere, Hyper-V and KVM.
In the last major release of the Nexus 1000V for vSphere, version 3.1 (August 2014), we added significant scaling and security features, and we have continued to provide subsequent updates (December 2014), with the next release planned for March 2015. The recently released capabilities include:
- Increased scale per Nexus 1000V:
  - 250 hosts
  - 10,000 virtual ports
  - 1,000 virtual ports per host
  - 6,000 VXLAN segments, with the ability to scale out via BGP
- Increased security and visibility:
  - Seamless security policy from campus and WAN to data center with Cisco TrustSec tagging/enforcement capabilities
  - Distributed port security for scalable anti-spoofing deployment
  - Enhanced L2 security and loop prevention with BPDU Guard
  - Protection against broadcast storms and/or attacks with storm control
  - Scalable flow accounting and statistics with Distributed NetFlow
- Ease of management via Virtual Switch Update Manager (VSUM) – a vSphere web-client plug-in
One of the common questions coming from our customers is whether VMware is still reselling the Nexus 1000V and supporting it via VMware support.
VMware has decided to no longer offer the Nexus 1000V through VMware sales or sell support for the Nexus 1000V through the VMware support organization as of February 2, 2015. We want to reiterate that this has NO IMPACT on the availability of, and associated support from Cisco for, the Nexus 1000V running in a vSphere environment. Cisco will continue to sell the Nexus 1000V and offer support contracts. Cisco encourages customers who are currently using VMware support for the Nexus 1000V to migrate their support contracts to Cisco by contacting their local Cisco Sales team to aid in this transition.
For questions or help, please reach out to email@example.com
Tags: ACI, Cisco Nexus, Cisco UCS, Nexus1000V, VMware, VMware vSphere, vsg, vsphere 6, VXLAN
Over the last 12 months I've been doing a lot of work involving the Cisco Nexus 1000v, and during this time I came to realise that there wasn't a huge amount of recent information available online about it.
Because of this, I'm going to put together a short post covering what the 1000v is, along with a few points around its deployment.
What is the Nexus 1000v?
The blurb on the VMware website defines the 1000v as “..a software switch implementation that provides an extensible architectural platform for virtual machines and cloud networking.”, and the Cisco website says, “This switch: Extends the network edge to the hypervisor and virtual machines, is built to scale for cloud networks, forms the foundation of virtual network overlays for the Cisco Open Network Environment and Software Defined Networking (SDN)”
So that's all fine and good, but what does this mean for us? Well, the 1000v is a software-only switch that sits inside the ESXi (or KVM or Hyper-V, if they're your poison) hypervisor and leverages VMware's built-in Distributed vSwitch functionality.
Read More »
Tags: #ciscochampion, Cisco Nexus, Nexus 1000v