Cisco has a broad spectrum of customers across a wide range of markets and geographies. These customers have a diverse set of requirements, operational models and use cases, meaning a one-size-fits-all SDN strategy simply does not work. As a result, we made a series of announcements earlier this summer (at Cisco Live San Diego) that continued to showcase how our SDN strategy provides customers with a high degree of choice and flexibility. This blog will review key elements of the strategy, as well as provide a bit of background and context around them.
Cisco’s SDN strategy for the Data Center is built on three key pillars:
- Application Centric Infrastructure (ACI)
- Programmable Fabric
- Programmable Network
This approach enables our customers to choose the implementation option that best meets their IT and business goals by extending the benefits of programmability and automation across the entire Nexus switching portfolio. Let’s consider each of these pillars.
A lot has been said and written about ACI already, so I’ll keep this section brief. ACI is Cisco’s flagship SDN offering and the most comprehensive SDN solution in the industry. Based on an application-centric policy model, ACI provides automated, integrated provisioning of both underlay and overlay networks, L4-L7 services provisioning across a broad set of ecosystem partners, and extensive telemetry for application-level health monitoring. These capabilities deliver a solution that is agile, open, and secure, offering customers benefits no other SDN solution can.
I know the paragraph above was a bit of a mouthful. For a quick snapshot of what that all translates to in terms of actually helping a customer, check out this report from IDC. If you want to learn more about ACI, go here.
This pillar is all about providing scale and simplicity to VXLAN Overlays. Beyond that, it provides a clear path forward for the overall Nexus portfolio to participate in and derive the benefits of SDN.
VXLAN has gained huge momentum across the industry for a wide variety of reasons that, in many cases, involve improvements over traditional technologies such as VLANs and Spanning Tree. These include more efficient bandwidth use via Equal-Cost Multipath (ECMP) routing, higher theoretical scalability with 16 million segments, and more flexibility through an overlay model upon which multi-tenant cloud networks can be built. As momentum for VXLAN networks grows, so does the demand for two key things:
- A standards based approach to scale out VXLANs, and
- Simplified provisioning and management of them.
Regarding a standards-based approach to scale out VXLANs, Cisco is now supporting the Multiprotocol BGP (MP-BGP) EVPN control plane on Nexus switches. Why does this matter? Well, the original VXLAN spec (RFC 7348) relied on a multicast-based flood-and-learn mechanism without a control plane for certain key functions (e.g. VTEP peer discovery and remote end-host reachability). This is a suboptimal approach. To overcome its inherent limitations, the IETF developed MP-BGP EVPN as a standards-based control plane for VXLAN overlays. This reduces traffic flooding on the overlay network, yielding a more efficient and more scalable approach.
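To make the contrast concrete, here is a toy Python sketch (not Cisco code; the class names and addresses are invented for illustration) of the difference between data-plane flood-and-learn and control-plane learning:

```python
# Conceptual sketch: contrast RFC 7348 flood-and-learn with an EVPN-style
# control plane for learning remote MAC-to-VTEP bindings.

class FloodAndLearnVtep:
    """Flood-and-learn: unknown destinations are flooded to all VTEPs in
    the segment; mappings are learned from received data packets."""
    def __init__(self):
        self.mac_table = {}  # MAC address -> remote VTEP IP

    def forward(self, dst_mac):
        vtep = self.mac_table.get(dst_mac)
        if vtep is None:
            return "flood"   # BUM traffic hits every VTEP in the segment
        return vtep

    def learn(self, src_mac, src_vtep):
        self.mac_table[src_mac] = src_vtep  # data-plane learning


class EvpnVtep(FloodAndLearnVtep):
    """EVPN-style: MAC/IP reachability is advertised over BGP before
    traffic flows, so lookups rarely miss and flooding drops sharply."""
    def receive_route(self, mac, vtep_ip):
        self.mac_table[mac] = vtep_ip       # control-plane learning


# With EVPN, the binding arrives via a BGP route advertisement first:
vtep = EvpnVtep()
vtep.receive_route("00:11:22:33:44:55", "10.0.0.2")
assert vtep.forward("00:11:22:33:44:55") == "10.0.0.2"  # no flood needed
```

The point of the sketch is the direction of learning: flood-and-learn only populates its table after flooding traffic, while the control-plane model populates it proactively.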
As far as the second item, simplified provisioning and management, Cisco announced an overlay management and provisioning system. This new solution, called Virtual Topology System (VTS), automates provisioning of the overlay network so as to enhance the deployment of cloud-based services. Through an automated overlay provisioning model and tight integration with third-party orchestration tools such as OpenStack and VMware vCenter, VTS simplifies overlay provisioning and management for both physical and virtual workloads by eliminating manually intensive network configuration tasks. These whiteboard sessions provide an overview and also a bit more technical detail, if you’re interested.
Infrastructure programmability is a big deal because it drives automation, which drives speed, which is an obvious prerequisite for the success of just about any business dealing with digital disruption. As programmability evolves, Cisco continues to roll out more capabilities across the Nexus portfolio. We have a broad range of features in this space, including open programmable APIs, integration with third-party DevOps and automation tools, custom application development, and Bash shell access. This set of capabilities within NX-OS underpins the Programmable Network pillar. Let’s consider how this may be useful for you.
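As a flavor of what "open APIs" means in practice, here is a minimal Python sketch of building an NX-API JSON-RPC request to run show commands on a Nexus switch. It assumes NX-API has been enabled on the box (`feature nxapi`); the request is only constructed here, not actually sent, and any hostname or credentials would be your own.

```python
# Sketch: wrapping CLI commands in the NX-API JSON-RPC envelope so a
# script can get structured JSON back instead of screen-scraping CLI text.
import json

def nxapi_cli_payload(commands):
    """Build the JSON-RPC body for a list of CLI commands."""
    return [
        {
            "jsonrpc": "2.0",
            "method": "cli",
            "params": {"cmd": cmd, "version": 1},
            "id": i + 1,
        }
        for i, cmd in enumerate(commands)
    ]

payload = nxapi_cli_payload(["show version", "show interface brief"])
body = json.dumps(payload)

# A script would then POST `body` to https://<switch>/ins with
# Content-Type: application/json-rpc and HTTP basic auth, then parse
# the structured result for each command.
```

The payoff is that the same request pattern works from any tool that speaks HTTP and JSON, which is exactly how third-party automation frameworks plug into the switch.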
A while ago, a small number of customers with very large networks started shifting the way they operated. Their networks were growing because (not too surprisingly) the number of users, and thus servers, was growing rapidly. As server counts climbed, they realized they had a choice:
- Hire a zillion new sys admins, or
- Brutally overwork their existing sys admins, or
- Deploy and manage servers in new and different ways.
The last option won out (in many cases, anyhow), and the revelation was automation. That is, tools that automated server deployment and management helped these sys admins and their employers scale the business. In the process, they paid close attention to metrics like the number of servers a given admin was managing. These “device to admin” ratios went up a lot, in some cases by orders of magnitude. With automation tools and other changes (to culture, process, etc.), some companies saw admins managing not tens or hundreds of servers, but thousands. They also started experimenting with and employing DevOps, a term that at this point has a multitude of meanings, but is defined here in simple English.
As these elements have converged, people across different silos have started to collaborate a bit more, and as a result, tips, tricks and tools have started to spill across the silos. So, for example, as sys admins saw efficiency gains from using tools like Puppet and Chef to automate tasks on their servers, there was a desire to use the same tools on networks. In other cases, someone who was comfortable with Linux and wanted to work from a Bash shell wanted to use those commands for configuration and troubleshooting on the network as well as servers. Others wanted APIs that would allow extraction of all sorts of arcane box info to be massaged and acted upon by scripts and other tools.
Essentially, there was a need for more elements of the box to be more accessible and programmable in a wide variety of ways. It’s worth noting that although these trends started with a small subset of customers, many of the elements are working their way out to a much broader, more diverse cross section of customers. As this evolution has occurred, Cisco has been adding more programmability to the Nexus switches. This paper provides a more detailed view of various use cases and the functionality Nexus provides.
In summary, these three pillars of ACI, Programmable Fabric and Programmable Network provide a wide range of capabilities to help our customers across the broad spectrum of challenges they face. In the coming weeks and months, we’ll provide more information, here as well as in other venues, to help you better understand the strategy and its components. If this blog was too geeky and you’re looking for upleveled info, we’ll have that. If this was too fluffy and you want more technical depth, we’ll have that as well. To punctuate this point, I’ll be hosting a webinar on September 15 that will cover the above in more detail. You can register here.
Tags: ACI, automation, Cisco Nexus, cloud, data center, Nexus switching, Programmable, VXLAN
As customers embrace a cloud strategy to build an agile data center, one of the key pillars is openness. Why openness? Moving fast, accelerating time-to-market, driving a higher level of innovation, and avoiding vendor lock-in are some of its benefits. What does open mean? In this case: open source, open standards, open interfaces, open APIs, and open tool sets spanning automation, orchestration and DevOps.
Come and join Cisco at ONS, June 15-18, 2015, to learn how we’ve been at the forefront of developing and contributing to the open source community. Hear our speakers: Tom Edsall on Data Center SDN Solutions, June 18 at 2:00 pm; Mike Cohen at the partner theater, June 18 at 12:40 pm; and others.
See demos in Cisco’s booth on OpenStack; Group-Based Policy (GBP), which captures application requirements directly rather than converting them into a set of infrastructure configuration updates; OpenDaylight; and more. In the solutions showcase section, you’ll see a service chaining demo with Avi and One Convergence.
What else are we doing to drive openness in the data center? The BGP-EVPN control plane, which defines how VXLAN tunnel endpoints map MAC addresses to IP addresses in a multi-vendor environment; the Network Service Header (NSH), which offers a method to identify a network service path; OpFlex, an extensible policy protocol designed to exchange abstract policy between a network controller and a set of smart devices capable of rendering policy; open SDN with ACI; and many more.
Tags: ACI and Open Stack, BGP-EVPN, NSH, ons, OpFlex, SDN, VXLAN
Interest in Software Defined Networking (SDN) continues to grow through its ability to make networks more programmable, flexible and agile. This is accomplished by accelerating application deployment and management, simplifying and automating network operations, and creating a more responsive IT model.
Cisco is extending its leadership in SDN and Data Center Automation solutions with the announcement today of Cisco Virtual Topology System (VTS), which improves IT automation and optimizes cloud networks across the entire Nexus switching portfolio. Cisco VTS focuses on the management and automation of VXLAN-based overlay networks, a critical foundation for both enterprise private clouds and service providers. The announcement of the VTS overlay management system follows on Cisco’s announcement earlier this year supporting the EVPN VXLAN standard, which underlies the VTS solution.
Cisco VTS extends the Cisco SDN strategy and portfolio, which includes Cisco Application Centric Infrastructure (ACI) as well as Cisco’s programmable NX-OS platforms, to a broader market and additional use cases. These include our massive installed base of Nexus 2000-7000 products, and customers whose primary SDN challenge is the automation, management and ongoing optimization of their virtual overlay infrastructure. With support for the EVPN VXLAN standard, VTS furthers Cisco’s commitment to open SDN standards and increases interoperability in heterogeneous switching environments, with third-party controllers, and with cloud automation tools that sit on top of the open northbound APIs of the VTS controller.
Tags: ACI, application centric infrastructure, EVPN, SDN, Virtual Topology System, VTS, VXLAN
This week, May 13-14, ONUG, or the Open Networking User Group, will meet at Columbia University’s Alfred Lerner Hall in New York City, NY.
ONUG is the leading user-driven community of IT business leaders, CTOs and network architects, including those implementing SDN, who are focused on leveraging the power of their engineering and procurement to influence the pace and deployment of open networking solutions.
If you are planning on attending, I’d like to provide you with a quick overview of the activities Cisco will be participating in at the Open Networking User Group.
On conference day 1, May 13, the SD-WAN and the Virtual Network Overlay Working Groups will present their top ten findings and present their work.
Check out the SD-WAN Working Group Update with Cisco speaker, Steve Wood, Principal Engineer, Enterprise Routing, from 10:00-10:45 am.
Then during the Technology Showcase Break, meet Sumanth Kakaraparthi, Product Manager, Enterprise Routing and Bill Reilly, Technical Marketing Engineer, Enterprise Routing who will deliver an IWAN/SD-WAN Demo at the Cisco demo station.
Next, attend the Virtual Networks/Overlays Working Group Update with Cisco speaker, Mike Cohen, Director of Product Management, Insieme Networks, on May 13 from 12:00-12:45 pm.
Following these updates will be a luncheon presentation: “Faster WAN Delivery: Software Defined WAN-as-a-Service” on May 13 from 1:30-2:30 pm delivered by Cisco speaker, Jeff Reed, VP, Enterprise Infrastructure and Solutions Group. Jeff will be joined by partner speakers: Jeff Gray, Glue Networks CEO and Matt Cook, Forsythe Sr. Director – Network & Workspace Solutions.
From 4:05-5:00 pm, there will be a lively debate on “Closed vs. Open Source Software” moderated by Ernest Lefner, Bank of America, between Charles Giancarlo, Silver Lake, taking the pro-closed position and Lew Tucker, Cisco VP/CTO for OpenStack, taking the pro-open position. You can carry on the debates yourselves afterwards at the Cocktail Reception from 5:00-7:00 pm.
The next day, May 14, from 2:45-3:45 pm there will be a Town Hall Meeting with leaders from Facebook, Ansible, Nuage, vArmour and our own Mike Dvorkin, Cisco Distinguished Engineer, Insieme Networks, who will all speak on “Will the DevOps Model Deliver in the Enterprise?”.
Finally, that evening join us at a Cisco Sponsored After Party from 5:00 – 9:00 pm.
For Further Information
Cisco Intelligent WAN
ONUG Blog – VXLAN Comes of Age with BGP-EVPN
MP-BGP eVPN control plane for VXLAN – SDN is growing up
Cisco Border Gateway Protocol Control Plane for Virtual Extensible LAN
VXLAN Network with MP-BGP EVPN Control Plane
LinkedIn Groups Open-Networking-User-Group
Tags: EVPN, IWAN, ONUG, Open Networking, SDN, VXLAN
With the adoption of overlay networks as the standard deployment model for multi-tenant networks, Layer 2-over-Layer 3 protocols have become favorites among network engineers. One of the Layer 2-over-Layer 3 (or Layer 2-over-UDP) protocols adopted by the industry is VXLAN. Now, as with any other overlay network protocol, its scalability is tied to how well it handles Broadcast, Unknown unicast and Multicast (BUM) traffic. That is where the evolution of the VXLAN control plane comes into play.
The VXLAN standard does not define a “standard” control plane. There are several drafts describing the use of different control planes. The most commonly used VXLAN control plane is multicast. It is implemented and supported by multiple vendors and is even natively supported in server operating systems such as the Linux kernel.
This post tries to summarize the three control planes currently supported across Cisco NX-OS and IOS-XR. My focus is mostly on the Nexus 7000, Nexus 9000, Nexus 1000V and CSR 1000v.
Each control plane may have a series of caveats of its own, but those are not covered in this blog entry. Let’s start with some VXLAN definitions:
(1) VXLAN Tunnel Endpoint (VTEP): maps tenants’ end devices to VXLAN segments and performs VXLAN encapsulation/de-encapsulation.
(2) Virtual Network Identifier (VNI): identifies a VXLAN segment. It is a 24-bit field, theoretically giving us 2^24 = 16,777,216 segments (valid VNI values are from 4096 to 16777215). Each segment can transport 802.1Q-encapsulated packets, theoretically giving us 2^12 = 4096 VLANs over a single VNI.
(3) Network Virtualization Endpoint or Network Virtualization Edge (NVE): the overlay interface configured on Cisco devices to define a VTEP.
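A quick Python sketch makes the VNI arithmetic above concrete by packing the 8-byte VXLAN header from RFC 7348. This is an illustration only; a real VTEP would prepend this header to the inner Ethernet frame inside a UDP datagram (destination port 4789).

```python
# Sketch: the 8-byte VXLAN header, showing why the 24-bit VNI field
# yields 2**24 (16,777,216) possible segments.
import struct

VXLAN_FLAG_I = 0x08  # "I" bit set means the VNI field is valid

def vxlan_header(vni):
    """Pack flags (8 bits) + reserved (24), then VNI (24 bits) + reserved (8)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    return struct.pack(">II", VXLAN_FLAG_I << 24, vni << 8)

def parse_vni(header):
    """Recover the VNI from a packed VXLAN header."""
    _, word2 = struct.unpack(">II", header)
    return word2 >> 8

hdr = vxlan_header(5000)
assert len(hdr) == 8
assert parse_vni(hdr) == 5000
assert 2**24 == 16_777_216  # the "16 million segments" figure
```

Compare that with the 12-bit VLAN ID in an 802.1Q tag (2^12 = 4096 values), and the scalability argument for VXLAN segments falls straight out of the header format.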
VXLAN with Multicast Control Plane
Tags: #ciscochampion, Cisco Nexus 9000, CSR1000v, Nexus 1000, Nexus 7000, Nexus 9000, VXLAN