I find Linux containers among the most fascinating technology trends of the recent past. Containers couple lightweight, high-performance isolation and security with the ability to easily package services and deploy them in a flexible, scalable way. Many companies find these value propositions compelling enough to build, manage, and deploy enterprise applications on containers. Adding further momentum to container adoption is Docker, a popular open source platform that addresses key requirements of Linux container deployment, performance, and management. If you are into historical parallels, the evolution and growth of Docker can be equated to that of the Java programming language, which brought in its wake the promise of “write once, run everywhere”. Docker containers bring the powerful capability of “build once, run everywhere”. It is therefore not surprising to see a vibrant ecosystem building up around Docker.
The purpose of this blog is to discuss the close alignment between Cisco ACI and containers. Much like containers, Cisco ACI provides accelerated application deployment with scale and security. In doing so, Cisco ACI seamlessly brings together applications across virtual machines (VMs), bare-metal servers, and containers.
Let us take a closer look at how containers address issues associated with hypervisor-based virtualization. Hypervisor-based virtualization has been a dominant technology over the past two decades, delivering compelling ROI through server consolidation. However, it is well known that hypervisors introduce workload-dependent overheads while replicating native hardware behavior. Furthermore, application portability must be taken into account when dealing with hypervisors.
Linux containers, on the other hand, provide self-contained execution environments and isolate applications using primitives such as namespaces and control groups (cgroups). These primitives make it possible to run multiple environments on a single Linux host with strong isolation between them, while bringing efficiency and flexibility. A quick glance at the architecture of hypervisor-based versus container-based virtualization makes it apparent that Docker-based containers bring portability across hosts, versioning, and reuse. No discussion of Docker containers is complete without mentioning the DevOps benefits. The Docker framework -- together with tools such as Vagrant -- aligns tightly with DevOps practices. With Docker, developers can focus on their code without worrying about the side effects of running it in production, while operations teams can treat the entire container as a separate entity when managing deployments.
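To make these primitives a little more concrete, every Linux process belongs to a set of namespaces, and a container runtime such as Docker simply gives each container its own set of them (plus cgroups for resource limits). A minimal, Linux-only sketch of how a process can inspect its own namespace and cgroup membership:

```python
import os

# Each symlink under /proc/self/ns names a namespace this process belongs to
# (pid, net, mnt, uts, ipc, ...). A container runtime creates fresh
# namespaces per container, which is what provides the isolation.
namespaces = sorted(os.listdir("/proc/self/ns"))
print(namespaces)

# cgroup membership (the resource-control side) is visible the same way.
with open("/proc/self/cgroup") as f:
    print(f.read().strip())
```

Run the same snippet inside a container and on the host, and the namespace inodes (visible via `os.readlink`) will differ -- that difference is the isolation boundary.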
ACI and Containers
Cisco Application Centric Infrastructure (ACI) offers a common policy model for managing IT applications across the entire data center infrastructure. ACI is agnostic to the form factor on which applications are deployed: it supports bare-metal servers, virtual machines, and containers, and this native portability makes it a natural fit for containers. Moreover, ACI’s unified policy language offers customers a consistent security model regardless of how the application is deployed. With ACI, workloads running in existing bare-metal and VM environments can seamlessly integrate with, or migrate to, a container environment.
The consistency of ACI’s policy model is striking. In ACI, policies are applied to endpoint groups (EPGs), which are abstractions of network endpoints. The endpoints can be bare-metal servers, VMs, or containers. As a result of this flexibility, ACI can apply policies across a diverse infrastructure that includes Linux containers. The illustration below shows ACI’s flexible policy model applied to an application workload spanning bare-metal servers, VMs, and Docker containers.
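As a rough illustration of the EPG idea, the sketch below groups endpoints of mixed form factors into EPGs and expresses policy once, between groups. Note this is purely illustrative: the structure loosely mirrors ACI concepts (tenant, application profile, EPGs, contracts) but is not the actual APIC REST payload format, and all names are hypothetical.

```python
# Illustrative only: loosely mirrors the ACI concepts of tenant ->
# application profile -> EPGs -> contracts; not the real APIC API.
def build_app_profile(name, epgs, contracts):
    return {"tenant": "demo", "app_profile": name,
            "epgs": epgs, "contracts": contracts}

# An EPG groups endpoints regardless of form factor: the same group can
# mix VMs, containers, and bare-metal servers.
web_epg = {"name": "web",
           "endpoints": [{"type": "vm", "id": "web-vm-1"},
                         {"type": "container", "id": "web-ctr-1"}]}
db_epg = {"name": "db",
          "endpoints": [{"type": "bare-metal", "id": "db-host-1"}]}

# Policy is expressed once, between EPGs, not per machine or hypervisor.
contract = {"from": "web", "to": "db", "allow": ["tcp/3306"]}

profile = build_app_profile("three-tier-app", [web_epg, db_epg], [contract])
print(profile["app_profile"])
```

The key point the sketch captures is that adding a new container to the "web" EPG requires no new policy: membership in the group is what determines which contracts apply.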
You may recall that Cisco announced broad endorsement of the OpFlex protocol at Interop Las Vegas 2014. We are currently working on integrating OpFlex and Open vSwitch (OVS) with ACI to enforce policies across VMs and containers in the early part of next calendar year.
As container adoption matures, managing large numbers of containers at scale becomes critical. Many open source initiatives are actively working on the scalability, scheduling, and resource management of containers. OpenStack, Mesos, and Kubernetes are among the open source communities in which Cisco is actively engaged to advance ACI integration with open source tools and solutions.
With containers, we have seen only the tip of the iceberg. Docker containers are beginning to gain traction in private clouds and traditional data centers, and Cisco ACI plays a pivotal role by applying its unified policy model across a diverse infrastructure comprising bare metal, VMs, and containers.
Tags: ACI Policy Model, bare metal, Cisco ACI, Cisco APIC, docker, Linux Containers, opflex protocol, virtual machines
Customers gain great value from server virtualization in the form of virtual machines (VMs) and, more recently, Linux containers and Docker in data centers, clouds, and branches. By some estimates (IDC), more than 60% of workloads are virtualized, although less than 16% of physical servers run a hypervisor. From a networking perspective, the hypervisor virtual switch on these virtualized servers is a critical component in all current and future data center, cloud, and branch designs and solutions.
As we count down to the annual VMworld conference and reflect on the introduction of the Cisco Nexus 1000V in vSphere 4.0 six years ago, we can feel proud of what we have achieved. We congratulate VMware for their partnership and success in opening vSphere networking to third-party vendors. It was beneficial for our joint customers and for both companies; VMware and Cisco could be considered visionaries in this sense. Recognizing this success, the industry has followed.
We similarly praise Microsoft for providing an open environment for third-party virtual switches within Hyper-V, which has continued to gain market share recently. Cisco and Microsoft (along with other industry players) are leading the industry with the latest collaboration on submitting the OpFlex control protocol to the IETF. Microsoft’s intention to enable OpFlex support in their native Hyper-V virtual switch enables standards-based interaction with virtual switches -- another win for customers and the industry.
In KVM and Xen environments, many organizations have looked at Open vSwitch (OVS) as an open source alternative. There is interest in richer networking than the standard Linux bridge provides, and in using OVS as a component for implementing SDN-based solutions such as network virtualization. We think there is an appetite for OVS on other hypervisors as well. Cisco is committed to contributing to and improving these open source efforts: we are active contributors to the Open vSwitch project and are diligently working to open source our OpFlex control protocol implementation for OVS in the OpenDaylight consortium.
To recap the thoughts above, Table 1 provides a quick glance at the options for virtual networking from multiple vendors as of today:
Table 1: Hypervisors and Choices in Virtual Switches

Hypervisor        | Native vSwitch                                         | 3rd-party or open source vSwitch
VMware vSphere    | Distributed Virtual Switch                             | Cisco Nexus 1000V; Cisco Application Virtual Switch; IBM DVS 5000V; HP Virtual Switch 5900V
Microsoft Hyper-V | Native Hyper-V switching                               | Cisco Nexus 1000V
KVM               | Linux bridge (some distributions include OVS natively) | OVS (an open source project with contributions from multiple vendors and individuals); Cisco Nexus 1000V
As an IT professional, whether you are running workloads on Red Hat KVM, Microsoft Hyper-V, or VMware vSphere, it is difficult to imagine not having a choice of virtual networking. For many customers, this choice still means using the hypervisor’s native vSwitch. For others, it is about having an open source alternative like OVS. And in many other cases, the option of selecting an enterprise-grade virtual switch has been key to increasing virtualization deployments, since it enables consistent policies and network operations between virtual machines and bare-metal workloads.
As the table above shows, the Cisco Nexus 1000V continues to be the industry’s only multi-hypervisor virtual switching solution that delivers enterprise-class functionality and features across vSphere, Hyper-V, and KVM. Currently, over 10,000 customers have selected the Cisco Nexus 1000V on vSphere, Hyper-V, KVM, or a combination of them.
Cisco is fully committed to the Nexus 1000V for vSphere, Hyper-V, and KVM, and to the Application Virtual Switch (AVS) for Application Centric Infrastructure (ACI), in addition to our open source contributions to OVS. Cisco has a large R&D investment in virtual switching, with many talented engineers dedicated to this area, including those working on open source contributions.
The Nexus 1000V 3.0 release for vSphere is slated for general availability in August 2014. This release addresses the scale requirements of our growing customer base and adds an easy installation tool, the Cisco Virtual Switch Update Manager. The Cisco AVS for vSphere will bring the ACI policy framework to virtual servers. With ACI, customers will for the first time benefit from a true end-to-end virtual-plus-physical infrastructure managed holistically, providing visibility and optimal performance for heterogeneous hypervisors and workloads (virtual or physical). These innovations and choices are enabled by the availability of open choices in virtual switching within hypervisors.
As we look forward to VMworld next month, we are excited to continue the collaborative work with platform vendors VMware, Microsoft, Red Hat, and Canonical, and with the open source community, to maintain and continue developing openness and choice for our customers. We are fully committed to this vision at Cisco.
Acknowledgement: Juan Lage (@juanlage) contributed to this blog.
Tags: application centric infrastructure, Application Virtual Switch, AVS, Canonical, KVM, Microsoft Hyper-V, Nexus1000V, open source, opendaylight, OpFlex, opflex protocol, OVS, RedHat, VMware vSphere, vmworld, vmworld 2014
Last week at Red Hat Summit in San Francisco, Cisco Data Center was well represented in speaking sessions and the solutions expo. I saw lots of traffic at our demo booth featuring Cisco ACI with OpenStack. Customers and partners alike showed great interest in how Cisco APIC integrates with OpenStack and enriches data center operations. We showed the powerful capabilities of Cisco’s Neutron plug-in implementation and how workflow functions like “create network”, “create subnet and VLAN”, and “create security groups” can be elegantly accomplished from the OpenStack console and aligned with the APIC object model via the APIC-OpenStack API integration. View the demo here: http://youtu.be/pWMXTb237Vk
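To give a flavor of what sits behind those workflow steps, the sketch below builds the kind of request bodies the Neutron v2.0 API accepts for "create network" and "create subnet". This is a simplified sketch: endpoint URLs, authentication, and the network IDs are omitted or made up, and the mapping of these calls onto APIC objects happens server-side in the Neutron plug-in, not in client code like this.

```python
import json

def create_network_body(name):
    # Body shape for POST /v2.0/networks in the Neutron v2.0 API.
    return {"network": {"name": name, "admin_state_up": True}}

def create_subnet_body(network_id, cidr):
    # Body shape for POST /v2.0/subnets; "net-1234" below is a
    # made-up ID standing in for the UUID Neutron would return.
    return {"subnet": {"network_id": network_id,
                       "cidr": cidr,
                       "ip_version": 4}}

net = create_network_body("web-tier")
subnet = create_subnet_body("net-1234", "10.0.1.0/24")
print(json.dumps(net))
print(json.dumps(subnet))
```

With the APIC-integrated plug-in, each such call is translated into the corresponding APIC object-model constructs, which is why the same operations appear consistently in both the OpenStack console and the APIC.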
ACI with OpenStack demo
We also presented two sessions, one titled “Deploying OpenStack with Cisco networking, compute, & storage” and the other “Automating Red Hat Enterprise Linux deployments with Cisco ACI & OpenStack”. We talked about plans to introduce the group policy model from ACI into OpenStack so that DevOps and NetOps teams can streamline and automate their work while focusing on application and tenant needs at a policy level.
The benefit is that the Group Policy plug-in provides APIs for building application network profiles, including service-chain requirements. Both OVS and the ACI fabric then implement the full policy, including distributed L2, L3, and security. ACI also allows customers to separate tenant policies from operations: tenants manage their applications while the ACI admin manages network operations and infrastructure using policy, all with automation that speeds up OpenStack operations.
There was also strong interest in the OpFlex protocol, which Cisco announced at Interop a few weeks ago, and in how it opens up the ACI policy framework to a broad ecosystem. We had many other demos showing our OpenStack integration from a UCS, Nexus 1000V, and UCS Director standpoint, rounding off a 360-degree view of our commitment to broad industry initiatives.
I want to shift focus now to two cool videos recorded last week by the dynamic team of Joe Onisick and Lilian Quan from the Insieme Business Unit at Cisco. Joe emphasizes traffic flows within the ACI fabric and the application of policy, while Lilian covers the magic behind how traffic is handled within the ACI fabric, with emphasis on re-routing, bounce handling, ARP flooding avoidance, and more.
Stay tuned for more videos on the ACI fabric in the near future. We also have a slew of whitepapers coming up that will cover the APIC and ACI fabric innovations. Check out the recently posted APIC Policy Model whitepaper, which walks you through the basics of the object-oriented policy model, the spine-leaf network architecture and its benefits, APIC policy enforcement, unicast and multicast policy enforcement, and the concept of endpoint groups (EPGs) -- all concepts you will find extremely valuable as you consider a policy-based network architecture for your data center needs.
I will be covering more exciting news on the ACI front as we approach Cisco Live San Francisco. Stay tuned!
APIC Policy Model whitepaper: http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-731310.html
OpFlex -- An Open Policy Protocol: http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-731304.html
OpFlex -- An Open Source Approach: http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-731303.html
ACI-OpenStack demo: http://youtu.be/fYQDvKVg-ag
OpFlex announcement: http://blogs.cisco.com/datacenter/introducing-opflex-a-new-standards-based-protocol-for-application-centric-infrastructure/
Tags: ACI, ACI with OpenStack, cisco live san francisco 2014, neutron plug-in, opflex protocol, spine-leaf architecture
The Best of Interop Awards for 2014 were announced today at 5:30 PM at the Interop Theater. The Cisco Nexus 9516 switch won the Best of Interop award in the Data Center category. Check out http://www.interop.com/lasvegas/expo/best-of-interop-awards.php for details.
Cisco Nexus 9516 is the winner in the Data Center category
To learn more about this unique product, you can refer to the following posts.
Best of Interop Finalist 2014 – Meet Cisco Nexus 9516 the new big, bad boy of Nexus 9000 Family
Best of Interop Awards: Cisco APIC and Nexus 9516 Switch Selected as Finalists
I want to extend my congratulations to the entire team at the Insieme Network Systems Business Unit and recognize their hard work in developing this award-winning switch.
Cisco appreciates the recognition from the Interop judges, and it is a great compliment to the recognition the Nexus 9000 is getting from its customers. Six customers deploying the Nexus 9000 in their cloud and data center environments will be speaking at our live webcast:
New Applications Are Knocking: Is your Data Center OPEN for Business?
Join us on April 2 at 1:00 PM PDT / 4:00 PM EDT to hear Cisco’s Soni Jiandani, SVP of Marketing, and Rebecca Jacoby, CIO, along with leading technology executives from partner companies. We will also discuss today’s major technology announcement on OpFlex, so join us! Register Here
Cisco and Industry Leaders Will Deliver Open, Multi-Vendor, Standards-Based Networks for Application Centric Infrastructure with OpFlex Protocol
Tags: APIC, best of interop, Cisco Nexus 9516, Data Center category, opflex protocol