
Top 3 Ways New ACI Innovations Improve Your Data Center

Yesterday, Cisco announced a new software release for ACI. If you are looking to automate IT or build out your cloud environment, and you want to do so in an open fashion that provides plenty of flexibility, then you’ll probably be interested.

Why? The new ACI release:

  1. Makes managing and securing your cloud environment easier;
  2. Provides openness, expanding customer choice; and
  3. Delivers operational flexibility

OK, so what does this actually mean?

  1. Makes managing and securing your cloud environment easier

Three of the most popular cloud management tools are Microsoft Azure Pack, OpenStack, and VMware vRealize. Earlier this year, we announced Windows Azure Pack ACI integration. With this new ACI release, we integrate ACI with OpenStack and vRealize as well. (More details are here.) This means that if you need to, say, provision a virtual workload in vCenter, ACI automagically orchestrates the networking infrastructure to match the compute resources. So you can enjoy policy-based automation and all the other benefits of ACI regardless of which of these tools you use to manage your cloud environment.

This also means OpenStack users can now create and manage their own virtual networks, extending ACI policy directly into the hypervisor with a hardware-accelerated, fully distributed OpenStack networking solution – the only one available that integrates both physical and virtual environments.
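To make that concrete, here is a minimal sketch of what the tenant-side workflow could look like using the openstacksdk Python library, assuming the ACI OpenStack integration is installed on the Neutron side. The cloud name, network name, and subnet below are placeholders, not part of the announcement.

import openstack

# Connect using credentials from clouds.yaml; "my-cloud" is a placeholder.
conn = openstack.connect(cloud="my-cloud")

# Create a tenant network and subnet; with the ACI OpenStack integration in
# place, these Neutron objects are reflected as policy on the fabric.
network = conn.network.create_network(name="web-tier")
subnet = conn.network.create_subnet(
    name="web-tier-subnet",
    network_id=network.id,
    ip_version=4,
    cidr="10.10.1.0/24",
)
print("Created network", network.name, "with subnet", subnet.cidr)

Nothing here is ACI-specific from the user’s point of view, which is the point: the tenant keeps the familiar OpenStack workflow while the fabric handles the policy underneath.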

To more easily and completely secure these environments, the new release provides micro-segmentation support for VMware VDS, Microsoft Hyper-V virtual switch, and bare-metal endpoints. Essentially, this means more granular enforcement of security policies. Policies can be based on a range of criteria tied to attributes of the network (e.g., IP address) or of the virtual machine (e.g., VM identifier or name). There are additional capabilities that can, for example, disable communication between devices within a policy group (intra-EPG, for those more familiar with ACI), which is useful in thwarting lateral movement of attacks.
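For the intra-EPG isolation capability just mentioned, here is a hedged sketch of how it could be toggled through the APIC REST API using Python’s requests library. The APIC hostname, credentials, and tenant/application/EPG names are placeholders, and the attribute names should be verified against your APIC release.

import requests

APIC = "https://apic.example.com"      # placeholder APIC address
session = requests.Session()

# Authenticate; the APIC returns a session cookie that requests keeps for us.
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
    verify=False,
)

# Set the EPG's intra-EPG enforcement preference to "enforced" so endpoints
# in the same group can no longer talk to each other directly.
payload = {"fvAEPg": {"attributes": {"name": "web-epg", "pcEnfPref": "enforced"}}}
resp = session.post(
    f"{APIC}/api/mo/uni/tn-demo/ap-demo-app/epg-web-epg.json",
    json=payload,
    verify=False,
)
resp.raise_for_status()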

  2. Provides openness, expanding customer choice

Piggybacking off some comments above, it’s worth noting that since ACI’s inception, one of its differentiators has been the ability to integrate physical servers as well as virtual machines, and to apply policy consistently across them. Well, now there’s a new kid on the block, as containers become an increasingly popular way of operating applications. As part of this announcement, we are extending ACI support to include Docker containers, in addition to VMs and bare-metal servers. This is done using Project Contiv, an open source project whose Docker network plugin allows, among other things, automatic configuration of Docker hosts to integrate with ACI. Check out this video and/or this white paper for details. Network Computing commented here that:

“Given all the hubbub in the industry over Docker, ACI’s new Docker container support is noteworthy.”
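As a rough illustration of the container side of this, the sketch below uses the Docker SDK for Python to attach a container to an existing Docker network. In a Contiv/ACI setup, that network would already have been created by the plugin and mapped to ACI policy; the network and image names here are placeholders.

import docker

client = docker.from_env()

# Attach a new container to a pre-existing Docker network. In a Contiv/ACI
# deployment, "aci-web-net" would have been created by the network plugin
# and mapped to ACI policy on the fabric.
container = client.containers.run(
    "nginx:alpine",
    name="web-1",
    network="aci-web-net",
    detach=True,
)
print(container.name, container.status)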

Another way this new release is driving openness and providing more choice for customers is around L4-L7 services. ACI now supports service insertion and chaining for any service device. So customers can keep their existing model of deploying and operating their L4-L7 devices while automating the network connectivity. This is in addition to, not instead of, the device package model, which provides more comprehensive ‘soup to nuts’ automation. Speaking of which, as part of this announcement, several new partners also joined the ACI Ecosystem. This video provides some insight into how some of them automate your applications.

  3. Delivers operational flexibility

The new release includes a number of tools that create a more flexible operating environment. A quick rundown includes the multi-site app, which enables policy-driven automation across multiple data centers, providing enhanced application mobility and disaster recovery. In short, this means you can run ACI in two different data centers and extend policy across them. Other tools provide the ability to do configuration rollback, as well as an NX-OS-style CLI for the CLI junkie who wants to run the entire ACI fabric as a single switch. There are some other cool nuggets in here as well, like a heat map that provides real-time visibility into system health.
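As a hedged example of pulling that same health data programmatically rather than through the heat map view, the sketch below queries the APIC REST API for a fabric-wide health score. The URL, credentials, and object class are assumptions to verify against your APIC version.

import requests

APIC = "https://apic.example.com"      # placeholder APIC address
session = requests.Session()
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
    verify=False,
)

# Query the fabric-wide health object (the data behind the heat map view).
resp = session.get(f"{APIC}/api/node/mo/topology/health.json", verify=False)
resp.raise_for_status()
for obj in resp.json().get("imdata", []):
    attrs = obj.get("fabricHealthTotal", {}).get("attributes", {})
    print("Current fabric health score:", attrs.get("cur"))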

Clayton Weise, Director of Cloud Services at KeyInfo, summed it up best when he said:

“ACI is the direction we’re going to go because it gives us the best flexibility.” (Read the entire Network World story here.)

In summary, this new release adds capabilities that will help you more effectively manage and secure your cloud environment, as well as leverage the benefits of both openness and operational flexibility.

usNIC inside Linux containers

Linux containers, as a lighter-weight virtualization alternative to virtual machines, are gaining momentum. The High Performance Computing (HPC) community is eyeing Linux containers with interest, hoping that they can provide the isolation and configurability of virtual machines without the performance penalties.

In this article, I will show a simple example of libvirt-based container configuration in which I assign the container one of the ultra-low latency (usNIC) enabled Ethernet interfaces available in the host. This allows bare-metal performance of HPC applications, but within the confines of a Linux container.
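As a rough sketch of the idea (not the author’s actual configuration, which follows in the full article), the snippet below uses the libvirt Python bindings to define an LXC container that takes exclusive ownership of one host Ethernet interface. The interface name, memory size, and init command are placeholders.

import libvirt

# Domain definition for a libvirt LXC container that takes exclusive
# ownership of the host interface "eth4" (e.g. a usNIC-enabled port).
DOMAIN_XML = """
<domain type='lxc'>
  <name>hpc-container</name>
  <memory unit='KiB'>4194304</memory>
  <vcpu>4</vcpu>
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
  </os>
  <devices>
    <hostdev mode='capabilities' type='net'>
      <source>
        <interface>eth4</interface>
      </source>
    </hostdev>
    <console type='pty'/>
  </devices>
</domain>
"""

conn = libvirt.open("lxc:///")     # connect to the local libvirt LXC driver
dom = conn.defineXML(DOMAIN_XML)   # register the container definition
dom.create()                       # start the container
print(dom.name(), "active:", dom.isActive() == 1)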

Read More »


Cisco ACI Policy Model Spans Physical, Virtual, and Container-Based Environments

I find Linux containers among the most fascinating technology trends of the recent past. Containers couple lightweight, high-performance isolation and security with the ability to easily package services and deploy them in a flexible and scalable way. Many companies find these value propositions compelling enough to build, manage, and deploy enterprise applications with them. Adding further momentum to container adoption is Docker, a popular open source platform that addresses key requirements of Linux container deployment, performance, and management. If you are into historical parallels, I can equate Docker’s evolution and growth to the Java programming language, which brought in its wake the promise of “write once run everywhere”. Docker containers bring the powerful capability of “build once and run everywhere”. It is therefore not surprising to see a vibrant ecosystem being built up around Docker.

The purpose of this blog is to discuss the close alignment between Cisco ACI and containers. Much like containers, Cisco ACI provides accelerated application deployment with scale and security. In doing so, Cisco ACI seamlessly brings together applications across virtual machines (VMs), bare-metal servers, and containers.

Let us take a closer look at how containers address issues associated with hypervisor-based virtualization. Hypervisor-based virtualization has been a dominant technology over the past two decades, delivering compelling ROI through server consolidation. However, it is well known that hypervisors introduce workload-dependent overhead while replicating native hardware behavior. Furthermore, application portability remains a consideration when dealing with hypervisors.

Linux containers, on the other hand, provide self-contained execution environments and isolate applications using primitives such as namespaces and control groups (cgroups). These primitives make it possible to run multiple environments on a Linux host with strong isolation between them, while bringing efficiency and flexibility. An architectural illustration of hypervisor-based and container-based virtualization is worth a quick glance. As the illustration below shows, Docker-based containers bring portability across hosts, versioning, and reuse. No discussion of Docker containers is complete without a mention of DevOps benefits. The Docker framework, together with tools such as Vagrant, aligns tightly with DevOps practices. With Docker, developers can focus on their code without worrying about the side effects of running it in production. Operations teams can treat the entire container as a separate entity when managing deployments.

[Figure: Hypervisor-based versus container-based virtualization architectures]
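To make the namespace and cgroup point tangible, here is a small, hedged example using the Docker SDK for Python: the container runs in its own namespaces while cgroups cap its memory and CPU. The image and limit values are arbitrary examples.

import docker

client = docker.from_env()

# The container gets its own PID/network/mount namespaces, while cgroups
# cap it at 256 MB of memory and half a CPU.
container = client.containers.run(
    "python:3-slim",
    command=["python", "-c", "print('hello from an isolated namespace')"],
    mem_limit="256m",
    nano_cpus=500_000_000,   # 0.5 CPU
    detach=True,
)
container.wait()             # wait for the short-lived command to finish
print(container.logs().decode())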

ACI and Containers

Cisco Application Centric Infrastructure (ACI) offers a common policy model for managing IT applications across the entire data center infrastructure. ACI is agnostic to the form factors on which applications are deployed: it supports bare-metal servers, virtual machines, and containers, and this form-factor independence makes it a natural fit for containers. In addition, ACI’s unified policy language offers customers a consistent security model regardless of how an application is deployed. With ACI, workloads running in existing bare-metal and VM environments can seamlessly integrate with and/or migrate to a container environment.

The consistency of ACI’s policy model is striking. In ACI, policies are applied across endpoint groups (EPGs), which are abstractions of network endpoints. The endpoints can be bare-metal servers, VMs, or containers. As a result of this flexibility, ACI can apply policies across a diverse infrastructure that includes Linux containers. The illustration below shows ACI’s flexible policy model applied to an application workload spanning bare-metal servers, VMs, and Docker containers.

[Figure: ACI policy model applied to an application workload spanning bare-metal servers, VMs, and Docker containers]
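As a hedged sketch of the EPG abstraction described above, the snippet below creates a tenant, an application profile, and one EPG through the APIC REST API; whatever endpoints later join that EPG, whether bare metal, VMs, or containers, receive the same policy. Hostname, credentials, and object names are placeholders.

import requests

APIC = "https://apic.example.com"      # placeholder APIC address
session = requests.Session()
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
    verify=False,
)

# Tenant -> application profile -> EPG; the EPG is the policy anchor that
# bare-metal, VM, or container endpoints can all join.
tenant = {
    "fvTenant": {
        "attributes": {"name": "demo"},
        "children": [
            {"fvAp": {
                "attributes": {"name": "web-app"},
                "children": [{"fvAEPg": {"attributes": {"name": "web-epg"}}}],
            }}
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False)
resp.raise_for_status()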

You may recall that Cisco announced broad endorsement of the OpFlex protocol at Interop Las Vegas 2014. We are currently working on integrating OpFlex and Open vSwitch (OVS) with ACI to enforce policies across VMs and containers in the early part of the next calendar year.

As container adoption matures, managing large numbers of them at scale becomes critical. Many open source initiatives are actively working on scalability, scheduling, and resource management for containers. OpenStack, Mesos, and Kubernetes are among the open source communities Cisco is actively engaged with to advance ACI integration with open source tools and solutions.

With containers, we have seen only the tip of the iceberg. Docker containers are beginning to gain traction in private clouds and traditional data centers. Cisco ACI plays a pivotal role by extending its unified policy model across a diverse infrastructure comprising bare metal, VMs, and containers.

For more information, refer to:

http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-732697.html

http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/dc-partner-red-hat/linux-containers-white-paper-cisco-red-hat.pdf


New Technologies for the Delivery of Services

Reducing the complexity of deploying and managing services, accelerating new service introduction, and reducing capital and operational expenditure overhead are key priorities for network operators today. These priorities are in part driven by the need to generate more revenue per user, but competitive pressures and increasing demand from consumers are also pushing operators to experiment with new and innovative services. These services may require unique capabilities specific to a given network operator, along with the ability to tailor service characteristics on a per-consumer basis. This evolved service delivery paradigm mandates that the network operator be able to integrate policy enforcement alongside the deployment of services, applications, and content, while maintaining optimal use of available network capacity and resources. Read More »
