Yesterday, Cisco announced a new software release for ACI. If you are looking to automate IT, or build out your cloud environment, and want to do so in an open fashion that provides a lot of flexibility – then you’ll probably be interested.
Why? The new ACI release:
- Makes managing and securing your cloud environment easier;
- Provides openness, expanding customer choice; and
- Delivers operational flexibility
OK, so what does this actually mean?
- Makes managing and securing your cloud environment easier
Three of the most popular cloud management tools are Microsoft Azure Pack, OpenStack and VMware vRealize. Earlier this year, we announced Microsoft Azure Pack integration with ACI. With this new ACI release, we integrate ACI with OpenStack and vRealize as well. (More details are here.) This means that if you need to, say, provision a virtual workload in vCenter, ACI automagically orchestrates the networking infrastructure to match the computing resources. So you can enjoy the policy-based automation and all the other benefits of ACI regardless of which of these tools you use to manage your cloud environment.
This also means OpenStack users can now create and manage their own virtual networks, extending ACI policy directly into the hypervisor with a hardware-accelerated, fully distributed OpenStack networking solution – the only one available that integrates both physical and virtual environments.
To more easily and completely secure these environments, the new release provides micro-segmentation support for VMware VDS, Microsoft Hyper-V virtual switch, and bare-metal endpoints. Essentially, this means more granular enforcement of security policies. These can be based on a number of criteria tied to attributes of the network (e.g., IP address) or of the virtual machine (e.g., VM identifier or name). There are additional capabilities that can, for example, disable communication between devices within a policy group (intra-EPG, for those more familiar with ACI) – useful in thwarting lateral movement of attacks.
- Provides openness, expanding customer choice
Piggybacking off some comments above, it’s worth noting that since ACI’s inception, one of its differentiators has been the ability to integrate physical servers as well as virtual machines, and to apply policy consistently across them. Well, now there’s a new kid on the block, as the industry observes an increasingly popular trend to use containers as another way of operating applications. As part of this announcement, we are extending ACI support to include Docker containers, in addition to VMs and bare-metal servers. This is done by using Project Contiv, an open source project that includes a Docker network plugin allowing, among other things, automatic configuration of Docker hosts to integrate with ACI. Check out this video and/or this white paper for details. Network Computing commented here that:
“Given all the hubbub in the industry over Docker, ACI’s new Docker container support is noteworthy.”
Another way this new release is driving openness and providing more choice for customers is around L4-L7 services. ACI now supports service insertion and chaining for any service device. So, customers can leverage their existing model of deploying and operating their L4-L7 devices, while automating the network connectivity. This is in addition to, not instead of, the device package model, which provides for more comprehensive ‘soup to nuts’ automation. Speaking of which, as part of this announcement, several new partners also joined the ACI Ecosystem. This video provides some insight into how some of them automate your applications.
- Delivers operational flexibility
The new release has a number of tools that create more flexible operating environments. A quick rundown includes the multi-site app, which enables policy-driven automation across multiple datacenters, providing enhanced application mobility and disaster recovery. In short, this means you can run ACI in two different data centers and extend the policy across them. Other tools provide the ability to do configuration rollback, as well as an NX-OS-style CLI. This is for the CLI junkie who wants to run the entire ACI fabric as a single switch. There are some other cool nuggets in here as well, like a heat map that provides real-time visibility into system health.
Clayton Weise, Director of Cloud Services at KeyInfo, summed it up best when he said:
“ACI is the direction we’re going to go because it gives us the best flexibility.” (Read the entire Network World story here.)
In summary, this new release adds capabilities that will help you more effectively manage and secure your cloud environment, as well as leverage the benefits of both openness and operational flexibility.
Tags: #CiscoACI, #ciscodatacenter, ACI, API, cloud, Cloud Computing, containers, data center, docker, L4-7 Services, Linux Containers, Open, SDN, security
One of the biggest disruptions in the IT world is upon us. Ten years ago it was server virtualization; more recently, the adoption of cloud – both private and public. One could argue that cloud adoption is still ongoing. But I think a more fundamental disruption is happening in the way applications are going to be built, deployed and operated in the future.
By now, almost everyone is familiar with industry buzzwords such as containers/Docker, microservices and DevOps. We are in some ways skeptical of these buzzwords, as we have seen many fizzle over the longer term. But these technologies/architectures enable the enterprise to build cloud-native applications and run them at scale. They will help organizations make the most of public and private cloud deployments and will result in cloud adoption increasing exponentially.
Many still believe that the primary benefits of containers come from the technology optimizations they bring when compared to Virtual Machines (VMs): for instance, the significant scale increase (more than 10x per-host density), smaller footprint (memory, CPU, hard disk), or the faster create/destroy cycle (milliseconds vs. minutes). While these things are indeed very relevant, the real benefits are broader than just infrastructure advantages. The two main benefits are, first, how container technology is ideally suited to enabling newer ways to develop applications (continuous integration and delivery), and second, how you can scale applications (through a microservices architecture) and port them between different infrastructure environments (public or private).
Microservices architectures are transforming the way applications are architected and built. I can remember the days of waiting for our IT to roll out an update to my favorite application, because the timelines were always multiple months if not years. Hopefully, those days are going to be a thing of the past with the current ability to construct applications in a more easily developable, updatable and scalable microservices framework.
Although there are numerous projects and tools available in the marketplace for IT to set up the infrastructure, there is still a need for admins to be able to specify the infrastructure operational policies around network, storage, security and compute for containerized applications in an automated way, and to have those policies implemented consistently across the infrastructure. If no such mechanism exists, we could have resource contention between production and development applications, security violations between different applications/tenants, and overall unpredictable application performance. We believe there has to be a better way for containerized applications to run on shared infrastructure.
Introducing Project Contiv
Project Contiv is an open source project defining infrastructure operational policies for container-based application deployment. Application intent, such as a Docker Compose file, allows for declarative specification of an application’s microservices composition. Project Contiv complements application intent with the ability to specify infrastructure operational policies for the network, storage and compute elements of the physical and virtual infrastructure, directly mapping the application intent to the infrastructure policy required.
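To make “application intent” concrete, here is a minimal two-tier Compose file sketch. The `io.contiv.*` label names are hypothetical, shown only to suggest how intent metadata on each service could be mapped to an infrastructure policy; they are not the plugin’s actual schema.

```shell
# Hypothetical sketch: write a two-tier application intent as a Compose file.
# The io.contiv.* labels below are illustrative, not Contiv's real label names.
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: nginx
    labels:
      io.contiv.tenant: "blue"
      io.contiv.net-group: "web-tier"
  db:
    image: postgres
    labels:
      io.contiv.tenant: "blue"
      io.contiv.net-group: "db-tier"
EOF
```

A policy engine reading this intent could then, for example, allow web-tier to db-tier traffic only on the database port.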
Project Contiv Architecture
So what are some of the infrastructure operational policies that most IT organizations expect to specify for containerized applications?
- Security policies for applications for inbound/outbound as well as within application tiers
- Network services policies: integration of L4-L7 services (load balancers, firewalls, encryption, etc.)
- Analytics and diagnostics policies
- Physical infrastructure policies around bandwidth limit/guarantee per container, latency requirements, etc.
- IP allocation management (IPAM) policies
- Storage policies around persistent storage, volume allocation, snapshotting, etc.
- Compute policies around performance requirements/off-load (to NIC or network), SLAs, etc.
- Corporate and government compliance policies
So with Project Contiv, we hope to help you optimize and achieve saner shared infrastructure for your various containerized applications.
We believe the best way to achieve this objective is to build a community of like-minded people: join Project Contiv and contribute at http://www.contiv.io to enable enterprise-grade applications to be adopted more rapidly.
Currently there are two projects that enable networking and storage for Docker-based container deployments.
Contiv Networking is a container network plugin that provides infrastructure and security policies for a multi-tenant microservices deployment, while providing integration with the physical network for communicating with non-container workloads. Contiv Networking implements the remote driver and IPAM APIs available in Docker 1.9 onwards. For more information, visit https://github.com/contiv/netplugin
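As a rough sketch of what this looks like from the Docker CLI (the network and container names here are our own, and the commands assume a Docker 1.9+ daemon with the Contiv plugin already running):

```shell
# Create a network backed by the Contiv remote driver, using its IPAM too.
docker network create --driver netplugin --ipam-driver netplugin contiv-net
# Containers attached to it get connectivity and policy from Contiv.
docker run -itd --net contiv-net --name web nginx
```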
Contiv Volume plugin is a Docker volume plugin that provides multi-tenant, persistent, distributed storage with intent-based consumption, using Ceph underneath. For more information, visit https://github.com/contiv/volplugin
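A similarly hedged sketch for the storage side (the volume name and mount path are illustrative, and assume the volplugin driver is installed):

```shell
# Create a Ceph-backed volume through the Contiv volume driver, then mount it.
docker volume create -d volplugin --name appdata
docker run -itd --volume-driver volplugin -v appdata:/var/lib/data busybox
```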
Our introduction talk by Vipin Jain (@jainvipin_), core developer of Project Contiv, got a very encouraging start at the Docker Meetup in Palo Alto last month, with 250 registered attendees (and about 100 on the waitlist). If you are visiting DockerCon Europe 2015 in Barcelona next week, make sure you visit the Project Contiv booth for a demo and connect with us in person. We look forward to your contributions in the container community and on the Project Contiv GitHub.
Project Contiv at Docker Palo Alto Meetup
I also encourage you to visit Cisco’s open source project Mantl, around microservices infrastructure. Project Contiv will soon be part of Project Mantl, bringing better infrastructure to your microservices applications.
Tags: containers, docker, Mantl, shipped
During the OpenStack Summit last week, we released Mantl 0.4. In this blog I would like to go into more details about the release. But first I’d like to start by explaining what Mantl is – and what it is not.
System Integration as Open Source
Mantl is a layered stack that takes care of system integration. It does this by using tools at different layers – Terraform to provision Virtual Machines and Apache Mesos & Kubernetes for cluster management. Higher level services are taken care of by tools, such as Consul for service discovery, or by custom Apache Mesos frameworks, which are currently used for processing data.
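A hedged sketch of the provisioning layer in practice; the variable names below are illustrative rather than Mantl’s actual Terraform interface:

```shell
# Illustrative only: fetch the provider modules, then provision the VMs.
terraform get
terraform apply -var 'control_count=3' \
                -var 'worker_count=5'
# With the VMs up, configuration management installs Mesos, Marathon,
# Consul and the rest of the stack on top.
```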
You could say that Mantl creates the “glue” that enables hybrid cloud. But this is too dry an explanation for us. The truth is that Mantl has three design goals: Build; Deploy; Run.
- Firstly, it aims to shorten the development cycle. Most programmers recollect feelings of joy when they first coded. However, as web development rose in conjunction with the monolith, coding became as much, if not more, about configuration management as about application development. The extension of the feedback cycle, as well as not being much fun, seriously stunted productivity.
Currently it’s the same for cloud applications. Developers spend excessive amounts of time provisioning machines, opening ports and managing clusters when they could be developing their applications. One of the tenets of Mantl is that it creates a ‘place to innovate’. It does this by making the cloud invisible, allowing developers to do what they do best: build innovative applications and get them into users’ hands as quickly as possible.
- Secondly, Mantl aims to gently coach developers, helping them to write cloud native applications. Many developers, understandably so, design their first cloud applications as they would have their old three-tier systems. With a gentle opinion, Mantl nudges developers towards containerized services and multi-language systems, while at the same time creating a bridge between the traditional and the cloud native.
- Thirdly, Mantl aims to make interaction with the cloud as simple as possible. Famously, Joel Spolsky said that all abstractions leak. What this means is that you can never hide the underlying abstraction: virtual machines are bound by the hardware they run on; compilers are bound by underlying machine architectures. It’s the same for cloud: you cannot totally abstract the platform away. However, if you must interact with it, you should do so at the right level of abstraction. Mantl provides a number of tools that make this easier. It relies on Docker containers and Terraform, for example, but also provides custom tooling, such as MiniMesos.
In summary, Mantl coaches, shortens the development life cycle and provides abstractions at the appropriate levels. In addition to this, it provides data-tooling.
Let’s now look at some of the innovations from release 0.4.
Mantl 0.4 includes a new WebUI that connects to the various applications (Mesos / Marathon / Chronos / Consul). For example, users can now access Mesos agent logs through an authenticated UI.
Backed by Consul service discovery, the new UI automatically connects to the correct Mesos masters and agents.
We’re very excited to announce support for the first release of Mantl-API.
Mantl API provides a new way for you to manage Mantl clusters. With the first release, you can easily install pre-built applications and Mesos frameworks. With a single API call, you can now spin up Cassandra on your Mantl cluster.
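For instance, that single API call might look like the following sketch. The control-node address and port are assumptions for illustration; only the `install` action and the application name come from the description above.

```shell
# Hedged sketch: host and port are assumptions, not Mantl-API defaults.
curl -X POST http://control-node:4001/1/install \
     -d '{"name": "cassandra"}'
```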
We think Mantl-API will be useful for anyone who is currently running Mesos.
Support for deploying GlusterFS as a shared filesystem has been added.
DNS provider support
We’ve added example code to configure DNS registration of Mantl nodes in DNSimple. Thanks to contributors, we will be adding support for other DNS providers, like Route 53 and Google Cloud. We’ll make these more configurable when Terraform supports conditional logic.
Calico IP per container networking (tech preview).
Calico is a new virtual networking solution that enables IP-per-container functionality. Calico connects Docker containers through IP no matter which worker node they are on.
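A hedged sketch of the calicoctl workflow of this era; the profile name and address are illustrative, and the exact sub-commands varied by Calico release:

```shell
# Start the per-host Calico agent, launch a workload, give it an IP,
# then attach it to a security profile. All names here are examples.
sudo calicoctl node
docker run -itd --name web nginx
sudo calicoctl container add web 192.168.0.10
sudo calicoctl profile add WEB
sudo calicoctl container web profile append WEB
```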
Data Tooling Built In
The ELK stack is built into Mantl as Apache Mesos frameworks. This means that developers can use Mantl’s Terraform modules to provision a cluster, setup the system, and immediately start building data-driven applications.
On its own, this functionality is powerful. However, because Mantl uses Apache Mesos frameworks for its data tooling, it can (and does) take advantage of Mesos’ scheduling and hardware utilization features. In addition to this, the frameworks provide extra functionality.
Let’s look at three features of the ElasticSearch framework. Firstly, the framework allows the scaling of the cluster via a GUI – it thus provides the right level of abstraction for developers to interact with the cluster. Secondly, it provides a visualization of the cluster, including where the PRIMARY and REPLICA shards are located. Thirdly, through the GUI, developers can search the cluster, which is handy for testing and debugging.
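The same information is also reachable through Elasticsearch’s standard HTTP API, which can be handy alongside the GUI; the `es-node` host name below is an assumption:

```shell
# Where are the PRIMARY and REPLICA shards?
curl 'http://es-node:9200/_cat/shards'
# Ad-hoc search against the indexed data, useful for testing and debugging.
curl 'http://es-node:9200/_search?q=prospero&size=3&pretty'
```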
Please note, although these features are in progress, they are currently on the experimental branch.
Image 1 – ElasticSearch Framework GUI with the works of Shakespeare on a three machine cluster.
The Mantl Developer Tools – MiniMesos
One of the problems with Apache Mesos is that it’s hard to set up. In his O’Reilly article, “Swarm v. Fleet v. Kubernetes v. Mesos”, Adrian Mouat says that ‘Mesos is a low-level, battle-hardened scheduler that supports several frameworks for container orchestration including Marathon, Kubernetes, and Swarm’. However, he goes on to say that for small clusters it may be an ‘overly complex solution’.
Mantl uses Mesos because it’s battle-hardened. But since one of Mantl’s goals is to make interaction with complex tools as simple as possible, the teams building Mantl created MiniMesos.
MiniMesos provides an abstraction layer over Apache Mesos. MiniMesos allows developers to run, test and even share their clusters. Since MiniMesos can bring a cluster up in milliseconds and lets developers test their code before checking in, it radically shortens the developer lifecycle. Importantly, MiniMesos can be used from the command line or via its API, making automated system testing easy.
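For a flavor of the command-line side (the sub-commands below are our reading of the MiniMesos CLI at this release; a running Docker daemon is assumed):

```shell
minimesos up         # bring up a containerized Mesos cluster locally
minimesos info       # show master, agent and framework endpoints
minimesos destroy    # tear the whole cluster down again
```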
MiniMesos now has its own Twitter account and website. It is one of many innovations to come out of the Mantl program and has captured the imagination of the community. Pini Reznik, CTO of Container Solutions, which is part of the team working on Mantl, says that ‘MiniMesos is to Apache Mesos what Docker is to LXC’.
Image 2 – MiniMesos Command Line Interface as it is implemented in Mantl 0.4. More commands to come, including ‘install’ for quickly adding frameworks.
Check out the video on MiniMesos.
There are many use cases for Mantl. One of the most interesting emerging patterns is around IoT. At DockerCon in November, we hope to reveal the Wheel of Fortune application. The Wheel of Fortune connects a physical wheel to a REST endpoint. The endpoint is part of an application that scales automatically and displays the data via a web application.
At first glance the Wheel of Fortune may seem like a bit of fun. However, collecting data, big or otherwise, from the IoT for storage and analysis is a key aim of Mantl. Because Mantl abstracts the underlying infrastructure away or makes it invisible, developers can get busy building and deploying their big data applications without worrying about system integration.
Another interesting use case is hybrid devops: the ability for enterprises to develop their applications leveraging Cisco Shipped (ciscoshipped.io) the way they always have, then leverage Mantl to deploy them on any external cloud environment Mantl supports (AWS, GCE, Digital Ocean, Rackspace, Cisco Cloud), in a CI/CD framework that lets the application leverage both internal and external services.
We are making Mantl more modular, so that you can select the scheduling, logging and networking components you want to deploy.
The team is also committed to automated testing, and we’ll be testing Mantl against multiple cloud providers daily.
Features on the roadmap include:
- Better HAProxy support
- Improved Docker storage leveraging Cisco Contiv
- Full integration of HashiCorp Vault
- Modular networking leveraging Cisco Contiv
- Simplified API management
- Application policy intent leveraging Cisco Contiv
- New deployment and management tools
Modern enterprises face three often-competing tensions. Firstly, they have to learn how to build cloud native applications. This involves much more than recreating monoliths in the cloud; it involves changes in process but also in structure. As enterprises encompass small and medium-sized companies in their supply chains, they have to have a structure that supports language-agnostic microservices.
Secondly, the challenge of big data is calling all companies. Enterprises not only need to tap into the power of data scientists and developers, they also have to actively work around organizational scar tissue. It is impossible to work with large amounts of data and to test new algorithms against production data whilst carrying decades’ worth of old processes and procedures around. The new enterprise can be agile and take advantage of big data. What it can’t be is bureaucratic and take advantage of big data – these two simply cannot coexist.
Finally, all enterprises must deal with governance. This includes security, operations and a shift towards DevOps or NoOps.
Mantl helps enterprises resolve the tension between these three challenges. Mantl enables repeatable and simple deployment procedures through its use of programmable infrastructure tools, like Docker and Terraform. Mantl promotes the microservice architecture and by default supports systems built in multiple languages by multiple teams. This means that enterprises can take advantage of an extended, horizontally aligned supply chain. Finally, Mantl is both IoT- and Big Data-ready and friendly. Through its use of abstraction, programmers and data scientists can focus on what they do best whilst leaving system integration to Mantl.
● Mantl’s website, http://mantl.io/.
● MiniMesos’ website, http://minimesos.org/.
● Cisco Shipped website, http://ciscoshipped.io.
● Cisco Contiv website, http://contiv.io.
● ‘The Law of Leaky Abstractions’, Joel Spolsky, http://www.joelonsoftware.com/articles/LeakyAbstractions.html.
● ‘Swarm v. Fleet v. Kubernetes v. Mesos’, Adrian Mouat, http://radar.oreilly.com/2015/10/swarm-v-fleet-v-kubernetes-v-mesos.html.
● ‘Mini-Mesos: What’s a Nice XPer Doing in a Company Like This?’, Jamie Dobson, http://thenewstack.io/mini-mesos/.
Tags: app developer, Application policy, containers, Microservices, Network Containers
We recently sat down with IDC analyst, Jed Scaramella, to talk about an interesting and accelerating trend in data center technology: composable infrastructure. With UCS M-Series servers, Cisco has taken an important step forward in this space. To help frame things up, we asked Jed for his take on the market drivers and customer needs fueling innovation. We’ve broken the conversation with Jed into a series and hope to shed some light on how this will re-shape computing architecture and the opportunities for IT.
Tags: Composable Infrastructure, containers, data center, devops, orchestration, Servers, UCS, UCS m-series
In next-generation application infrastructure, users need a better experience and a reduction in deployment complexity when embracing containers, PaaS and rapid deployment technologies. IoT and other trends will continue to drive exponentially more traffic, making this need all the more pressing.
To solve this issue, Metaswitch and Cisco are partnering on Project Calico, which focuses heavily on customer needs in the areas of scale, performance, security and developer experience; all of these need addressing to make containers first-class citizens of today’s networks and compute infrastructure.
Building upon a standards-first, IP-per-container topology, Project Calico sets out to improve the ‘status quo’ of container networking and to provide proven integration solutions for existing cloud, service provider and enterprise infrastructures.
We strongly believe container networking will unify next-generation application infrastructure, resulting in a better user experience and reduced deployment complexity for customers looking to embrace containers, PaaS and rapid deployment technologies. This is critical for meeting application performance requirements without adding complexity.
The ability to enable a complete networking strategy, from the end user through the datacenter and container into the application (including policy, QoS, access and security), without sacrificing developer time or increasing complexity, is increasingly necessary to scale to the IoT workloads of tomorrow and to support hybrid-devops development trends.
Users need highly performant networks where policy can be distributed to thousands of containers while maintaining trust, or environments where hundreds of containers are created every second. Manual intervention is no longer an option, so new tools and frameworks are needed; Cisco and Project Calico are collaborating to provide them.
We announced this partnership today in the #MesosCon keynote in Seattle, WA, along with a number of other Cisco partnerships in the Mesos community and an official brand for our open source, Mesos-based microservices infrastructure solution: http://mantl.io
Following the keynote, we took some views from project members:
Matt Johnson, Innovation Architect within Cisco’s Cloud CTO team noted: “After looking at the gaps in the current [container networking] landscape, our internal solutions came very close to Metaswitch’s existing work with Project Calico. Instead of splitting the community, we feel it makes sense to work together to innovate at a quicker pace.
Cisco believes container networking should have a simple, standards-based integration story with existing network and compute topologies, supporting enterprise, service provider and cloud into the future. We feel that Project Calico’s ethos mirrors this strategy and look forward to working more closely with the team.”
Andy Randall, general manager of Metaswitch’s networking business unit and head of Project Calico, added: “Project Calico is rapidly establishing itself as the leading virtual networking solution for at-scale, production container networks. We are thrilled that Cisco has decided to join forces with this effort, accelerating the project’s velocity and helping to address the devops community’s urgent need for a simplified, standardized networking solution across multiple cloud and datacenter infrastructures.”
Check back for more news as the partnership progresses. We will be showcasing the results in our open source microservices project, mantl.
Tags: Application Centric Networking, Calico, Cisco cloud, containers, devops, Mantl, Microservices, Network Containers, shipped