
Microservices Infrastructure – Mantl – Release 0.4

During the OpenStack Summit last week, we released Mantl 0.4. In this blog post I’d like to go into more detail about the release. But first, let me explain what Mantl is – and what it is not.

System Integration as Open Source

Mantl is a layered stack that takes care of system integration. It does this by using tools at different layers – Terraform to provision virtual machines, and Apache Mesos and Kubernetes for cluster management. Higher-level services are handled by tools such as Consul for service discovery, or by custom Apache Mesos frameworks, which are currently used for data processing.

You could say that Mantl creates the “glue” that enables hybrid cloud. This is too dry an explanation for us, though. The truth is that Mantl has three design goals: build, deploy and run.

  • Firstly, it aims to shorten the development cycle. Most programmers recollect feelings of joy from when they first coded. However, as web development rose in conjunction with the monolith, coding became as much, if not more, about configuration management as it was about application development. The lengthening of the feedback cycle, as well as not being much fun, seriously stunted productivity.

Currently it’s the same for cloud applications. Developers spend excessive amounts of time provisioning machines, opening ports and managing clusters when they could be developing their applications. One of the tenets of Mantl is that it creates a ‘place to innovate’. It does this by making the cloud invisible, thus allowing developers to do what they do best: build innovative applications and get them into users’ hands as quickly as possible.

  • Secondly, Mantl aims to gently coach developers, helping them to write cloud native applications. Many developers, understandably, design their first cloud applications as they would have designed their old three-tier systems. With a gentle opinion, Mantl nudges developers towards containerized services and multi-language systems, while at the same time creating a bridge between the traditional and the cloud native.
  • Thirdly, Mantl aims to make interaction with the cloud as simple as possible. Famously, Joel Spolsky said that all abstractions leak. What this means is that you can never completely hide the underlying abstraction: virtual machines are bound by the hardware they run on; compilers are bound by underlying machine architectures. It’s the same for cloud: you cannot totally abstract the platform away. However, if you must interact with it, you should do so at the right level of abstraction. Mantl provides a number of tools that make this easier. It relies on Docker containers and Terraform, for example, but also provides custom tooling, such as MiniMesos.

In summary, Mantl coaches developers, shortens the development life cycle and provides abstractions at the appropriate levels. In addition to this, it provides data tooling.

Let’s now look at some of the innovations from release 0.4.


Mantl 0.4 includes a new WebUI that connects to the various applications (Mesos / Marathon / Chronos / Consul). For example, users can now access Mesos agent logs through an authenticated UI.

Backed by Consul service discovery, the new UI automatically connects to the correct Mesos masters and agents.
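As a hedged illustration of what that discovery looks like, the sketch below uses Consul’s standard HTTP catalog API to look up service instances. The Consul address and the service name (‘mesos’) are assumptions; the actual names depend on how your cluster registers its services.

```python
import requests

CONSUL = "http://localhost:8500"  # any node running a Consul agent

def find_service(name):
    """Ask Consul's catalog for every registered instance of a service."""
    resp = requests.get(f"{CONSUL}/v1/catalog/service/{name}")
    resp.raise_for_status()
    return [(svc["Address"], svc["ServicePort"]) for svc in resp.json()]

# 'mesos' is an illustrative service name, not necessarily what Mantl registers.
for addr, port in find_service("mesos"):
    print(f"Mesos master candidate: {addr}:{port}")
```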


We’re very excited to announce the first release of Mantl API.

Mantl API provides a new way for you to manage Mantl clusters. With the first release, you can easily install pre-built applications and Mesos frameworks. With a single API call, you can now spin up Cassandra on your Mantl cluster.
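As a sketch of what that single call might look like, the snippet below follows the install endpoint shape documented in the mantl-api README at the time (POST /1/install with a package name); the host, port, credentials and TLS settings are placeholders for your own cluster.

```python
import requests

MANTL_API = "https://control.example.com:4001"  # placeholder address

resp = requests.post(
    f"{MANTL_API}/1/install",
    json={"name": "cassandra"},  # pre-built package to install
    auth=("admin", "secret"),    # replace with your cluster's credentials
    verify=False,                # example only; verify certificates in practice
)
resp.raise_for_status()
print("Cassandra install submitted:", resp.status_code)
```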

We think Mantl API will be useful for anyone who is currently running Mesos.


GlusterFS support

Support for deploying GlusterFS as a shared filesystem has been added.

DNS provider support

We’ve added example code to configure DNS registration of Mantl nodes in DNSimple. Thanks to contributors, we will be adding support for other DNS providers like Route 53 and Google Cloud DNS. We’ll make these more configurable when Terraform supports conditional logic.
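Within Mantl this registration is done through Terraform, but for illustration the hedged sketch below creates the equivalent A records directly against DNSimple’s REST API. The account id, token, zone and record fields are placeholders, and the exact API shape may differ between DNSimple versions.

```python
import requests

TOKEN = "dnsimple-api-token"  # placeholder
ACCOUNT = "1234"              # your DNSimple account id
ZONE = "example.com"

def register_node(hostname, ip):
    """Create an A record pointing hostname.example.com at a Mantl node."""
    resp = requests.post(
        f"https://api.dnsimple.com/v2/{ACCOUNT}/zones/{ZONE}/records",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": hostname, "type": "A", "content": ip, "ttl": 300},
    )
    resp.raise_for_status()

register_node("control-01", "203.0.113.10")
```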

Calico IP per container networking (tech preview)

Calico is a new virtual networking solution that provides IP-per-container functionality. Calico connects Docker containers over IP, no matter which worker node they are on.

Data Tooling Built In

The ELK stack is built into Mantl as Apache Mesos frameworks. This means that developers can use Mantl’s Terraform modules to provision a cluster, set up the system, and immediately start building data-driven applications.

On its own, this functionality is powerful. However, because Mantl uses Apache Mesos frameworks for its data tooling, it can (and does) take advantage of Mesos’ scheduling and hardware utilization features. In addition to this, the frameworks provide extra functionality.

Let’s look at three features of the ElasticSearch framework. Firstly, the framework allows the cluster to be scaled via a GUI – it thus provides the right level of abstraction for developers to interact with the cluster. Secondly, it provides a visualization of the cluster, including where the PRIMARY and REPLICA shards are located. Thirdly, through the GUI, developers can search the cluster, which is handy for testing and debugging.

Please note, although these features are in progress, they are currently on the experimental branch.
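Because the framework runs standard Elasticsearch nodes, ordinary queries should work against the cluster’s HTTP endpoint. The sketch below runs a simple match query; the host, index name (‘shakespeare’) and field name are illustrative assumptions rather than anything the framework dictates.

```python
import requests

ES = "http://elasticsearch.example.com:9200"  # placeholder endpoint

resp = requests.get(
    f"{ES}/shakespeare/_search",
    json={"query": {"match": {"text_entry": "to be or not to be"}}},
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("text_entry"))
```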


Image 1 – ElasticSearch framework GUI with the works of Shakespeare on a three-machine cluster.

The Mantl Developer Tools – MiniMesos
One of the problems with Apache Mesos is that it’s hard to set up. In his O’Reilly article, “Swarm v. Fleet v. Kubernetes v. Mesos”, Adrian Mouat says that ‘Mesos is a low-level, battle-hardened scheduler that supports several frameworks for container orchestration including Marathon, Kubernetes, and Swarm’. However, he goes on to say that for small clusters it may be an ‘overly complex solution’.

Mantl uses Mesos because it’s battle-hardened. But since one of Mantl’s goals is to make interaction with complex tools as simple as possible, the teams building Mantl created MiniMesos.

MiniMesos provides an abstraction layer over Apache Mesos. It allows developers to run, test and even share their clusters. Since MiniMesos can bring a cluster up in milliseconds and lets developers test their code before checking in, it radically shortens the development cycle. Importantly, MiniMesos can be used from the command line or via its API, making automated system testing easy.
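A minimal sketch of that automation, assuming the MiniMesos CLI is on the PATH: the up and destroy commands bring a Dockerized cluster up and tear it down around a test run. Exact flags vary between MiniMesos versions.

```python
import subprocess

def with_mesos_cluster(test_fn):
    """Run a test function against a throwaway MiniMesos cluster."""
    subprocess.run(["minimesos", "up"], check=True)           # boot masters/agents in Docker
    try:
        test_fn()                                             # system tests go here
    finally:
        subprocess.run(["minimesos", "destroy"], check=True)  # always tear down

with_mesos_cluster(lambda: print("cluster is up; run framework tests here"))
```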

MiniMesos now has its own Twitter account and website. It is one (of many) innovations to come out of the Mantl program and has captured the imagination of the community. Pini Reznik, CTO of Container Solutions, which is part of the team working on Mantl, says that ‘MiniMesos is to Apache Mesos what Docker is to LXC’.


Image 2 – MiniMesos Command Line Interface as it is implemented in Mantl 0.4. More commands to come, including ‘install’ for quickly adding frameworks.

Check out the video on MiniMesos.

Use Cases
There are many use cases for Mantl. One of the most interesting patterns emerging is around IoT. At DockerCon in November, we hope to reveal the Wheel of Fortune application. The Wheel of Fortune connects a physical wheel to a REST endpoint. The endpoint is part of an application that scales automatically and displays the data via a web application.
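The application itself hasn’t been published, but a minimal sketch of the kind of REST endpoint the wheel could post spin events to looks like this; the port and payload shape are purely illustrative.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class SpinHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print("wheel landed on:", event.get("segment"))  # hand off to storage/analytics
        self.send_response(202)  # accepted for asynchronous processing
        self.end_headers()

HTTPServer(("", 8080), SpinHandler).serve_forever()
```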

At first glance the Wheel of Fortune may seem like a bit of fun. However, collecting data, big or otherwise, from the IoT for storage and analysis is a key aim of Mantl. Because Mantl abstracts the underlying infrastructure away or makes it invisible, developers can get busy building and deploying their big data applications without worrying about system integration.

Another interesting use case is hybrid DevOps: enterprises develop their applications leveraging Cisco Shipped, the way they always have, then use Mantl to deploy those applications on any external cloud environment supported by Mantl (AWS, GCE, DigitalOcean, Rackspace, Cisco Cloud), within a CI/CD framework that enables internal and external services to be leveraged by the application.

What’s next

We are making Mantl more modular, so that you can select the scheduling, logging and networking components you want to deploy.

The team is also committed to automated testing, and we’ll be testing Mantl against multiple cloud providers daily.

Features on the roadmap include:

  • Better HAProxy support
  • Improved Docker storage leveraging Cisco Contiv
  • Full integration of HashiCorp Vault
  • Kubernetes/OpenShift support
  • Modular networking leveraging Cisco Contiv
  • Simplified API management
  • Application policy intent leveraging Cisco Contiv
  • New deployment and management tools

Modern enterprises face three often competing tensions. Firstly, they have to learn how to build cloud native applications. This involves much more than recreating monoliths in the cloud; it involves changes in process as well as in structure. As enterprises encompass small and medium-sized companies in their supply chains, they need a structure that supports language-agnostic microservices.

Secondly, the challenge of big data is calling all companies. Enterprises not only need to tap into the power of data scientists and developers, but they also have to actively work around organizational scar tissue. It is impossible to work with large amounts of data, and to test new algorithms against production data, whilst carrying decades’ worth of old processes and procedures around. The new enterprise can be agile and take advantage of big data; what it can’t be is bureaucratic and still take advantage of big data – the two simply cannot coexist.

Finally, all enterprises must deal with governance. This includes security, operations and a shift towards DevOps or NoOps.

Mantl helps enterprises resolve the tension between these three challenges. Mantl enables repeatable and simple deployment procedures through its use of programmable infrastructure tools like Docker and Terraform. Mantl promotes the microservice architecture and by default supports systems built in multiple languages by multiple teams, which means that enterprises can take advantage of an extended, horizontally aligned supply chain. Finally, Mantl is ready for both IoT and big data. Through its use of abstraction, programmers and data scientists can focus on what they do best whilst leaving system integration to Mantl.

● Mantl’s website
● MiniMesos’ website
● Cisco Shipped website
● Cisco Contiv website
● ‘The Law of Leaky Abstractions’, Joel Spolsky
● ‘Swarm v. Fleet v. Kubernetes v. Mesos’, Adrian Mouat
● ‘Mini-Mesos: What’s a Nice XPer Doing in a Company Like This?’, Jamie Dobson


Red Hat and Cisco bring Application Policy to OpenStack environments

On January 13, 2015, Cisco will celebrate a year of industry adoption of Application Centric Infrastructure (ACI), a ground-breaking SDN architecture. It will include a public webcast with ACI customers and ecosystem partners describing a range of new solutions that dramatically simplify data center and cloud deployments. One of these inaugural partners was Red Hat, the leading provider of open source solutions for enterprise IT. Since the ACI launch, Cisco and Red Hat have been working on extending the application policy model, at the heart of Application Centric Infrastructure, to OpenStack. Here is a preview of the Red Hat solution.

Cloud deployments of new mobile, social, and big data applications need a dynamic infrastructure to support higher demand peaks, more distributed users, varying performance needs, 24×7 global usage, and changing security vulnerabilities. These applications need a mix of virtualized and dedicated “bare-metal” resources, to run economically at scale with performance and availability.

To meet these needs, Cisco, Red Hat, and other companies have jointly developed Group Based Policy – a common, open policy language that expresses the intent of business and application teams separately from the language of the infrastructure. Group Based Policy offers continuous policy governance while applications are deployed, scaled, recovered and managed for threats. It is ideal for rapidly deploying elastic, secure applications through OpenStack, such as CRM, eCommerce, big data, financial reporting, and corporate e-mail.

IT organizations can get several benefits:

  • Dramatically accelerate deployment of business applications and services through OpenStack.

  • Maintain enforcement of business and application policies during frequent changes to scale, tenants, and the infrastructure.

  • Simplify DevOps release automation – moving application changes to production.

  • Ideal for hybrid cloud – preserve user intent and business policies across different infrastructures.

  • Prevent shadow IT – empower internal IT to match the agility of the public cloud while complying with corporate controls.

Network administrators can get additional benefits when Group Based Policy is combined with the full capabilities of Cisco Application Centric Infrastructure, including seamless management of heterogeneous infrastructure, policy based network automation, real-time troubleshooting and performance optimization.


Group Based Policy (GBP) is implemented through a new APIC Group Based Policy plug-in for OpenStack Neutron, the networking service. Since networking connects all compute and storage endpoints in the data center, it is possible to define groups of endpoints through Neutron that share the same application requirements, regardless of how they are connected. In addition, GBP:

  • Captures dependencies between applications, tiers and infrastructure so that respective teams can evolve underlying capabilities independently.
  • Works with multiple SDN controllers and is extensible to multi-hypervisor infrastructures.
  • Brings application policy-based provisioning to existing networking plug-ins.
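As a hedged sketch of what defining such a group looks like against the Neutron API, the snippet below creates a ‘policy target group’ over GBP’s REST resources. The resource path follows the OpenStack Group Based Policy extension as I understand it, and the endpoint, token and group name are placeholders; fields may differ across releases.

```python
import requests

NEUTRON = "http://neutron.example.com:9696/v2.0"   # placeholder endpoint
HEADERS = {"X-Auth-Token": "keystone-token-here"}  # placeholder token

# A policy target group collects endpoints that share the same application
# requirements, regardless of how they attach to the network.
resp = requests.post(
    f"{NEUTRON}/grouppolicy/policy_target_groups",
    headers=HEADERS,
    json={"policy_target_group": {"name": "web-tier"}},
)
resp.raise_for_status()
print("created group:", resp.json()["policy_target_group"]["id"])
```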

Group Based Policy will be available and supported in the upcoming release of Red Hat Enterprise Linux OpenStack Platform 6. Learn more about Group Based Policy here. And register for Cisco’s webcast on January 13th.






The Benefits of an Application Policy Language in Cisco ACI: Part 2 – The OpFlex Protocol

[Note: This is the second of a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not.  Part 1 | Part 3 | Part 4]

Following on from the first part of our series, this blog post takes a closer look at some of the architectural components of Cisco ACI and the VMware NSX software overlay solution, to quantify the advantages of Cisco’s application-centric policies and demonstrate how the architecture supports greater scale and more robust IT automation.

As called for in the requirements listed in the first part of this series, Cisco ACI is an open architecture that includes the policy controller and policy repository (Cisco APIC), infrastructure nodes (network devices, virtual switches, network services, etc.) under Cisco APIC control, and a protocol for communication between Cisco APIC and the infrastructure. For Cisco ACI, that protocol is OpFlex.

OpFlex was designed with the Cisco ACI policy model and cloud automation objectives in mind, including important features that other SDN protocols could not deliver. OpFlex supports the Cisco ACI approach of separating the application policy from the network and infrastructure, but not the control plane itself. This approach provides the desired centralization of policy management, allowing automation of the entire infrastructure without limiting scalability through a centralized control point or creating a single point of catastrophic failure. Through Cisco ACI and OpFlex, the control engines are distributed, essentially staying with the infrastructure nodes that enforce the policies.



The Benefits of an Application Policy Language in Cisco ACI: Part 1 – Enabling Automation

[Note: This is the first of a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not.  Part 2 | Part 3 | Part 4]

IT departments and lines of business are looking at cloud automation tools and software-defined networking (SDN) architectures to accelerate application delivery, reduce operating costs, and increase business agility. The success of an IT or cloud automation solution depends largely on the business policies that can be carried out by the infrastructure through the SDN architecture.

Through a detailed comparison of critical architectural components, this blog series shows how the Cisco Application Centric Infrastructure (ACI) architecture supports a more business-relevant application policy language, greater scalability through a distributed enforcement system rather than centralized control, and greater network visibility than alternative software overlay solutions or traditional SDN designs.

Historically, IT departments have sought greater automation, as device proliferation has accelerated, to overcome the challenges of applying manual processes to critical tasks. About 20 years ago the automation of desktop and PC management was an imperative, and about 10 years ago server automation became important as applications migrated to larger numbers of modular x86 and RISC-based systems. Today, with the consolidation of data centers, IT must address not only application and data proliferation, but also the emergence of large-scale application virtualization and cloud deployments, requiring IT to focus on cloud and network automation.

The emergence of SDN promised a new era of centrally managed, software-based automation tools that could accelerate network management, optimization, and remediation. Gartner has defined SDN as “a new approach to designing, building and operating networks that focuses on delivering business agility while lowering capital and operational costs.” (Source: “Ending the Confusion About Software-Defined Networking: A Taxonomy”, Gartner, March 2013)

Furthermore, Gartner, in an early 2014 report (“Mainstream Organizations Should Prepare for SDN Now”, Gartner, March 2014), notes that “SDN is a radical new way of networking and requires senior infrastructure leaders to rethink traditional networking practices and paradigms.” In this same report, Gartner makes an initial comparison of mainstream SDN solutions that are emerging, including VMware NSX and Cisco ACI. There has been some discussion about whether Cisco ACI is an SDN solution or something more, but most agree that, in a broad sense, the IT automation objectives of SDN and Cisco ACI are basically the same, and some of the baseline architectural features, including a central policy controller, programmable devices, and use of overlay networks, lead to a useful comparison.

This blog series focuses on the way that Cisco ACI expands traditional SDN methodology with a new application-centric policy model. It specifically compares critical protocols and components in Cisco ACI with VMware NSX to show the advantages of Cisco ACI over software overlay networks and the advantages of the ACI application policy model over what has been offered by prior SDN solutions. It also discusses what the Cisco solution means for customers, the industry, and the larger SDN community.



Redefining the Power of IT with Application Centric Infrastructure

After countless brainstorming sessions, code reviews, lab trials, scores of NDAs and nearly two years of intense speculation from media, analysts and the internet community – it is finally here! Today, Cisco is pulling back the curtains to reveal details of the vision of Application Centric Infrastructure (ACI) announced in June 2013. With shipping products as part of the announcement today, Cisco is also taking the first steps in making this vision a concrete reality. In the process, Insieme Networks also returns to become a wholly owned subsidiary of Cisco.

For those tuning into the press conference and webcast today, you will see John Chambers, Rob Lloyd and Insieme executives get into the specifics of ACI, with the event being hosted out of the historic Waldorf Astoria in New York. You will also see Cisco’s partners and customers share both the stage and a common vision.

So, after months of silence, there will be quite a bit of information sharing – perhaps even information overload. This is an announcement with innovation at multiple levels, and even for the tech savvy it will take time to fully understand and appreciate the architecture and the benefits it brings.

I wanted to share a few key concepts, innovations, and highlights of the announcement today. We will delve into additional details and dissect these pieces over the next few weeks on this blogging platform as well as on the public website, which will host a lot of the structured content.

1. The concept of Application Centric Infrastructure

We put together a short video to distill the concepts of ACI. It encompasses a lot of what existing networks do today, as well as emerging SDN concepts (regardless of how SDN is defined), and goes well beyond what anyone else is offering today. You will see some critical differentiators here:

  • De-coupling of application and policy from IP infrastructure
  • Ability to define application network profiles and apply them
  • Integration of physical and virtual infrastructure elements with end-to-end visibility
  • Openness at all levels
  • Scale, with security

2. Application Policy Infrastructure Controller (APIC)

The Application Policy Infrastructure Controller (APIC) is a new appliance that will be the heart of the ACI fabric. While the actual product will ship around Q2 of next calendar year, an APIC simulator will be made available on a controlled basis for customers and partners to get familiar with it, and additional information will continue to be made available. Unlike most software-only controllers in the market today, which have little ability to exploit the capabilities of hardware, APIC provides a holistic, system-level view and an ability to tap into the capabilities of the underlying infrastructure. While it will initially be paired with the Nexus 9000, the APIC will be expanded to support other parts of the portfolio as well as other infrastructure building blocks.

The APIC utilizes a centralized policy model, with an application network profile and an open architecture, that allows application needs to be defined and mapped to the infrastructure, making it application-aware.

3. Nexus 9000 – Expanding the Nexus switching family

We’re expanding the highly successful Nexus family with the next “big bad boy” – the Nexus 9000. This will initially come in two models – the Nexus 9500 and the Nexus 9300, with the former shipping now. It has a variety of innovations across all of the “5 Ps”: (i) an extremely attractive Price point, optimized for 1G to 1/10G in the access layer and for 10G to 40G migration in the aggregation layer; (ii) industry-leading Performance, with 1.92 Tbps per line card and 100G readiness; (iii) significantly higher non-blocking Port density; (iv) flexible Programmability, with a JSON/XML API and a Linux container for customer apps; and (v) Power efficiency, with an innovative design that has no mid-plane/backplane, resulting in 15% greater power and cooling efficiency.

The image shows the “see-through” design of the Nexus 9500, which has no traditional mid-plane.
To see the 3D design of the Nexus 9500, click here

The Nexus 9000 is designed from the ground up to be ACI-ready, with a combination of merchant silicon and Cisco custom ASICs to deliver the “5 Ps”.

In addition to the Nexus 9000, keep a look out for the Application Virtual Switch (AVS).


4. 40G BiDi Optics

As customers migrate to 10/40G over the next few years, the cost of laying new fiber and overhauling the optics is a tremendous drag and raises barriers to 40G adoption. I wrote about multi-layered innovations – this is one of them, at the component level. The 40G BiDi optic lets customers preserve their existing 10G cables, resulting in tremendous time savings, cost savings (labor and fiber) and improved time to market for the upgrade. Bandwidth upgrades are among the top reasons driving network refreshes, and this innovation (a Cisco exclusive) produces remarkable results.

5. The Partner Ecosystem

It is not possible for one company to address all the challenges manifesting in the data center on its own, no matter how revolutionary the architecture is or how radical the innovations are. This is where a rich ecosystem of partners has stepped in (see the technology leaders rally here), each of them market and innovation leaders in their respective domains, to make the vision of ACI all the more real and consumable.

Their participation reflects both a shared commitment to transforming the data center infrastructure and the open architecture of the ACI approach in general, building on the principles of the Cisco Open Network Environment (Cisco ONE) while taking it to other aspects of the infrastructure. You may expect to see a lot of demos as the APIC becomes generally available next year, even as service offerings around ACI become much richer, as evidenced by Scott’s blog linked below.

Please stay tuned to this blog space and the website for additional information over coming weeks and months. As always we would like your comments and constructive criticism as we together help redefine the power of IT.


Additional Resources 

John Chambers Blog “Transforming IT for the Application Economy”
Chris Young Blog on ACI Security
Scott Clark Blog on ACI Services
ACI PR announcements
ACI Partner Ecosystem 

