
Authors
Ken Owens (@kenowens12), Keith Chambers (@keithchambers), and Jason Plank (@_japlank_)

Over the last few years, there has been a great deal of discussion about new ways of designing software applications. In particular, “Microservice Architecture” has emerged as a way to design an application as a set of individual components that together make up the whole. Refer back to this recent blog regarding the impact of microservices and containers on application enablement for the enterprise. Many attempts to define the architecture have been made, but the complexity of software platforms and differing viewpoints on the necessary underlying components have not yet produced an agreed-upon solution.

There are several aspects that many agree on:

  • the ability to deploy applications utilizing resources across multiple datacenters (and even clouds),
  • deploying in a decentralized control model,
  • supporting intelligent endpoints,
  • heavy automation, and
  • the on-demand nature of deploying these services to support business requirements and scale.

As you can imagine, one of the most popular conversations we have with customers (application development teams) revolves around deploying multi-datacenter distributed applications. As such, one of the key services we wanted to offer on top of our Cloud Platform at Cisco is the ability to deploy microservice applications that use Docker containers, integrate with the frameworks and development lifecycles that already exist, and provide a transformational CI/CD integration platform for future application development.

As we evaluated components for the framework of our architecture, we wanted to make sure that the developer experience remained consistent across all Cisco cloud, private cloud, and Intercloud partner platforms, and that our customers could deploy these applications across multiple datacenters (Intercloud enabled) at a time.

Architectural Components

Consul: Consul provides service discovery and distributed configuration management, and it is datacenter-aware. In our architecture, it is used primarily to coordinate service discovery, specifically through its built-in DNS interface.
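
For example, once a service is registered, any node can resolve its healthy instances through Consul’s DNS interface, which listens on port 8600 by default. A minimal sketch, assuming a service registered under the hypothetical name “web”:

    # Query Consul's DNS interface for healthy instances of the "web" service
    dig @127.0.0.1 -p 8600 web.service.consul SRV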

Marathon: Marathon was the first such framework launched to run on top of Mesos. Marathon is essentially a scheduler for long-running applications: it receives resource offers from Mesos and uses them to start tasks. Chronos, a distributed cron-like scheduler, is a separate piece of the architecture that can itself run as a Marathon task.
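
To illustrate, applications are described as JSON and submitted to Marathon’s REST API. A minimal sketch, assuming Marathon is reachable at the hypothetical address marathon.example.com:8080:

    # Describe a simple long-running application as JSON
    cat > hello-app.json <<'EOF'
    {
      "id": "hello",
      "cmd": "while true; do echo hello; sleep 10; done",
      "cpus": 0.1,
      "mem": 32,
      "instances": 2
    }
    EOF

    # Submit it to Marathon, which schedules it onto Mesos resource offers
    curl -X POST -H "Content-Type: application/json" \
      http://marathon.example.com:8080/v2/apps -d @hello-app.json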

Mesos: Apache Mesos is used to abstract resources (CPU, memory, storage, and so on) from physical and virtual machines. You can think of Mesos as a kernel of sorts; its role in the architecture is to abstract resources and to give applications API access for resource management across datacenter and cloud environments.

Mesos relies on Apache ZooKeeper for coordination and state: keeping track of configuration, naming, and synchronization, and providing these services to distributed applications. In our architecture, we focus specifically on coordination among the Mesos and Marathon nodes.
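
Concretely, the Mesos masters and frameworks locate one another and elect a leader through a ZooKeeper URL. A minimal sketch, assuming three control nodes at the hypothetical addresses zk1 through zk3:

    # Point each Mesos master at the ZooKeeper ensemble for leader election
    mesos-master --zk=zk://zk1:2181,zk2:2181,zk3:2181/mesos \
      --quorum=2 --work_dir=/var/lib/mesos

    # Marathon discovers the current Mesos leader the same way
    marathon --master zk://zk1:2181,zk2:2181,zk3:2181/mesos \
      --zk zk://zk1:2181,zk2:2181,zk3:2181/marathon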

Registrator: As the name suggests, Registrator watches for new Docker containers and creates entries for them in Consul. This makes them discoverable and easy to manage.
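
A minimal sketch of running Registrator on a compute node, assuming the gliderlabs/registrator image and a local Consul agent listening on port 8500:

    # Registrator watches the Docker socket and registers each container's
    # published ports with the local Consul agent
    docker run -d --name=registrator --net=host \
      -v /var/run/docker.sock:/tmp/docker.sock \
      gliderlabs/registrator consul://localhost:8500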

Now that we have identified some of the components used in the architecture, we’ll provide some excerpts from the GitHub project to give you a view with pictures.

Control Nodes

[Figure: Control Node]

Control nodes are responsible for managing resources in a single datacenter. Each control node runs Consul for service discovery, a Mesos master for resource scheduling, and Mesos frameworks such as Marathon.

It’s best to deploy in clusters of 3 or 5 control nodes to achieve the highest availability of services in a single datacenter.

Compute Nodes

[Figure: Compute Node]

The compute nodes, as you can imagine, are deployed to launch containers and other Mesos-based workloads. Registrator is used to update Consul as containers are provisioned and deprovisioned.

Single Datacenter HA

[Figure: Single Datacenter]

The base platform contains control nodes that manage the cluster and any number of compute nodes. Once deployed, you can launch Docker containers with Marathon. The application containers automatically register themselves in DNS (via Registrator and Consul) so that other services can locate them.
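
To make that concrete, here is a sketch of launching a Docker container through Marathon and then resolving it through Consul DNS; the image, app id, and Marathon address are illustrative:

    # Launch three nginx containers through Marathon
    cat > nginx-app.json <<'EOF'
    {
      "id": "nginx",
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "nginx:latest",
          "network": "BRIDGE",
          "portMappings": [ { "containerPort": 80 } ]
        }
      },
      "cpus": 0.25,
      "mem": 128,
      "instances": 3
    }
    EOF
    curl -X POST -H "Content-Type: application/json" \
      http://marathon.example.com:8080/v2/apps -d @nginx-app.json

    # Registrator picks up the new containers; other services find them in DNS
    dig @127.0.0.1 -p 8600 nginx.service.consul SRV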

Multi-Datacenter HA

[Figure: Multiple Datacenters]

You can also deploy to multiple datacenters. Each datacenter contains a set of control nodes and compute nodes. The architecture is “shared nothing” with the exception of Consul: the Consul nodes in all datacenters are automatically joined together to form a single WAN gossip pool. This enables an application to locate instances in its own datacenter, as well as instances in other datacenters, using either DNS or the Consul HTTP API.
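
For instance, Consul’s DNS naming scheme lets a service address instances in a specific datacenter by name; the service and datacenter names below are illustrative:

    # Resolve instances of "web" in the local datacenter
    dig @127.0.0.1 -p 8600 web.service.consul SRV

    # Resolve instances of "web" in a remote datacenter named "dc2"
    dig @127.0.0.1 -p 8600 web.service.dc2.consul SRV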

The providers that we currently support with this project are OpenStack and Vagrant.
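
For the Vagrant provider, for example, bringing up a local test cluster is typically a matter of cloning the repository and starting the machines; a sketch, assuming Vagrant and a local hypervisor are already installed:

    # Fetch the project and bring up a local cluster for experimentation
    git clone https://github.com/CiscoCloud/microservices-infrastructure.git
    cd microservices-infrastructure
    vagrant up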

We have recorded two demos for you:

Getting started with microservices-infrastructure on OpenStack

Getting started with microservices-infrastructure on Vagrant
https://youtu.be/0riMpt_zUDY

This second demo is more OpenStack-centric as it pertains to our microservice architecture.

Find step-by-step instructions on deploying your first application services to @CiscoCloud at:

https://github.com/CiscoCloud/microservices-infrastructure/pull/74

As you can see, we’ve brought a framework for deploying next-generation, container-based applications to our Cloud. Stay tuned to the Cloud blog for updates and additional demos of launching and consuming these services!

We plan to enhance and expand the architecture and certainly would love to see support from the community. Please point your browser towards https://github.com/CiscoCloud/microservices-infrastructure/issues with any feedback!

Thank you and stay tuned for updates!

Authors

Kenneth Owens

Chief Technical Officer, Cloud Infrastructure Services