If you followed my previous post, you are now quite familiar with how Docker containers work. However, we only covered how to work with them manually. Once you move into production environments, you quickly realize that manual management is not a viable option.
There are multiple reasons to avoid manual management of containers, including:
- Reliability: manual human interaction is prone to errors
- Scalability: the number of containers can grow very large, very quickly (four years ago Google was already running 2 billion containers per week)
- Elasticity: containers need to be created or destroyed dynamically to accommodate changing workload requirements
Automatically Manage All Aspects of Your Application Containers
Wouldn’t it be nice to have a tool that automatically manages all aspects of your application containers? That is exactly what a container scheduler does. And more!
A container scheduler is responsible for multiple tasks, including:
- Making sure application containers run according to the desired state, expressed in a declarative format, while honoring defined constraints and available resources
- Providing fault tolerance and high availability, adapting to any event and making sure the desired state is (almost) always fulfilled
- Creating a pool of available resources (servers) and offering them as a single abstracted layer
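To make the "desired state in a declarative format" idea concrete, here is a minimal sketch of a Kubernetes Deployment manifest. All names, the image, and the resource figures are illustrative assumptions, not values from this post; the point is that you declare *what* you want and the scheduler keeps reality matching it:

```yaml
# Hypothetical Deployment: declares a desired state of 3 replicas
# with resource constraints; the scheduler continuously reconciles
# the cluster toward this state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend          # hypothetical application name
spec:
  replicas: 3                 # desired state: always 3 running copies
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:1.25     # illustrative image and tag
        resources:
          requests:           # resources the scheduler reserves on a node
            cpu: 100m
            memory: 128Mi
          limits:             # hard caps enforced at runtime
            cpu: 250m
            memory: 256Mi
```

If a node fails or a container crashes, the scheduler notices the gap between desired and actual state and starts replacement containers elsewhere in the resource pool, with no human involved.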
If you think about it, this covers much of the responsibility you would expect from an Operations team, so the scheduler is essentially meant to work like the best Ops team you could hire.
There are multiple options to use as your container scheduler, with the most common ones being Apache Mesos, Docker Swarm, and Kubernetes. All of them are valid and useful, so I would encourage you to explore them and see which one best fits your own requirements.
You might even think the best option to manage Docker containers would be their own native scheduler, Docker Swarm. Swarm does follow an intuitive approach that makes it easy to learn and use, but it does not provide the level of flexibility that Kubernetes offers. In fact, at DockerCon last year Docker announced native support for Kubernetes, which has become quite the industry standard.
Kubernetes Is the De Facto Standard for Container Scheduling
Kubernetes (aka k8s) is an open-source orchestrator donated to the community by Google, building on Google's long experience running containers internally. It has become the de facto standard for container scheduling, and it can scale up to the biggest deployments or down to a cluster of Raspberry Pi boards (more on this in subsequent posts).
It provides everything you need to build and deploy scalable distributed systems based on containers communicating via APIs. It also includes all the features required for scalability, reliability, and high availability, even while rolling out new versions of your applications.
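That last point, migrating to a new version without downtime, uses the same declarative model. As a sketch (assuming a Deployment named `web-frontend` with a container named `web`, both hypothetical names), a rolling update can be triggered and observed from the command line:

```shell
# Point the Deployment at a new image version; Kubernetes replaces
# containers incrementally, keeping the application available throughout.
kubectl set image deployment/web-frontend web=nginx:1.26

# Watch the rollout until every replica runs the new version.
kubectl rollout status deployment/web-frontend

# If the new version misbehaves, revert to the previous state.
kubectl rollout undo deployment/web-frontend
```

Because the scheduler only ever reconciles toward the declared state, a rollback is just another state change, not a special emergency procedure.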
Learning Kubernetes requires spending some time first on materials that describe its foundational principles, and only later examining how to install and use it for your deployments.
My own recommended path of learning would be the following:
- Deploying and Scaling Microservices with Docker and Kubernetes, by Jérôme Petazzoni – Great introduction to Kubernetes concepts and how the overall solution works.
- Kubernetes Tutorials – Comprehensive list of tutorials, showing how to accomplish useful goals with real-world examples.
- Kubernetes: Up and Running – Fantastic reference book to get yourself familiar with Kubernetes, from some of the key people involved in its original development.
You might not have the infrastructure (yet) to build your own Kubernetes cluster. Don’t worry, we’ve got you covered! Take a look at the following options, available for free on the Internet:
- Cisco DevNet Kubernetes sandbox: a comprehensive, reliable environment with 1 master + 3 worker nodes, based on VMs, pre-configured, accessible via SSH, and suitable for long tests (multi-day reservations). It also includes other interesting features, like Contiv for container networking.
- Katacoda Kubernetes playground: a very basic k8s cluster with just 1 master + 1 worker node, pre-configured, accessible via a web-based terminal, and suitable only for quick tests, as the environment is very short-lived (5-10 minutes).
- Play-with-kubernetes: a basic k8s cluster with multiple Docker-based nodes (Docker in Docker); you will need to initialize the nodes and set up the required networking yourself. It is accessible via a web-based terminal and short-lived (4 hours).
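For Play-with-kubernetes in particular, bootstrapping the cluster is left to you. A typical sequence looks roughly like the following (the playground prints its own exact commands after you create a node, so treat these as a sketch; the join token and hash below are placeholders, not real values):

```shell
# On the first node: initialize the control plane, advertising
# the node's own IP address to the other nodes.
kubeadm init --apiserver-advertise-address $(hostname -i)

# Next, install a pod network plugin (a CNI manifest of your choice);
# the playground suggests one once init completes.

# On each additional node: join the cluster using the token printed
# by 'kubeadm init' (all values in angle brackets are placeholders).
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

Once all nodes have joined, `kubectl get nodes` on the master should list every member of the cluster.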
Now you have all the materials and environments you need to become really fluent with Docker and Kubernetes. Please invest some time in them over the next two weeks, and we will start leveraging everything you learn in my upcoming posts. I guarantee you… it will be worth it!