Containers excel during the deployment stage by simplifying the process and reducing the challenges you are likely to face. To understand how, consider a shipping analogy. Before standardization, goods were shipped in crates, barrels, and boxes of different sizes, all loaded onto ships by hand, and unloading had to be done carefully to avoid items falling over. When standardized shipping containers were introduced, things changed: loading and unloading became systematic, with far less risk of losing items. The Docker deployment model is similar. Every Docker container presents a common interface, and the tooling connects containers to servers without needing to know how they operate internally. Once your application has been developed, Docker's own tooling takes care of moving it to a single server, while moving it to multiple servers requires tools external to Docker.
When migrating a Dockerized application to a production environment, there are three steps to follow. The first is building and testing the application in the development environment. The second is building an image so that it can be tested and deployed. The final step is deploying the image to a server. As your workflow matures, you will find the boundaries between these steps begin to blur.
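The three steps can be sketched as a sequence of commands. This is a minimal outline, assuming a hypothetical application image named myapp, a placeholder registry at registry.example.com, and a placeholder server prod-server-1; substitute your own names and build commands:

```shell
# Step 1: build and test the application in the development environment
# (substitute your project's own build and test commands)
make test

# Step 2: build an image and push it so it can be tested and deployed
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0

# Step 3: deploy the image to a server
ssh prod-server-1 "docker pull registry.example.com/myapp:1.0 && \
  docker run -d --name myapp registry.example.com/myapp:1.0"
```

In a by-hand deployment, each step is run manually; the tools discussed later in this article automate the later steps.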
The simplest way to deploy an application into production is by hand, but this simplicity comes at the cost of reliability. In a test or development environment, the docker pull and docker run commands are adequate. In a production environment, you need to build reliability into the process. Two elements of a deployment process are critical: repeatability, and the ability to handle application configuration across different environments. The Docker client does not support deployment at scale, so to achieve scalability you can use Docker Swarm or other tools.
When deploying to many Docker daemons, you need to rely on orchestration tools that help manage configuration and deployment. Some available orchestration tools are Helios from Spotify, Centurion from New Relic, and the Docker tooling developed by Ansible. Of the options available, these tools require relatively little effort to set up.
To interact with the servers where containers are running, Helios provides both an HTTP API and a command-line interface (CLI). Helios maintains a history of important events for you, such as deployments, restarts, and changes in application versions. An Apache ZooKeeper cluster is required when using Helios.
Centurion helps you manage host variables, volume mappings, and port mappings when moving containers from a registry to multiple hosts. A Centurion deployment has two steps: first you build a container image and push it to the registry, then Centurion pulls it from the registry onto the different hosts. Ansible goes beyond orchestration, providing broader server management tools as well. Both Centurion and Ansible require a Docker registry to function.
Because Centurion is provided as a Ruby gem, you need an up-to-date installation of Ruby. To install Ruby on Ubuntu, use this command:
sudo apt-get install ruby-full
If you already have the latest version of Ruby, nothing will be installed; otherwise, the latest version will be installed.
To install Centurion, use this command:
gem install centurion
After Centurion has been installed, you can use centurionize to set up the project structure and create a sample configuration file. The centurionize command is used as shown below.
centurionize -p firstproject
Executing the centurionize command has several effects. It makes sure a Gemfile exists, creating one if necessary, and ensures Centurion is listed in it. It also creates the project scaffolding, including the config/centurion directory and a sample configuration file for the project.
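If you are curious what the scaffolding looks like, you can list the generated files. The layout below is what centurionize typically produces; exact contents may vary between Centurion versions:

```shell
# Inspect the scaffolding created by centurionize -p firstproject
find . -type f
# Typical output:
# ./Gemfile
# ./config/centurion/firstproject.rake
```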
Even though a base configuration is scaffolded for you, you still need to customize it. Configuration is specified through a rake task. Among the settings you can specify are container names, container labels, container host names, cgroup resource limits, and network modes.
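A Centurion rake file defines environments as rake tasks that set these values. The fragment below is a hypothetical example in the style of Centurion's configuration DSL; the image name, environment variable, ports, and host names are all placeholders:

```ruby
# config/centurion/firstproject.rake (illustrative placeholder values)
namespace :environment do
  task :common do
    set :image, 'example.com/firstproject'  # image to pull from the registry
  end

  desc 'Staging environment'
  task :staging => :common do
    set_current_environment(:staging)
    env_vars RACK_ENV: 'staging'            # environment variables for the container
    host_port 10234, container_port: 9292   # port mapping on each host
    host 'docker-server-1.example.com'      # hosts to deploy to
    host 'docker-server-2.example.com'
  end
end
```

Each environment task inherits the common settings and adds the hosts and mappings specific to that environment.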
Open the configuration file to see some of the settings that are included. The command below will open it in a text editor.
sudo gedit /home/sammy/config/centurion/firstproject.rake
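With an environment defined in the rake file, a deploy is triggered from the command line. Assuming the project name used in this article and a hypothetical staging environment task in the rake file, the invocation looks like this:

```shell
# Deploy to all hosts listed for the staging environment
centurion -p firstproject -e staging -a deploy

# Or cycle through the hosts one at a time
centurion -p firstproject -e staging -a rolling_deploy
```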
It is not possible to cover every aspect of Centurion in a short blog post like this one. A comprehensive resource on Centurion is available at https://github.com/newrelic/centurion.
The three tools we discussed in the previous section are concerned with coordinating multiple daemons. Another set of tools, distributed schedulers, are concerned with making a network of machines behave like a single computer. You specify policies describing how the application is expected to run, and the system is given autonomy over where and how to run instances. Any application failures are handled by the scheduler. Tools in this category include Apache Mesos, Kubernetes, and Fleet, with Kubernetes and Mesos being the more popular.
In this article, we introduced the Docker model for deploying applications. We noted that deploying to a single host is well taken care of by the Docker tool set, while deploying applications to multiple hosts is more demanding and requires tools outside of Docker. The first type of tool we discussed was orchestration tools, which help manage deployment to multiple hosts. The second category was schedulers such as Apache Mesos and Kubernetes.