Today, UKFast DevOps engineer Tim is here to share his expert knowledge and explain exactly what is meant by the term containerisation.
Modern servers are orders of magnitude more powerful than their predecessors. But they still take up rack space, consume power and require cooling, all of which costs money. Most modern server hardware is barely even ticking over for much of the time, so there’s an understandable desire by companies to squeeze more out of their existing hardware.
One could continue to add more and more services to an existing server, but this raises potential problems in terms of security and performance. We can get around this by using virtualisation and containerisation. Both allow much more efficient utilisation of server hardware by isolating the various applications. The differences are illustrated by the diagram below.
In a virtual machine (VM) environment, each guest has its own operating system (OS), libraries etc. These consume resources on the host. A container, on the other hand, shares the underlying OS of its host. Applications are still isolated from each other, but share resources much more efficiently.
This has important advantages: containers are far lighter than full VMs, start in seconds rather than minutes, and allow many more isolated applications to run on the same hardware.
Docker is an open-source container engine, and the de facto standard platform within the industry. Its behaviour is controlled by a Dockerfile: a step-by-step list of instructions telling Docker how to set up the environment in which to run your programs. Docker uses this to build an image, a self-contained, executable version of your program. Running that image creates a container.
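As an illustration, a minimal Dockerfile might look like the sketch below. The base image and the script name app.py are assumptions for the example, not part of any particular project:

```dockerfile
# Start from an official slim Python base image
FROM python:3.12-slim

# Set the working directory inside the image
WORKDIR /app

# Copy the application code into the image
COPY app.py .

# Command to run when a container is started from this image
CMD ["python", "app.py"]
```

Building and running would then be something like `docker build -t myapp .` followed by `docker run myapp`, where myapp is simply an example image tag.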
Clearly the application running in a container needs to be able to interact with the world outside. So, although containers are, by default, isolated from the host system, various parts of them can be exposed.
For example, you can map network ports on the host to those in the container. It would therefore be possible to have some sort of application sitting within the container listening on a particular port and responding on another. Similarly, it is possible to map directories and files within a container to those on the host system.
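As a sketch of the mappings just described, both are set with flags to docker run; the image name and paths here are purely illustrative:

```shell
# Map port 8080 on the host to port 80 inside the container,
# and mount the host directory ./site at nginx's web root
docker run -d -p 8080:80 -v "$(pwd)/site:/usr/share/nginx/html" nginx
```

A request to port 8080 on the host is then answered by the web server inside the container, serving files that actually live on the host's filesystem.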
Sometimes we need several different applications in different containers running in concert. An example of this might be something like the popular TICK stack. This is a monitoring system consisting of Telegraf, InfluxDB, Chronograf and Kapacitor. Each of these four applications would run in its own Docker container. However, we usually want to control all four collectively.
For this we can use Docker Compose. Each of the four elements of TICK would have its own Dockerfile and container, tied together by a single Docker Compose configuration file. Starting and stopping the entire stack then becomes as simple as running a single docker-compose up command.
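A docker-compose.yml for a stack like this might look roughly as follows. The image tags and port numbers are illustrative placeholders rather than a tested TICK configuration:

```yaml
version: "3"
services:
  influxdb:            # the time-series database the others talk to
    image: influxdb:1.8
    ports:
      - "8086:8086"
  telegraf:            # metrics collector, writes into InfluxDB
    image: telegraf
    depends_on:
      - influxdb
  chronograf:          # web UI for visualising the data
    image: chronograf
    ports:
      - "8888:8888"
    depends_on:
      - influxdb
  kapacitor:           # alerting and data processing
    image: kapacitor
    depends_on:
      - influxdb
```

With a file like this in place, `docker-compose up -d` brings up all four containers together and `docker-compose down` tears them down again.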
Docker Compose allows us to deploy groups of containers on a single host. For larger and more complex deployments, we might use a container orchestration system like Kubernetes. But that’s for another post.
Get even more expert insight from DevOps engineer Tim in our DevOps blog series.