Container Management with Docker
Overlaid Layers
Thanks to the overlay functions supplied by aufs [10] or Device Mapper [11], Docker has an efficient back end for storing containers. Each container, to put it simply, is stored as a transparent layer. The functions are similar to those employed when writing a Live CD (Figure 4). More recently, Btrfs [12] has been playing a role as well with its snapshot function. Details on the different filesystem functions for storing and managing containers can be found in Red Hat's documentation for its Atomic [13] project, a container operating system [14] still under development.
Before I start going into details, I'll clarify things a bit. A Docker image is just a read-only template, a kind of recipe. A base image [15] provides rudimentary operating system functions and can have a parent-child relationship with other images that interact with the base image on various layers. For example, this could be an ownCloud image, which I'll get to later.
As a rule, containers consist of several images that contain all the functionality and dependencies that the content of the container (an operating system, application, or service) needs at runtime. The basic handling of Docker containers is through the command-line interface, relying heavily on Git [16] and easy to learn. The "Containers, Images, and Docker Files" box provides more information on how things work.
Containers, Images, and Docker Files
To better understand how an image is created, look at the description of the ownCloud image in the Docker hub [17]. The Dockerfile tab shows the important parts of the image. An image also includes licensing information and a README file that goes into specifics.
It's pretty clear what this image does and why it's a template. The FROM line at the top identifies the base image being used – in this case, Debian Stable (Figure 5). When the Dockerfile is built, each line beginning with RUN is executed in sequence. First, wget is installed, along with an archive key, the ownCloud repository, and its dependencies.
The template then handles the preconfiguration of the Apache web server, which, in this case, either comes with the base image or needs to be installed later. You can then load this complete image into a container, where the base image is also loaded. By default, containers are isolated through "deny all" policies and can't communicate with the outside world. If you want them to, use the EXPOSE parameter with the port numbers to open. The VOLUME parameter mounts a location from the host's filesystem or from another container at a specified path. Since Docker 1.2, a container's rights and privileges can be controlled in a fine-grained way [18], so the combination of cgroups and namespaces can reach its full potential.
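Pieced together, a Dockerfile of this kind might look like the following sketch. This is not the exact content of the official ownCloud image; the repository URL, key, and paths are placeholders standing in for the real lines:

```Dockerfile
# Base image: the FROM line at the top (here, Debian Stable)
FROM debian:stable

# Each RUN line is executed in sequence when the image is built;
# first wget, then a (placeholder) archive key and ownCloud itself
RUN apt-get update && apt-get install -y wget
RUN wget -qO - https://example.org/owncloud.key | apt-key add - && \
    apt-get update && apt-get install -y owncloud

# Open port 80; without EXPOSE, the container's default
# "deny all" policy blocks communication with the outside world
EXPOSE 80

# Mount point for persistent data on the host or in another container
VOLUME ["/var/www/owncloud/data"]

# Command the container runs at startup
CMD ["apache2ctl", "-D", "FOREGROUND"]
```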
A container thus consists of a base operating system and apps and metadata added by users. The image tells Docker what the container should include and, where appropriate, what command is used for startup. In the simplest case the container starts right away with a shell, although far more complicated processes can occur, such as automated test runs.
Because images are read-only, Docker opens a layer for the containers with help from a union filesystem, such as aufs or Device Mapper. These layers are writable, and the application runs within them. If a change occurs, such as an update, another layer is applied. Rather than the entire image changing at runtime, the changes made at runtime are only integrated when you convert the container back into an image using
docker commit
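In practice, the commit cycle might look like the following sketch (assuming a local Docker installation; the image name myrepo/debian-curl is hypothetical, and <container-id> stands for the ID that docker ps reports):

```shell
# Start an interactive container and change its writable layer
# (inside the container: apt-get install -y curl; exit)
docker run -t -i debian:sid /bin/bash

# Find the ID of the now-stopped container
docker ps -a

# Convert the container's layers back into an image;
# "myrepo/debian-curl" is just an example name
docker commit <container-id> myrepo/debian-curl
```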
There are two kinds of Docker containers: interactively started or daemonized. If the container is to execute a predetermined sequence and then stop, start it interactively with
docker run -t -i debian:sid </command_1;command_2>
A container started with
docker run -d debian:sid /bin/bash
waits in the background for input.
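The difference between the two modes can be sketched as follows (assuming Docker is installed; <container-id> is a placeholder for the ID shown by docker ps):

```shell
# Interactive: runs the given commands, then the container stops
docker run -t -i debian:sid /bin/sh -c "apt-get update && apt-get -y upgrade"

# Daemonized: keeps running in the background until stopped
docker run -d debian:sid /bin/sh -c "while true; do sleep 60; done"

# List running containers and stop the daemonized one by ID
docker ps
docker stop <container-id>
```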
Easier Control with Panamax
Even though Docker's command structure is quite simple and is based largely on Git, it quickly gets unwieldy with many containers. Panamax [19], the first graphical interface for managing Docker containers (still in beta), solves this problem (Figure 6).
To install the browser-based GUI, you first need Vagrant version 1.6.5 or higher and VirtualBox 4.2 or higher (Figures 7 and 8). If your distribution doesn't provide these versions, you'll need to get the Vagrant [20] and VirtualBox [21] packages in Debian format from their websites. Panamax can currently be installed only on Debian-based distributions or on Apple's Mac OS; it's likely only a matter of time before stable versions for other package formats follow.
Once the prerequisites are met, install the software using
$ curl http://download.panamax.io/installer/ubuntu.sh | bash
This installs the minimal CoreOS container operating system and Panamax in VirtualBox (Figure 9). The process runs automatically and finishes by prompting you to enter the URL http://localhost:8888 in your browser, which opens the Panamax interface. There you can search for images and install, run, modify, manage, and save them (Figures 10 and 11). Making the best use of Panamax at this stage requires internalizing the Docker principle; particularly important is distinguishing between an image and a container. With the help of Panamax and its documentation, you can get containers up and running quickly without entering any commands by hand (Figure 12). If an error occurs, you simply delete the image and start over.