Containers are the current buzzword in application development and cloud computing, and have been ever since Docker rose to prominence about a year and a half ago. We've reported on containers previously and predicted that end users will also benefit from this free technology. In this article, I'll explore how that might play out.
Some readers might remember the first appearance of the free VirtualBox hypervisor in 2007, which made Linux distribution testing much easier. Before that, there was VMware Player, but VMware is not free, and updates regularly broke the kernel modules, forcing users to wait for the latest kernel patch to make things work again.
With VirtualBox and help from DKMS [1], such problems are a thing of the past, and ISO images of the newest distribution can be prepared for testing in a matter of minutes. The only limits on the number of simultaneously bootable distributions are main memory and processor speed. Here, container solutions can make headway with end users, because their overhead is far smaller than that of hypervisors. Hundreds of containers can theoretically share the same host kernel: Containers virtualize at the operating system level, whereas hypervisors virtualize at the hardware level.
In this article, therefore, I'll test how easy it now is to get a container to run on a distribution, with the graphical output redirected to the host.
Containers aren't as new as many think; in fact, Linux Containers (LXC) [2] have been around since 2006 but didn't acquire a large user base until the spring of 2013, when Docker [3] appeared on the scene. Outside of Linux, similar technologies already existed, such as FreeBSD Jails [4] or Solaris Zones [5], which originally took their cue from Jails.
The granddaddy of all these is, of course, the well-known Unix chroot [6], even if the application scenarios are different. With chroot, a process can under certain circumstances break out of its environment, which is something that can't happen with containers.
The current hype about Docker and containers in general comes from the fact that increasingly specialized applications are making their way into the business environment, pushing out the more static software stacks of the past.
Easily copied, lightweight, and portable, containers are particularly well suited for developing applications as modules that move from local computers to servers and the cloud. What I'm interested in here, however, are the qualities of the LXC and Docker container solutions for end users.
I will start with LXC and attempt to get a container with a CentOS image running on an installed Ubuntu 14.10 system (Figure 1). Ubuntu serves as the host, because its default kernel and system settings are the most user-friendly.
The lxcbr0 network bridge is also already integrated in Ubuntu, whereas it requires somewhat more configuration in Debian. The goal is to get a functioning container up and running as easily as possible.
An essential ingredient of LXC is the control groups (cgroups) kernel feature, with which processes and groups of processes can be controlled and managed. Ubuntu's current kernel activates the corresponding functionality automatically, so modifying fstab is no longer necessary. The one limitation of LXC is that guests share the host kernel, so only distributions compatible with that kernel will run in a container: LXC allows only Linux guests on a Linux host. On Ubuntu, you are ready to go directly after installing the lxc package. You can test this with the lxc-checkconfig command (Figure 2); you should see enabled at the end of each output line.
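On an Ubuntu host, installing the package and checking the kernel configuration might look like this (the exact output of the check depends on your kernel):

$ sudo apt-get install lxc
$ lxc-checkconfig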
Before you can deploy a container, you need to decide which operating system to run in it. LXC provides some templates to make this easier; you can find them with the ls /usr/share/lxc/templates command. A few of the templates listed here already use systemd, which currently causes problems in an LXC container. These should be fixed soon, however. If you want to test operating systems such as openSUSE, Arch Linux, or Fedora and problems arise, you can switch the container to sysvinit as described in the Debian wiki [7]. If you have no particular preference for the guest distribution, Debian is a good choice, because the template installs the latest stable release, currently Debian 7 "Wheezy," which still uses sysvinit.
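For reference, the template listing on an Ubuntu 14.10 host might look something like this (abridged; the exact selection depends on the installed LXC version):

$ ls /usr/share/lxc/templates
lxc-busybox  lxc-debian  lxc-download  lxc-fedora  lxc-opensuse  lxc-ubuntu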
To create a container with a Debian-based system, use
$ sudo lxc-create -n wheezy -t debian
where you can substitute another name for wheezy. After about 10 minutes, the template will have installed a Debian system via debootstrap [8].
The template always installs the stable release of a distribution, which you can change by specifying the release. You can build a Debian from an unstable repository with
$ sudo lxc-create -n Unstable -t debian --release sid
You can create a development version of Ubuntu 15.04 with
$ sudo lxc-create -n Vivid -t ubuntu --release vivid
The templates are comprehensive and easily adapted to your purposes (Figure 3).
LXC puts new containers in /var/lib/lxc/<container_name>, with the packages downloaded during preparation cached in /var/cache/lxc, so further containers based on the same template can be created within seconds.
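A quick look inside a new container's directory shows a minimal layout; with the Debian template, you would typically find the container's configuration file and its root filesystem (the exact contents may vary between LXC versions):

$ sudo ls /var/lib/lxc/wheezy
config  rootfs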
Start a container using
$ sudo lxc-start -n <container_name>
You will arrive directly at the container terminal; adding -d instead starts the container as a daemon in the background. The required username and password are indicated at the end of container creation (Figure 4); you should at least change the password. To get the status of all your containers, run:
$ sudo lxc-ls --fancy
You'll see something like what is shown in Listing 1. You can now start the container using
$ sudo lxc-start -F -n wheezy
Listing 1
List Your Containers
$ sudo lxc-ls --fancy
[sudo] password for ubuntuuser:
NAME    STATE    IPV4  IPV6  GROUPS  AUTOSTART
----------------------------------------------
wheezy  STOPPED  -     -     -       NO
The -F parameter will start Debian Wheezy in the foreground, right there in the terminal you are using. (See the "Beating the Bug" box for troubleshooting tips.)
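If you prefer to run the container in the background instead (the -d option mentioned earlier), a session might look like the following sketch; lxc-attach is one way to open a root shell inside the running container:

$ sudo lxc-start -n wheezy -d
$ sudo lxc-attach -n wheezy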
Beating the Bug
The version of LXC I used seemed to have a bug in it, which, at press time, was still unresolved. When I tried to start the container using
$ sudo lxc-start -F -n wheezy
the program coughed up a bunch of errors. I solved the problem by including
lxc.aa_allow_incomplete = 1
in the file /var/lib/lxc/wheezy/config .
You now have a base system in the container that you can build on. Most end users will probably want to run some graphical applications. Good candidates are programs you don't trust but might need from time to time, such as Skype or Google Hangouts; even browsers could fall into this category.
There are several ways to get graphical output from an application running in a container. To fully understand how this works, you can follow the instructions of LXC package maintainer Stéphane Graber [9] and then complete the configuration. Other possibilities are Xpra [10] or Virtenc [11]. In this example, I'll use X2go [12], because it seems to be the easiest solution.
After logging into your container, you first add a new user with
# adduser x2gouser
then you can install a desktop environment of your choice. If you want KDE, for example, you could do:
# apt-get install kde-standard
Of course, you could go for something lighter, such as LXDE for testing instead:
# apt-get install lxde
Add the line deb http://packages.x2go.org/debian wheezy main to the /etc/apt/sources.list.d/debian.list file and save it.
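If no editor is installed in the container yet, one way to append the line from the container's root shell is:

# echo "deb http://packages.x2go.org/debian wheezy main" >> /etc/apt/sources.list.d/debian.list

Next, add the repository key with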
# apt-key adv --recv-keys --keyserver keys.gnupg.net E1F958385BFE2B6E
After running apt-get update, install the X2go server with
# apt-get install x2goserver
This step will also create an x2gouser user and group and start the server.
Next, install the X2go client on the Ubuntu host:
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:x2go/stable
$ sudo apt-get update
$ sudo apt-get install x2goclient
The client is available from the project's stable PPA, which the commands above add.
You then start the X2go client on the host and accept the session settings (Figure 5). You can pick a name for the session and enter the container's IP address (which you can get with lxc-ls --fancy) in the Host field. Note that each container has its own IP address. Use lxc-info for additional details (Figure 6).
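For example, querying the running container might look something like this (the address shown here is only illustrative; yours is assigned by the lxcbr0 bridge, and the fields displayed depend on the LXC version):

$ sudo lxc-info -n wheezy
Name:           wheezy
State:          RUNNING
IP:             10.0.3.155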
The Login is the container's username (root if you haven't added any other users) and the Session type is the desktop environment. On the second tab, move the Connection type slider all the way to the right to LAN . On the Display tab, set the display size. You can now enter the graphical desktop (Figure 7). All changes you now make apply only to the container. Stopping the container requires that you run
$ sudo lxc-stop -n <container_name>
Replacing stop with destroy deletes the container. LXC has several other uses that are easy to find on the Internet. Ubuntu 14.10 includes complete documentation for LXC [13].
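For example, cleaning up the test container completely might look like this (note that lxc-destroy permanently removes the container's files under /var/lib/lxc):

$ sudo lxc-stop -n wheezy
$ sudo lxc-destroy -n wheezy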
The difference between LXC containers and Docker containers is that Docker provides far more functionality, ranging from interfaces for container orchestration apps in the cloud to tools for managing entire clusters. These capabilities, however, matter little to the end user. If you take into account that Docker was based on LXC until version 0.9, when the project switched to its in-house Libcontainer [14], it becomes clear that LXC is the better fit for end users who want to export the X display of GUI applications (see Figure 8).
Solutions for exporting the display also exist for Docker, but they require more effort without necessarily adding value. The existing approaches rely on SSH, Xpra and Xephyr, or X2go. As with LXC, the greatest effort is in the setup; after that, containers start in a matter of seconds. Several guides are available if you want to test these solutions yourself.
The rapid development of Docker over the past 18 months has put LXC containers in the spotlight. Before that, LXC was a little-noticed tool; it received a noticeable development push once Docker made containers easily accessible to end users. Users who only occasionally start VirtualBox or KVM to try out a distribution are likely to shy away from LXC. If you manage several images simultaneously, however, you will notice that LXC handles hardware resources far more economically than a full virtual machine. You can also use LXC as a sandbox for untrusted applications running independently of the host system.
Containers are often deemed insecure, and hypervisors still have an advantage in this respect. Many specialists are currently working on securing containers; Daniel J. Walsh of Red Hat has discussed this topic at length.
Those working in security-critical environments can run their LXC containers inside a virtual machine, although this somewhat increases the complexity of the network structure. However, with kernel 3.12 and the use of kernel namespaces, LXC 1.0 containers can be started by users other than root, thereby preventing changes to the host from within the container.
It's difficult to say whether containers will move further into the end-user realm; that would require the setup to become a bit easier. Then again, VirtualBox and KVM have their own setup hurdles, and KVM users should have no problem with LXC. Another new player in the field is systemd-nspawn [15], which will presumably remain an enhanced replacement for chroot, since virtualizing operating systems with it takes more manual effort.
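As a quick comparison, opening a shell in an existing container file tree with systemd-nspawn might look something like the following sketch (the path assumes the Debian container created earlier):

$ sudo systemd-nspawn -D /var/lib/lxc/wheezy/rootfs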
Infos