Contain your enthusiasm – Part One: a history of operating system containers


Since its release in March 2013, Docker has been blowing up in popularity. It's currently the 60th most popular project on GitHub, and has previously reached as high as number 16.

Docker is a project created by dotCloud (a company that has since renamed itself Docker) that makes containers easy to manage.

Containers are kind of like virtual machines, but kind of not. The difference is that containers share the same host kernel, while virtual machines are completely isolated environments that run their own kernel.

This means containers are limited by the features of the host's kernel. An obvious disadvantage of this is the inability to run, say, Microsoft Windows in a container on a Linux server. On the other hand, a Linux server that can only support 50 virtual machines could support 500 Linux containers, as all containers would share the same kernel. It's a trade-off.

Take the following analogy as an example: in an office, each person's desk is usually separated by cubicle walls. They're pretty thin and rarely ever reach the ceiling. Cubicles are like containers: each container is segregated, but only lightly, and each shares common resources in the office.

Virtual machines, on the other hand, are like the floors of a building. While each floor still shares space in the same building, they're much more strongly separated. Different companies can work on different floors, noise rarely penetrates through them and, in some cases, fires can even be isolated to a single floor.

This series will focus on containers, their history, how they work, and finally, Docker itself. Intrigued? Read on!


The Unix filesystem was developed in the 1970s, and, since then, its tree-like hierarchy has become a staple of Unix and Unix-based operating systems:

[Figure: the tree-like Unix filesystem hierarchy, with / at the root]


Every Unix filesystem has a /, or root. This is the base of the filesystem. Second-level directories, such as usr, bin, and home, are connected to /. And those directories each have their own sub-directories, and so on and so on.

But what if you wanted a second root? And what if you wanted that second root to be on the same filesystem as the first one?

[Figure: a second root filesystem nested inside the first]

The feature of creating a new root filesystem inside an existing filesystem is known as chroot, or "change root". The idea behind it was to segregate part of the current filesystem off as its own filesystem, so that any activity in the segregated portion would not affect the rest of the system. chroot first appeared in Version 7 Unix back in 1979. It was later added to BSD Unix by Bill Joy in 1982 to assist with development.

Using chroot

To see how chroot works, let's take a look at the chroot command's source code from 4.4BSD-Lite, released in 1994 (one of the final original BSD releases):

    •    Line 76 shows the command stepping into the target directory and then calling the chroot syscall.
    •    Line 80 executes a command, but only if the user specified one.
    •    If the user didn't specify one, a normal shell is executed instead, as shown on Line 86.

Let's step back to Line 76. As mentioned, the chroot command is calling the chroot syscall. This is a common pattern in Unix system architecture: most command-line tools are just thin front-ends to underlying syscalls. This is very similar to today's use of web front-ends for more complex commands.

The chroot syscall is defined in the vfs_syscalls.c file on Line 520.

Line 536 is where all of the magic happens (tell your friends): the file descriptor for the current directory is set to become the new root directory.

I chose to use the older 4.4BSD-Lite version of chroot for its simplicity. FreeBSD's current chroot command and syscall are a little more complex. Here is the GNU coreutils' current implementation of the chroot command, which is used on Linux. And here's the corresponding Linux kernel's chroot syscall.

I've listed below a few examples of chroot in action. You can follow these exercises on any modern Linux distribution; Ubuntu 12.04 was used for this writing:

root@jttest:/home/ubuntu# mkdir test
root@jttest:/home/ubuntu# chroot test
chroot: failed to run command `/bin/bash': No such file or directory

For the first example, chroot failed because it wasn't able to find the bash shell. This highlights an important concept of creating a new root filesystem: the new filesystem has no access to anything from the original filesystem, including any commands. This means that any command you want to use in the chroot'd filesystem must be copied into the new filesystem.

So, let'€™s add bash and try again:

root@jttest:/home/ubuntu# mkdir test/bin
root@jttest:/home/ubuntu# cp /bin/bash test/bin
root@jttest:/home/ubuntu# chroot test
chroot: failed to run command `/bin/bash': No such file or directory

Still failing… this time it's due to Linux's use of dynamic libraries. To account for dynamic libraries, all libraries used by a command must also be copied into the chroot. To see which libraries a command requires, use the ldd command:

root@jttest:/home/ubuntu# ldd /bin/bash
    linux-vdso.so.1 =>  (0x00007fff4e5ff000)
    libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007fd5a43bd000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fd5a41b9000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd5a3df9000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fd5a45ea000)
root@jttest:/home/ubuntu# mkdir test/lib test/lib64
root@jttest:/home/ubuntu# cp /lib/x86_64-linux-gnu/libtinfo.so.5 test/lib/
root@jttest:/home/ubuntu# cp /lib/x86_64-linux-gnu/libdl.so.2 test/lib/
root@jttest:/home/ubuntu# cp /lib64/ld-linux-x86-64.so.2 test/lib64/
root@jttest:/home/ubuntu# cp /lib/x86_64-linux-gnu/libc.so.6 test/lib
root@jttest:/home/ubuntu# chroot test

Hey, it worked!

bash-4.2# ls
bash: ls: command not found

Of course, the ls command failed because it doesn't exist in the new filesystem. To enable it, copy the command into the chroot and use ldd to resolve any library dependencies.
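Reading ldd's output by eye works for small cases, but the library paths can also be pulled out programmatically. A minimal sketch (the awk parsing here is my own, not part of any chroot tooling; it prints only the on-disk paths, skipping the pathless vdso entry):

```shell
#!/bin/sh
# Print just the filesystem paths from ldd's output, one per line.
# Lines look like "libc.so.6 => /lib/.../libc.so.6 (0x...)" or, for the
# dynamic loader, "/lib64/ld-linux-x86-64.so.2 (0x...)".
ldd /bin/sh | awk '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /^\//) { print $i; break }
}'
```

These are exactly the files that would need to be copied into the chroot alongside the binary.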

Let's copy over the pwd command:

bash-4.2# exit
root@jttest:/home/ubuntu# cp /bin/pwd test/bin
root@jttest:/home/ubuntu# ldd /bin/pwd
    linux-vdso.so.1 =>  (0x00007fffe4ce7000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0fb369f000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f0fb3a64000)

Coincidentally, all of pwd's libraries are a subset of bash's, so they were already copied over and no further resolution is needed. This final example will solidify the effect of chroot:

root@jttest:/home/ubuntu# pwd
/home/ubuntu
root@jttest:/home/ubuntu# chroot test
bash-4.2# pwd
/
And there you have it: what was once /home/ubuntu is now /.
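The manual steps above (make the directory, copy the binary in, then copy each library ldd reports) can be scripted. A rough sketch, assuming a GNU/Linux system with ldd available; the names (newroot) and the choice of /bin/sh are illustrative:

```shell
#!/bin/sh
# Sketch: build a throwaway chroot tree containing one binary plus its
# shared libraries, without needing root (root is only needed to chroot in).
set -e
newroot=$(mktemp -d)
bin=/bin/sh

mkdir -p "$newroot/bin"
cp "$bin" "$newroot/bin/"

# For each library path in ldd's output, recreate its directory under the
# new root and copy the library in.
ldd "$bin" | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) { print $i; break } }' |
while read -r lib; do
    mkdir -p "$newroot$(dirname "$lib")"
    cp "$lib" "$newroot$lib"
done

echo "minimal chroot tree built in $newroot"
```

As root, `chroot "$newroot" /bin/sh` should then drop you into a shell whose / is that temporary directory.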


So is that all there is to containers? Not by a long shot. These were just small examples highlighting simple chroot environments. But chroot can be thought of as the foundation of containers. No matter how complex container technology gets, it's still doing the same fundamental thing chroot set out to do back in 1979, on a filesystem design from 1970: segregating an existing computing resource from the original resource.

None of these examples accounted for devices, networking, process management, user management, etc. In addition, creating a chroot environment, even these small ones, was manual and tedious. And finally, a chroot is not guaranteed containment; it's trivial to break out of one. This is definitely bad if you need your chroot to be secure.

In Part 2, we'll look at container projects that have accounted for these specifics. These projects are all predecessors to Docker.