Adventures with Docker #3: Creating custom images (outdated)


Please note: this blog post was last updated in 2015 and is now outdated. We will prepare an update soon.


In the previous part of this series, I introduced the general usage of containers and ready-to-go images. But what do you do if an existing image doesn't fit your needs?

Do we really have to re-apply all configuration changes and reinstall packages manually every time we use an "almost ready" image? In the previous post I showed the docker commit command as a way to create new images with saved state. Unfortunately, in the long run this approach is inconvenient: for example, how do you track all the changes and the history of your images?

In this post, I'll show how to build images (almost) from scratch, how to share those images within a team or a larger community of users, and how to automate the image build process through integration with GitHub / BitBucket.


As with Vagrant, we need some way to configure an image automatically. For Vagrant, the most popular tools are Puppet and Chef. Docker takes a slightly different approach: a manifest file called a Dockerfile. Docker uses this manifest during the build process to configure our image. Let's see what such a manifest file looks like:

FROM ubuntu:trusty
MAINTAINER Jarek Sobiecki 

# Install packages (this is generic version with all required extensions).
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
  apt-get -y install supervisor git apache2 libapache2-mod-php5 mysql-server php5-mysql \
  pwgen php-apc php5-mcrypt php5-curl php5-xhprof php5-xdebug php5-memcache php5-gd curl unzip

# Configure open ssh
# See: http://docs.docker.com/examples/running_ssh_service/ for more details
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
RUN echo 'export PATH="$HOME/.composer/vendor/bin:$PATH"' >> /etc/profile

# Add image configuration and scripts
ADD scripts/start-apache2.sh /start-apache2.sh
ADD scripts/start-selenium.sh /start-selenium.sh
ADD scripts/run.sh /run.sh
RUN chmod 755 /*.sh
ADD configs/php/php.ini /etc/php5/apache2/conf.d/40_custom.ini
ADD configs/php/php.ini /etc/php5/cli/conf.d/40_custom.ini
ADD configs/supervisor/supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
ADD configs/supervisor/supervisord-sshd.conf /etc/supervisor/conf.d/supervisord-sshd.conf
ADD configs/supervisor/supervisord-selenium.conf /etc/supervisor/conf.d/supervisord-selenium.conf

# config to enable .htaccess
ADD configs/apache/apache_default /etc/apache2/sites-available/000-default.conf
ADD configs/apache/apache_default_ssl /etc/apache2/sites-available/000-default-ssl.conf
RUN a2enmod rewrite
RUN a2enmod ssl

# Install composer and newest stable drush
RUN curl -sS https://getcomposer.org/installer | php -- --filename=composer --install-dir=/usr/local/bin
RUN git clone https://github.com/drush-ops/drush.git /usr/local/src/drush && \
  cd /usr/local/src/drush && \
  git checkout 7.x && \
  ln -s /usr/local/src/drush/drush /usr/bin/drush && \
  composer install

# Install tools required for behat testings
# Firefox
# Selenium
# Xvfb and x11vnc
RUN apt-get -y install xvfb x11vnc firefox openjdk-7-jre openbox
RUN mkdir /usr/local/lib/selenium && curl http://selenium-release.storage.googleapis.com/2.46/selenium-server-standalone-2.46.0.jar -o /usr/local/lib/selenium/selenium.jar

# Install helper tools
RUN apt-get -y install vim

# Environment variables to configure PHP

EXPOSE 80 22

# Add volumes for Apache logs
VOLUME ["/var/log/apache2"]

CMD ["/run.sh"]


This is the actual manifest used in one of our open source projects, FoodCoop System. Let's go step by step and explain all the commands used in this file.


FROM ubuntu:trusty

We begin our manifest with this command. It tells Docker that our image is based on another image, "ubuntu:trusty". Think of it as a multi-layered cake: we take a ready-made cake ("ubuntu:trusty"), and each step we execute adds another layer to our lovely cake.

MAINTAINER Jarek Sobiecki 

Boring: maintainer info.

ENV DEBIAN_FRONTEND noninteractive

The ENV command sets environment variables in the container. Subsequent commands in the Dockerfile (and scripts) can depend on those variables. In our case we set DEBIAN_FRONTEND so that the apt-get commands executed later will not launch an interactive configuration tool (based on the ncurses library). Thanks to that, the build process is fully automatic and doesn't require any user input. If we skipped this variable, the installation process could get stuck waiting for the user.

RUN apt-get update && \
  apt-get -y install supervisor git apache2 libapache2-mod-php5  \
  mysql-server php5-mysql pwgen php-apc php5-mcrypt php5-curl \
  php5-xhprof php5-xdebug php5-memcache php5-gd curl unzip

The RUN command creates a new container (from the current image with all previous steps applied), executes the given command (docker run), and then commits the changes (docker commit). So this single command automates a few manual tasks. Its argument is a list of shell commands.
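The manual tasks that a single RUN step automates can be sketched roughly like this (the container and image names here are illustrative, not taken from the project):

```shell
# What a RUN step does for you behind the scenes, done by hand:
docker run --name tmp-build my-image /bin/sh -c "apt-get update && apt-get -y install vim"
docker commit tmp-build my-image:next-step
docker rm tmp-build
```

Every RUN line in the Dockerfile produces one such committed layer, which is also why Docker can reuse cached layers on subsequent builds.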

ADD scripts/start-apache2.sh /start-apache2.sh

This command can be a bit confusing. It copies a file from a source location (on our workstation) to a location inside the container. We usually use it to copy shell scripts or configuration files.

EXPOSE 80 22

This command declares that, in the case of container linking (with the --link option), the current container exposes ports 80 and 22 to other containers. To access those ports from the Docker host (our workstation), we need the -p or -P option of docker run. The latter publishes all ports exposed by the container (annotated with the EXPOSE command). Note that this is not a 1-to-1 mapping: Docker picks random host ports, so for example port 80 in the container might be published as 32769 on the host.
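A quick sketch of both publishing modes (the image and container names are illustrative):

```shell
# Publish all EXPOSEd ports to random host ports:
docker run -d -P --name web my-image

# Check which host port was assigned to container port 80:
docker port web 80

# Or map ports explicitly with -p host:container:
docker run -d -p 8080:80 -p 2222:22 --name web2 my-image
```

With -P, docker port is the easiest way to discover which random host port landed where.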

VOLUME  ["/var/log/apache2" ]

This command declares /var/log/apache2 as a volume. That means changes inside that directory are not tracked, so, for example, they will not be part of a commit (docker commit). Interestingly, volumes can be shared between containers.
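Volume sharing between containers can be done with --volumes-from; a minimal sketch, with illustrative names:

```shell
# Start the main container:
docker run -d --name web my-image

# Start a second container that mounts the same /var/log/apache2 volume,
# e.g. to inspect or archive the Apache logs:
docker run --rm --volumes-from web ubuntu:trusty ls /var/log/apache2
```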

CMD ["/run.sh"]

The CMD directive defines the default command to be executed on container start, /run.sh in our case. This is quite helpful when you don't want to force users to pass a program name as an argument to docker run.
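In practice that means users can start the container without naming a command, but can still override the default when needed (image name is illustrative):

```shell
# Runs the default CMD, i.e. /run.sh:
docker run -d my-image

# Overrides CMD and starts an interactive shell instead:
docker run -it my-image /bin/bash
```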

Those are, in my opinion, all the essential directives needed to write a Dockerfile.

For the full list of commands available in a Dockerfile, check the reference documentation here.

Unfortunately, there is some additional work to do: it's not enough to install all the packages and copy the configuration to the right places in the container.

A Docker image is not a full operating system; it's missing, for example, the subsystem responsible for starting services (such as SystemD or Upstart). This follows from the fact that a container is just a bunch of isolated processes. We have to make sure that our processes are started when the container starts and, in case of failure, restarted.

A quite popular tool that can help with this task, and is frequently used in containers, is supervisord. It's a daemon that checks whether certain services (like sshd or apache2) are active and, if a process dies, tries to restart it. Detailed information on how to use supervisord can be found here. Docker containers are quite flexible, so you can use a different tool if you prefer, but for our use case supervisord is enough.
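A minimal sketch of what one of the supervisord config files referenced in the Dockerfile (supervisord-apache2.conf) might look like; the exact contents below are an assumption, not copied from the project:

```ini
; Keep Apache running in the foreground under supervisord's control
[program:apache2]
command=/start-apache2.sh
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/apache2.log
redirect_stderr=true
```

The key point is autorestart=true: if the wrapped process dies, supervisord re-executes it, which is exactly the role an init system would normally play.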

In our image, I used supervisord plus a few scripts, each responsible for a particular service. For details, please check the image repository.

When everything is ready, execute the docker build command to build the image.

$ docker build -t [image name]:[tag] [directory with Dockerfile]

Sending build context to Docker daemon 291.8 kB
Sending build context to Docker daemon 
Step 0 : FROM ubuntu:trusty
 ---> b39b81afc8ca
Step 1 : MAINTAINER Jarek Sobiecki 
 ---> Using cache
 ---> e2d7f88c6a9b
Step 2 : ENV DEBIAN_FRONTEND noninteractive
 ---> Using cache
 ---> b10de0795c2c
Step 3 : RUN apt-get update &&   apt-get -y install supervisor git apache2 libapache2-mod-php5 mysql-server php5-mysql pwgen php-apc php5-mcrypt php5-curl php5-xhprof php5-xdebug php5-memcache php5-gd curl unzip
 ---> Using cache
 ---> 7d58d087cffa
Step 4 : RUN apt-get update && apt-get install -y openssh-server
 ---> Using cache
 ---> 95176c32e8c7
Step 5 : RUN mkdir /var/run/sshd
 ---> Using cache
 ---> 00bad0caa3d8
Step 6 : RUN echo 'root:root' | chpasswd
 ---> Using cache
 ---> 79043a4d6391
Step 7 : RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
 ---> Using cache
 ---> 6d5cf5b8e652
Step 8 : RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
 ---> Using cache
 ---> 5cc4ae3d98ea
Step 9 : ENV NOTVISIBLE "in users profile"
 ---> Using cache
 ---> 1055de93a95c
Step 10 : RUN echo "export VISIBLE=now" >> /etc/profile
 ---> Using cache

After this command succeeds, you can check that the new image is available with docker images. We can use it like any other image downloaded from a Docker repository.
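For example, the whole build-and-run cycle might look like this (image and tag names are illustrative):

```shell
# Build from the directory containing the Dockerfile:
docker build -t my-image:1.0 .

# Verify the image is listed locally:
docker images my-image

# Start a container from the freshly built image:
docker run -d -P my-image:1.0
```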

Image publication

We've created our image, but how do we share it with a larger group of people? One way is to publish it in a Docker repository. To do so, we need to sign up on Docker Hub. After that, we can create a public (unlimited) or private (only one for free) repository for the image. It's quite similar to services like GitHub or BitBucket.

Let's add a new repository. For example, assume we set up an account named "test_user" and a new repository named "test_repo". To share our image, first build it with the proper repository and name:

$ docker build -t test_user/test_repo:1.0 [path to directory with Dockerfile]

Thanks to that command, the newly created image will be associated with the "test_user" account and the "test_repo" repository; 1.0 is the tag name.

After that, we push our image to the server:

docker push test_user/test_repo:1.0

The tool will ask for access credentials. After that, we are ready to go! Our image is accessible like any other, and other people can use it simply by pulling it from the repository.


Integration with GitHub/Bitbucket

A very nice feature is integration with a GitHub repository containing all the files related to the image (i.e. the Dockerfile and shell scripts). To create an automatically built image, we need to link our account with GitHub or BitBucket.

After connecting our account with a git repository, we can add a new repository from one of those two sources.

Every time we update our source git repository, Docker Hub will trigger a build process that updates our image with the latest tag.

This allows efficient sharing of the environment between team members. If you introduce changes to the image, all other team members need to do is execute docker pull on their workstations and recreate their containers.
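The team-member side of that workflow, sketched with the example account and repository names from above (the container name is illustrative):

```shell
# Fetch the freshly rebuilt image from Docker Hub:
docker pull test_user/test_repo:latest

# Throw away the old container and start a new one from the updated image:
docker stop dev-env && docker rm dev-env
docker run -d --name dev-env -P test_user/test_repo:latest
```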

This is not an ideal solution; in the next entry I'll introduce docker-compose, a tool that automates this process.


Written by jsobiecki