10 May 2017, Martin Ahrer

This is part 1 of a series of posts covering Docker in a Continuous Delivery environment.

Today I’m showing how simple it is to set up a continuous delivery build pipeline infrastructure using Docker. In an upcoming post we will look at Jenkins pipeline as code, creating Docker images and running integration tests against Docker containers. The series will close with an article explaining how we can move all containers built throughout this series of posts into a Docker swarm environment.

We will be using the following components, tools and techniques:

  • Jenkins master (Jenkins 2.x with its pipeline support)

  • Jenkins agent

  • Sonatype Nexus 3 (for a Docker registry)

  • Docker in Docker (specifically DooD, i.e. mounting the host’s Docker socket)

  • docker-compose

  • Docker volumes for managing persistent data (pipeline jobs)

  • Docker networking

Finally we will be using this continuous delivery system to run a pipeline for building and testing a Docker image for a simple Spring Boot based web application.

Before we dive into the details let me explain why we use docker-compose. As we are building a set of Docker containers we will likely end up with a rather complex configuration. Running those containers and keeping them up to date would require quite a few lines of shell scripting code.

docker-compose tries to eliminate shell scripting and provides a YML based format for describing container configuration and dependencies. It further comes with a CLI for fully controlling the life-cycle of images, containers, volumes, etc. docker-compose is well suited for managing multiple environments such as development, testing, etc.
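As a minimal sketch of the format (the service and image names here are only illustrative, not part of the infrastructure we build below), a compose descriptor groups named services together with their image, port bindings and volumes:

```yml
version: '2.1'
services:
  web:
    image: nginx:1.11
    ports:
      - "8080:80"        # host port 8080 -> container port 80
    volumes:
      - data:/usr/share/nginx/html
volumes:
  data: {}
```

A single `docker-compose up -d` then creates the volume and starts the container, in dependency order if several services are declared.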

To run all the code yourself just checkout the sample project from GitHub and follow the instructions here. All of the Docker related code for building the infrastructure is located at src/infrastructure/docker.

Add Jenkins master

As build master we are adding Jenkins. Specifically we will build a custom image based off a pre-built image available from Docker Hub. With its support for pipeline as code, as of Jenkins 2.0, it is an ideal candidate for running a fully scripted build pipeline requiring no manual pipeline configuration.

Jenkins Dockerfile
FROM softwarecraftsmen/jenkins-master:2.46.1
COPY *.groovy /usr/share/jenkins/ref/init.groovy.d/

The image is based on softwarecraftsmen/jenkins-master:2.46.1 which is available from Docker Hub. We add a few Groovy script files which will be executed during Jenkins startup. These scripts are responsible for configuring some environment variables and creating credentials objects which will be used for accessing a Git repository or pushing to a Docker registry. For details just have a look at the source code, which uses Jenkins APIs and is pretty straightforward.

Next we are adding the container configuration using docker-compose's YML format. I’m adding this as file jenkins-master.yml so we have individual components in their own configuration files.


jenkins-master.yml
version: '2.1'
services:
  master:
    image: softwarecraftsmen/jenkins-master:${JENKINS_TAG}
    build: ./jenkins
    restart: always
    environment:
      - JAVA_OPTS=-Djava.awt.headless=true
      - SCM_USER
    ports:
      - "${JENKINS_HTTP_PORT}:8080"
    volumes:
      - home:/var/jenkins_home/

volumes:
  home:
    driver: local

The above descriptor configures a service (this is a container) named master. It is built from a Dockerfile located in the sub-directory jenkins. Instead of letting compose name the image we assign a name so we can later even push it to a registry.

We have also expressed the wish that every time the Docker daemon restarts (when the system reboots) the container shall be started too. For making the Jenkins web UI accessible we have bound the container port 8080 to the host’s port expressed by an environment variable JENKINS_HTTP_PORT.

So we see that we can embed placeholders for customizing a container when we create/start it. Any placeholder is resolved against the environment when running one of the docker-compose commands. We can even put all of our configuration related environment variables into a .env file which will be read by docker-compose. The details of how docker-compose handles externalized configuration are covered in its documentation.
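To see what this resolution amounts to, here is a plain-shell sketch (the `${…}` placeholder syntax is the same one docker-compose uses; the port value is the example used throughout this post):

```shell
# The variable would come from the shell environment or from a .env file.
JENKINS_HTTP_PORT=18080

# A compose ports entry like "${JENKINS_HTTP_PORT}:8080" resolves to:
echo "${JENKINS_HTTP_PORT}:8080"
# → 18080:8080
```

If a referenced variable is unset, docker-compose substitutes an empty string and prints a warning, so keeping all defaults in .env avoids surprises.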

Environment variables that should be available to a container at runtime are declared in the environment section.

Finally we attach a named volume to the master container’s directory /var/jenkins_home/. Keeping precious data in a named volume ensures it is not lost when the container is deleted or recreated.

So, let’s build the Jenkins master image first.

export JENKINS_TAG=2.46.1 (1)
docker-compose -f jenkins-master.yml build (2)

Building master
Step 1/2 : FROM softwarecraftsmen/jenkins-master:2.46.1
 ---> 30b368a2faff
Step 2/2 : COPY *.groovy /usr/share/jenkins/ref/init.groovy.d/
 ---> 52eaf261dc7a
Removing intermediate container 03a85127f38d
Successfully built 52eaf261dc7a
  1. In case we later want to push the built image, we need a version tag.

  2. We have to specify the filename unless we named the descriptor docker-compose.yml

Having to add that pesky filename again and again with every docker-compose invocation is not very convenient. So let’s create a .env file and add it there along with some more helpful variables.

.env
COMPOSE_FILE=jenkins-master.yml
JENKINS_TAG=2.46.1

This time we can just run docker-compose build. I have described that in more detail in an earlier blog post.

Before we can finally run the Jenkins master, we have to configure the host port to be bound. We also want an initial user account so we can log on to the web UI. We add these variables to the .env file (example values):

JENKINS_HTTP_PORT=18080
JENKINS_ADMIN_USERNAME=admin (1)
JENKINS_ADMIN_PASSWORD=admin (1)
  1. These will create the admin credentials

To prevent Jenkins from complaining about a bad base URL we also configure the URL at which it is accessible.

Jenkins URL configuration
export JENKINS_URL=http://$(ipconfig getifaddr en0):18080
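ipconfig getifaddr is macOS-specific. On a Linux host an equivalent (a sketch, assuming hostname -I is available, as it is on most distributions) could be:

```shell
# Determine the host's primary IP address and build the Jenkins base URL from it.
export JENKINS_URL=http://$(hostname -I | awk '{print $1}'):18080
echo "$JENKINS_URL"
```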

We are all set now for running the Jenkins master container.

Running the master container
docker-compose up -d (1)
  1. The -d (detach) flag will run the containers in the background.

We can now access the Jenkins web UI at ${JENKINS_URL} and add a new pipeline job for our demo project.

In theory we could actually build the project now on master. But building jobs on master is not a good idea. It is better to delegate build jobs to agents. In the next section we will be building such an agent.

Add Jenkins agent

Building a wide range of projects usually requires us to install tools, libraries and their dependencies to the build execution environment. In order to avoid frequent updates and changes (which may interfere with builds or even cause master downtime) we can choose to run all steps of a pipeline within a Docker container.

So in essence it will be sufficient to maintain only a single agent setup; each project can then provide its custom image with all the tooling set up for its build step execution.

Let’s build such an agent as a Docker container. We will be using a pre-built Docker image built from https://github.com/SoftwareCraftsman/docker-jenkins-swarm-agent. It is using the Jenkins swarm plugin/client to have Jenkins master auto-discover agents.

The agent is configured in jenkins-agent.yml which we will also add to .env.

jenkins-agent.yml
version: '2.1'
services:
  master:
    ports:
      - "${JENKINS_AGENT_PORT}:50000" (1)
  agent:
    image: softwarecraftsmen/jenkins-swarm-agent:0.3
    restart: always
    hostname: agent
    environment:
      - COMMAND_OPTIONS=-master http://master:8080 -username ${JENKINS_ADMIN_USERNAME} -password ${JENKINS_ADMIN_PASSWORD} -labels 'docker' -executors ${JENKINS_AGENT_EXECUTORS} (2)
    depends_on:
      - master
    privileged: true
    volumes: (3)
      - /var/run/docker.sock:/var/run/docker.sock (4)
      - ${JENKINS_AGENT_WORKSPACE}:/workspace (5)
  1. Add port binding for the master ←→ agent communication

  2. CLI options for the Jenkins swarm-client. Note the labels argument: it assigns the docker label to the agent so that pipeline steps requiring Docker as a resource can be scheduled on it.

  3. The agent needs a filesystem for pipeline build artifacts. This variable contains the full path to a file system location mounted as Docker volume.

  4. The Docker CLI inside the agent container requires this socket for communicating with the host's Docker daemon.

  5. This is a host directory mounted into the container, it provides the filesystem for the agent workspace.

COMPOSE_FILE=jenkins-agent.yml:jenkins-master.yml (1)
  1. We added jenkins-agent.yml.

The new container adds a few more items to the environment (example values):

JENKINS_AGENT_PORT=50000 (1)
JENKINS_AGENT_EXECUTORS=2 (2)
JENKINS_AGENT_WORKSPACE=/Volumes/Disk/Development/spaces/software-craftsmen/continuousdelivery/jenkins-agent (3)
  1. This is the port master and agent use for their communication.

  2. The number of job executors the agent provides.

  3. The filesystem used by the agent for build artifacts.

Updating the containers
docker-compose up -d

Updating the containers has recreated the master container (as its configuration changed) and has created a new agent container. We can get the status of the running containers with docker-compose ps.

If needed we could scale the agent to have more agent instances up and running with a simple command such as docker-compose scale agent=2.

Output of docker-compose ps after scaling the agent
docker-compose ps
         Name                        Command               State    Ports
infrastructure_agent_1    docker-entrypoint.sh /run.sh     Up
infrastructure_agent_2    docker-entrypoint.sh /run.sh     Up
infrastructure_master_1   /bin/tini -- /usr/local/bi ...   Up       0.0.0.0:50000->50000/tcp, 0.0.0.0:18080->8080/tcp

However scaling agents on a single Docker host doesn’t really make sense. It would be more useful if we had a cluster of Docker nodes managed by Docker swarm.

At this point we should already be able to run our pipeline, build a Docker image for our demo application and run integration tests. So let’s open the Jenkins UI at ${JENKINS_URL} and add a build pipeline. As job type we select Pipeline and we use the Pipeline script from SCM option which will check out a pipeline DSL based build script. The Git repository is https://github.com/SoftwareCraftsmen/continuousdelivery.git and the script path is jenkins/Jenkinsfile.groovy.

We can start the job and it will build the image, but unfortunately it fails since it tries to push the built image to a Docker registry which we haven’t added yet.

Add Nexus

Nexus is added as repository manager and Docker registry. It is configured in nexus.yml which we will also add to .env.

nexus.yml
version: "2.1"
services:
  repository:
    image: sonatype/nexus3:${NEXUS_TAG}
    restart: always
    ports:
      - ${NEXUS_HTTP_PORT}:8081
    volumes:
      - data:/nexus-data

volumes:
  data: {}
COMPOSE_FILE=nexus.yml:jenkins-agent.yml:jenkins-master.yml (1)
  1. We added nexus.yml.

I have separated the Nexus configuration so we can have a plain artifact manager for deploying JAR, WAR, etc. but can optionally also add a Docker registry. So we are adding one more compose file configuring the Docker registry.

nexus-docker-registry.yml
version: "2.1"
services:
  repository:
    ports:
      - ${NEXUS_DOCKER_REGISTRY_PORT}:5000 (1)
    volumes:
      - docker-data:/nexus-docker-data (2)

volumes:
  docker-data: {}
  1. Bind a host port where the Nexus Docker registry connector will be available

  2. Add a volume for the Docker registry blob store

COMPOSE_FILE=nexus-docker-registry.yml:nexus.yml:jenkins-agent.yml:jenkins-master.yml (1)
  1. We added nexus-docker-registry.yml.

Before we can update the containers we have to complete the environment and add a few more variables: NEXUS_TAG, NEXUS_HTTP_PORT and NEXUS_DOCKER_REGISTRY_PORT (the latter set to 15000, matching the DOCKER_REGISTRY address used by the pipeline).

Updating the containers
docker-compose up -d

We are now able to access the Nexus web UI which we use to complete the Nexus configuration. We have to set up the Docker registry connector. This is one of the steps that I haven’t yet scripted using the Nexus script support.

The following items have to be added in order to have a Docker registry running within Nexus.

Table 1. Setup blob store with
Field	Value

Table 2. Create Docker repository and connector with
Field	Value
Repository connector	HTTP, port 5000
Blob store	the blob store created in Table 1
Deployment policy	Disable redeploy

We disallow redeploy to avoid Docker latest style image tags.

Earlier we had configured the environment variable DOCKER_REGISTRY which will be used by the pipeline to address the Docker registry. We had set its value as docker-registry:15000. So we add a DNS entry for resolving docker-registry to be more flexible as we eventually move the containers around.

Add docker-registry to /etc/hosts
127.0.0.1 docker-registry

We are not using SSL to secure the connector, so we have to register the Docker registry with the Docker daemon as an insecure registry.

Docker daemon configuration

Depending on your Docker installation you will have to use different approaches to configure the daemon. For Docker for Mac/Windows use the preferences dialog. When using Docker directly on a Linux host please read up at https://docs.docker.com/registry/insecure/.
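On a Linux host this typically comes down to an entry in /etc/docker/daemon.json (a sketch; the host name and port match the docker-registry:15000 value used by the pipeline), followed by a daemon restart:

```json
{
  "insecure-registries": ["docker-registry:15000"]
}
```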

Finally we have to fix permissions for the Docker registry blob of Nexus.

Fixing Nexus blob permissions
docker-compose exec --user root repository bash -c "chown -R nexus:nexus /nexus-docker-data/"

We are almost there; once more we update the containers and have our infrastructure up and running.

Updating the containers
docker-compose up -d


We are now able to successfully run the pipeline, it will build a Docker image, run an application, run some integration tests and on success push the Docker image to the Docker registry.

Given that we have spun up quite complex server containers, I have demonstrated that with docker-compose this task becomes quite simple. We have a build infrastructure that is fully versionable and can be run on a production system, but also on development hardware, for example for developing and testing build scripts.

In the next blog post I will show how to setup the pipeline using the Jenkins pipeline DSL.

Continuous Delivery