Continuous delivery infrastructure as code

10 May 2017, Martin Ahrer

This is part 1 of a series of posts covering Docker in a Continuous Delivery environment.

Today I’m showing how simple it is to set up a continuous delivery build pipeline infrastructure using Docker. In an upcoming post we will look at Jenkins pipeline as code, creating Docker images and running integration tests against Docker containers. The series will close with an article explaining how we can move all the containers built throughout this series of posts into a Docker Swarm environment.

We will be using the following components, tools and techniques:

  • Jenkins master (Jenkins 2.x with its pipeline support)

  • Jenkins agent

  • Sonatype Nexus 3 (as a Docker registry)

  • Docker in Docker (specifically DooD, Docker outside of Docker, where containers talk to the host’s Docker daemon)

  • docker-compose

  • Docker volumes for managing persistent data (pipeline jobs)

  • Docker networking

Finally, we will use this continuous delivery system to run a pipeline that builds and tests a Docker image for a simple Spring Boot-based web application.

Before we dive into the details, let me explain why we use docker-compose. As we are building a set of Docker containers, we will likely end up with a rather complex configuration. Running those containers and keeping them up to date would require quite a few lines of shell script.

docker-compose eliminates most of that shell scripting and provides a YAML-based format for describing container configuration and dependencies. It also comes with a CLI for fully controlling the life-cycle of images, containers, volumes, and other resources, and it is well suited for managing multiple environments such as development and testing. A minimal sketch is shown below.
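To make this concrete, here is a minimal sketch of such a descriptor covering two of the components listed above, plus the handful of CLI commands that drive the life-cycle. It is an illustration under assumptions, not the sample project’s actual file: the image tags, volume names, and the cd-net network name are invented for this example.

    version: '3'
    services:
      jenkins-master:
        image: jenkins/jenkins:lts          # assumed tag; the sample project may pin another image
        ports:
          - "8080:8080"
        volumes:
          - jenkins-home:/var/jenkins_home  # named volume keeps pipeline jobs across container rebuilds
        networks:
          - cd-net
      nexus:
        image: sonatype/nexus3              # acts as the private Docker registry
        ports:
          - "8081:8081"
        volumes:
          - nexus-data:/nexus-data
        networks:
          - cd-net
    volumes:
      jenkins-home:
      nexus-data:
    networks:
      cd-net:

The corresponding life-cycle commands then replace the shell scripts mentioned above:

    docker-compose up -d     # create and start all containers in the background
    docker-compose pull      # fetch newer images
    docker-compose down      # stop and remove containers; named volumes survive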

To run all the code yourself, just check out the sample project from GitHub and follow its instructions. All of the Docker related code for building the infrastructure is located at src/infrastructure/docker.

docker-compose best practices (part 2)

07 February 2017, Martin Ahrer

This is part 2 of a series of blog posts about docker-compose. This time we look at managing large numbers of compose projects.

When building complex infrastructure with docker-compose, we soon end up with a mess of scripts for starting, updating, and otherwise managing containers. I will describe an approach that has helped me get this done in a very structured way.
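The post’s full approach is not reproduced in this excerpt, but one building block docker-compose itself provides for keeping many projects orderly is the per-directory .env file, which pins the project name and the set of descriptors to load. The values below are hypothetical:

    # .env — read automatically by docker-compose from the project directory
    COMPOSE_PROJECT_NAME=buildserver                     # hypothetical name; isolates this project's containers
    COMPOSE_FILE=docker-compose.yml:jenkins-agent.yml    # colon-separated list of descriptors to merge

With such a file in place, a plain docker-compose up -d in each project directory picks up the right project name and file set, so the same command works uniformly across all projects.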

docker-compose best practices (part 1)

06 February 2017, Martin Ahrer

In this blog post we look at how to create modular compose projects.

With docker-compose we can describe a set of containers and container-related resources, such as networks and volumes, that make up an application. All of this usually goes into a single docker-compose.yml file.

As an application grows more complex, it is worth considering modularizing the compose descriptors. Instead of stuffing each and every item into docker-compose.yml, we can split individual containers out into separate files. This gives us the flexibility to define optional containers that we only load in certain environments, or simply to manage complexity, just as we do with ordinary source code; the sketch below shows how such split files are combined.
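Assuming hypothetical file names, the optional parts are pulled in by passing several -f options; later files extend and override earlier ones:

    # base services only
    docker-compose -f docker-compose.yml up -d

    # base services plus the optional agent container
    docker-compose -f docker-compose.yml -f jenkins-agent.yml up -d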

Let’s look at a real scenario: we have to run a Jenkins build server made up of a master and an agent. Below we find a typical compose descriptor, which can get really big as the number of containers it describes grows.
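The post’s original descriptor is not included in this excerpt; the following minimal sketch only illustrates the shape such a file takes. The image names and the master/agent wiring are assumptions:

    version: '2'
    services:
      jenkins-master:
        image: jenkins/jenkins:lts        # assumed image
        ports:
          - "8080:8080"
        volumes:
          - jenkins-home:/var/jenkins_home
      jenkins-agent:
        image: jenkinsci/ssh-slave        # assumed agent image; SSH key wiring omitted for brevity
        depends_on:
          - jenkins-master
    volumes:
      jenkins-home:

Every additional service, network, or volume adds another block here, which is exactly how a single descriptor grows unwieldy and why the paragraph above argues for splitting it up.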

Bean Mapping of Transfer Objects

08 March 2016, Martin Ahrer

In the past years I have worked on multiple projects where the so-called Data Transfer Object (DTO) pattern was used heavily. It has even been a core pattern in the JEE world. The pattern certainly has its justification in the right cases, but in many cases I have seen it applied inappropriately. This blog post by Adam Bien, a JEE advocate, outlines the cases where it should be considered useful. When applied, however, the pattern comes at the cost of additional code to maintain and some extra CPU cycles spent on the mapping.

In this post we take a brief look at some mapping frameworks (just enough to do simple bean mapping). Finally, we do some simple benchmarking to get an idea of what the performance cost of bean mapping is.
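To illustrate what bean mapping means here: the excerpt does not name the frameworks the post benchmarks, so ModelMapper merely stands in as one representative, and the Customer/CustomerDto types are invented for this example. A hand-written mapping and a reflection-based one, side by side:

    import org.modelmapper.ModelMapper;

    public class BeanMappingExample {

        // Domain entity with read-only accessors
        public static class Customer {
            private final String name;
            private final String email;
            public Customer(String name, String email) {
                this.name = name;
                this.email = email;
            }
            public String getName() { return name; }
            public String getEmail() { return email; }
        }

        // Data Transfer Object exposing only what the client needs
        public static class CustomerDto {
            private String name;
            private String email;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getEmail() { return email; }
            public void setEmail(String email) { this.email = email; }
        }

        // Hand-written mapping: plain method calls, but extra code to maintain
        static CustomerDto toDto(Customer customer) {
            CustomerDto dto = new CustomerDto();
            dto.setName(customer.getName());
            dto.setEmail(customer.getEmail());
            return dto;
        }

        public static void main(String[] args) {
            Customer customer = new Customer("Jane Doe", "jane@example.com");

            CustomerDto manual = toDto(customer);

            // Framework-based mapping: matches properties by name via reflection
            CustomerDto mapped = new ModelMapper().map(customer, CustomerDto.class);

            System.out.println(manual.getName() + " == " + mapped.getName());
        }
    }

Even this toy shows the trade-off the post sets out to measure: the manual version costs code, while the framework version removes that code but pays for reflection at runtime.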


Older posts are available in the archive.