Think inside the box — CI/CD with Docker containers

We live in times where continuous integration and continuous deployment are de facto standards in the development of medium-sized and bigger applications. At the same time, there is a visible tendency to divide applications into small components that run more or less independently of each other. Those components are called microservices, and they tend to run inside Docker containers. Now, imagine connecting these two trends by setting up CI/CD with the help of Docker. This entry is precisely about that! I will try to document how we managed to set up continuous integration and deployment for a Spring Boot app, using Jenkins, Nexus and SonarQube.

Note: I do not consider myself an expert in DevOps, not even a beginner — this post is solely for documentation purposes, and probably only those who are completely new to Docker or CI setups will find it useful.

What is a Docker container?

The easiest way to explain what a Docker container is, is to describe what it allows you to do. With the help of Docker, you can create virtual environments that run your apps and are separated from your system. Using them feels similar to using virtual machines, but people often get triggered when you compare containers to virtual machines, so please remember that those are two different beasts! If you want a more detailed explanation of what a Docker container is, you may want to watch LiveOverflow's YouTube tutorial on that. For this post, it suffices to say that containers are environments for your app that are separated from the host environment.
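
If you already have Docker installed, you can get a feel for this isolation in seconds (a throwaway experiment, not part of the setup we are building here):

# start a disposable Alpine Linux container with an interactive shell;
# --rm removes the container when you exit, and nothing you do inside
# touches the host system
docker run --rm -it alpine sh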

Our goal

We have a BitBucket repository with backend code and a separate one with frontend code. Now, we want to set up a system that will pull new code from those repositories when triggered, build and test it, run static analysis, and deploy it if everything works. What tools are we going to need?

To set up and run pipelines like the one described above, we are going to use Jenkins. SonarQube will be used for static analysis of the code, and application builds will be pushed to a Nexus repository. All of those services will run inside separate containers, and the app will also be deployed to a dedicated container. Containers everywhere!

Setting up Jenkins

Let us start with probably the most important part of our setup, that is, Jenkins. After a quick DuckDuckGo search, you will find the Jenkins image on DockerHub. To start it, you can simply run the following command:

docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts-jdk11

What it does is create and run a container from the Jenkins image with the tag lts-jdk11, forward the appropriate ports to the host machine so you can access them (note that you can map the native Jenkins port to a different one by writing 8082:8080 or something like that), and use a Docker volume named jenkins_home. It is REALLY important to use a Docker volume if you want to keep your Jenkins data persistent. Without the volume, you would lose all Jenkins data whenever the container stopped running, which is undesirable.
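
As a small aside, you can ask Docker where the volume's data actually lives on the host:

# show metadata for the jenkins_home volume, including its mountpoint
# on the host filesystem
docker volume inspect jenkins_home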

If you go ahead and run this command, you should see that Jenkins is running, and you can access it at localhost:8080.

Now, you can go back to the terminal and copy the initial admin password to unlock Jenkins. Then you will be asked to select the extensions you wish to install and to create an account. The recommended extensions are fine for now.
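
If the password has already scrolled out of your terminal, Jenkins also keeps it in a file inside the container. Assuming your container is named jenkins (we will give it that name a bit later in this post), you can read it like this:

# print the initial admin password from inside the running container
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword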

Once we are done with the basic stuff, we can create a pipeline for our project. To do this, click New item, select Multibranch pipeline (we want to have separate configs for different git branches), enter a name for your project and fill out the details:

  • Branch Sources: git
  • Paste the git repository URL into Project Repository
  • Behaviors -> Discover Branches -> Add -> Filter by name (regular expressions)

You may leave the rest as it is, but do configure which branches the Jenkins pipeline should run on, for example with a filter like the one below.
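
An illustrative filter (adjust it to your own branching model) that matches master, develop and all feature branches could look like this:

^(master|develop|feature/.*)$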

Once you click “Save”, you will notice that Jenkins starts scanning the repository for a file named Jenkinsfile. This is because we set this whole thing up in such a way that Jenkins reads what it should do from this file. That way, we can have a different Jenkinsfile on each branch of our repository, so that Jenkins behaves differently on different branches.

Ok cool, so Jenkins is running, but you have probably noticed that you cannot close the terminal in which we started Jenkins, because that would immediately stop it. To solve this problem, we can run the container in detached mode by using the -d flag. Oh, and while we are modifying the Docker command, let's also add a name for our container. So, stop Jenkins in the terminal and run this:

docker run -d --name jenkins -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts-jdk11

As you can see, you can freely use your terminal now and Jenkins is just fine (notice that, thanks to the volume, we did not lose the config)! But is it really running? To see what containers are up, you can enter docker ps. You should see the names of your containers, what images they are using and the exposed ports. Neat stuff!
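
Even in detached mode, you can still peek at what Jenkins is printing:

# follow the log output of the detached jenkins container
docker logs -f jenkins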

Basic Jenkinsfile

As I mentioned above, Jenkins will try to read a file named Jenkinsfile from each scanned branch in our repository and do whatever this file tells it to do. So, let us prepare a simple Jenkinsfile:

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                script {
                    echo 'Build...'
                }
            }
        }

        stage('Test') {
            steps {
                script {
                    echo 'Test...'
                }
            }
        }

        stage('SonarQube analysis') {
            steps {
                script {
                    echo 'Running SonarQube...'
                }
            }
        }

        stage('Publish') {
            when {
                branch 'master'
            }
            steps {
                script {
                    echo 'Publish to Nexus...'
                }
            }
        }

        stage('Release') {
            when {
                branch 'master'
            }
            steps {
                script {
                    echo 'Release...'
                }
            }
        }

        stage('Deploy') {
            when {
                branch 'master'
            }
            steps {
                script {
                    echo 'Deploy...'
                }
            }
        }
    }
}

I think this setup is pretty much self-explanatory: we create a pipeline with a couple of stages, where the final ones will only be executed on the master branch. Of course, this file will be modified later, but you can already add it to the repository, run the pipeline and see that the stages we wanted to be executed will run.

Setting up Nexus and SonarQube — Docker Compose

Now that we have Jenkins up and running, let us create containers for the rest of the services we are going to use. You may be tempted to search for Docker images for those services and run them just like we started Jenkins, but that is not such a great idea (imagine having to do this every time something breaks). There is a better way to start a couple of containers, and it requires us to learn something about Docker Compose!

Docker Compose provides an easy way to set up and run more than one container at once, where the configuration for all those containers is stored in one file with a nice, human-friendly YAML format. This file is named docker-compose.yml. Composing containers is especially useful in situations where we want to set up containers that depend on each other and would not work separately. And we are precisely in this situation, because Jenkins should be able to communicate with Nexus and SonarQube in order to publish artifacts and send analysis requests. To make this possible, all of these containers should be in the same network. Let me show you how it looks in the configuration file (docker-compose.yml):

version: '3'
services:
  nexus:
    image: sonatype/nexus3:3.0.0
    volumes:
      - nexus_data:/sonatype-work
    ports:
      - 8081:8081
    networks:
      - mynetwork
  sonarqube:
    image: sonarqube:lts
    container_name: sonarqube
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    ports:
      - 9000:9000
    networks:
      - mynetwork
  jenkins:
    build: .
    container_name: jenkins
    user: root
    ports:
      - 8082:8080
      - 50000:50000
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - mynetwork
networks:
  mynetwork:
volumes:
  jenkins_home:
    driver: local
    name: jenkins_home
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
  nexus_data:

As you can see, we can describe multiple containers in one configuration file. I will not explain all the details, but here is a quick overview. Under services you list all the containers you want to set up (notice that all containers are assigned to a common network named mynetwork and that we set up volumes to keep our data persistent), and under networks you list all the networks you want to create. Same thing with volumes.

If you are good at pattern matching, you can probably see that all of our containers have an image property set, except Jenkins, which uses build: . instead. What is up with that?

It turns out that you can create your own Docker images, and what we are saying there is: build a custom image from the build instructions located in the same directory as the docker-compose file. This file is named Dockerfile and looks like this:

FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt update && apt install -y docker.io
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt

In this case, we are creating our custom image for Jenkins, based on the one we used before, but with Docker installed in it and Jenkins plugins set up, so we do not have to do that manually every time something fails. Very convenient!
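
The plugins.txt file itself is just a list of plugin IDs, one per line, optionally pinned to a version. The selection below is only an illustration, not a verified minimal set:

git:latest
workflow-aggregator:latest
docker-workflow:latest
sonar:latest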

Now that we have a better setup for all of our containers, let us stop the old Jenkins container that is probably still running. Assuming you have named your container jenkins, you can run the following commands:

docker container stop jenkins
docker container rm jenkins

To start our brand new composition of containers, run the following command in the directory with the docker-compose.yml and Dockerfile files:

docker-compose build && docker-compose up -d

This command will build the image described in the Dockerfile and then start all containers in detached mode.

Type docker ps to see that all containers are indeed up and running.
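
When you want to stop the whole composition, there is a matching command. Thanks to the named volumes, your data survives it:

# stop and remove all containers defined in docker-compose.yml
# (named volumes are kept unless you also pass -v)
docker-compose down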

Last steps — Deploy to the container

With all the containers up and running, we can come back to Jenkins and update our pipeline to make it actually do something. But before that: if you want to use SonarQube from Jenkins, you should install an additional plugin for it. Search for SonarQube Scanner in the Jenkins plugin manager and install it. Also, if you want to run some Jenkins commands inside temporary Docker containers (for example, to build a front-end app you can run npm commands inside a node:14 image), install the Docker Pipeline and Docker plugins.

Now that we have all the additional plugins installed, we can configure SonarQube in Jenkins. Navigate to Manage Jenkins, then click on Configure System and find SonarQube Servers there. Add a new installation with the name SonarQube and the following address: http://sonarqube:9000. Notice that we are not passing any IP addresses, but using the container's name from the docker-compose file instead.
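
You can verify that the name actually resolves by calling SonarQube's status endpoint from inside the Jenkins container (assuming curl is available in the image; install it with apt otherwise):

# check that jenkins can reach SonarQube over the shared Docker network
docker exec -it jenkins curl -s http://sonarqube:9000/api/system/status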

At this stage you may want to configure SonarQube and/or Nexus yourself. Assuming you have done what you need, let us proceed and prepare a Jenkinsfile that actually does something:

NOTE: I will not explain everything in this pipeline. You can see that we have prepared credentials in Jenkins, so that it can push new tags to the repository.

pipeline {
    agent any
    environment {
        BITBUCKET_COMMON_CREDS = credentials('jenkins-bitbucket-common-creds')
    }
    options {
        // This is required if you want to clean before build
        skipDefaultCheckout(true)
    }

    stages {
        stage('Build') {
            steps {
                script {
                    cleanWs()
                    checkout scm
                    sh 'git fetch --all --tags'
                    sh 'git tag'
                    echo 'Build...'
                    dir("backend") {
                        sh './gradlew clean build -x test'
                    }
                }
            }
        }

        stage('Test') {
            steps {
                script {
                    echo 'Test...'
                    dir("backend") {
                        sh './gradlew test'
                    }
                }
            }
        }

        stage('SonarQube analysis') {
            steps {
                withSonarQubeEnv('SonarQube') {
                    dir("backend") {
                        sh './gradlew sonarqube'
                    }
                }
            }
        }

        stage('Publish') {
            when {
                branch 'master'
            }
            steps {
                script {
                    echo 'Publish to Nexus...'
                    dir("backend") {
                        sh './gradlew uploadArchives'
                    }
                }
            }
        }

        stage('Release') {
            when {
                branch 'master'
            }
            steps {
                script {
                    echo 'Release...'
                    dir("backend") {
                        sh './gradlew release -Prelease.disableChecks -Prelease.pushTagsOnly -Prelease.customUsername=$BITBUCKET_COMMON_CREDS_USR -Prelease.customPassword=$BITBUCKET_COMMON_CREDS_PSW'
                    }
                }
            }
        }

        stage('Deploy') {
            when {
                branch 'master'
            }
            steps {
                script {
                    echo 'Deploy...'
                    // remove the previous deployment if it exists
                    try {
                        sh 'docker container stop backend'
                        sh 'docker container rm backend'
                        sh 'docker image rm backend-image:latest'
                    }
                    catch (err) {
                        echo "${err}"
                    }
                    sh 'chmod +x ./deploy.sh'
                    sh './deploy.sh'
                }
            }
        }
    }
}

As you can see, this pipeline mainly runs appropriate Gradle tasks. I am going to focus on the deploy stage, as it is more interesting.

This may seem weird, but what we are doing in the deploy stage is stopping the container that runs the previous version and building a new image and container for the current one. So yes, Jenkins, which itself runs inside a container, is creating and running a new container with the deployed app. That's true inception!
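
This works because docker-compose.yml mounts the host's /var/run/docker.sock into the Jenkins container, so the Docker CLI we installed in our custom image talks directly to the host's Docker daemon. You can see this for yourself:

# run the Docker CLI inside the jenkins container; because the host's
# socket is mounted, this lists the host's containers, jenkins included
docker exec jenkins docker ps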

Let me show you what the deployment script does:

#!/bin/bash

docker build --rm -t backend-image \
    --build-arg SPRING_DATA_MONGODB_HOST=$SPRING_DATA_MONGODB_HOST \
    --build-arg SPRING_DATA_MONGODB_PASSWORD=$SPRING_DATA_MONGODB_PASSWORD \
    --build-arg SPRING_DATA_MONGODB_PORT=$SPRING_DATA_MONGODB_PORT \
    --build-arg SPRING_DATA_MONGODB_USERNAME=$SPRING_DATA_MONGODB_USERNAME \
    --build-arg SPRING_DATA_MONGODB_DATABASE=$SPRING_DATA_MONGODB_DATABASE \
    --build-arg SWIFT_STORAGE_PWD=$SWIFT_STORAGE_PWD \
    --build-arg SWIFT_STORAGE_URL=$SWIFT_STORAGE_URL \
    --build-arg SWIFT_STORAGE_USER=$SWIFT_STORAGE_USER \
    --build-arg JWT_KEYSTORE_PWD=$JWT_KEYSTORE_PWD \
    -f ./Dockerfile .
docker run -it --init -d -p 8095:8095 --name backend backend-image

And the corresponding Dockerfile:

FROM adoptopenjdk/openjdk11
COPY . /home/gradle/bluedrive
WORKDIR /home/gradle/bluedrive/backend
RUN /home/gradle/bluedrive/backend/gradlew clean build -x test --no-daemon --warning-mode all
EXPOSE 8080
RUN mkdir /app
RUN cp /home/gradle/bluedrive/backend/build/libs/*.jar /app/blue-drive.jar

ARG SPRING_DATA_MONGODB_HOST
ARG SPRING_DATA_MONGODB_PASSWORD
ARG SPRING_DATA_MONGODB_PORT
ARG SPRING_DATA_MONGODB_USERNAME
ARG SPRING_DATA_MONGODB_DATABASE
ARG SWIFT_STORAGE_PWD
ARG SWIFT_STORAGE_URL
ARG SWIFT_STORAGE_USER
ARG JWT_KEYSTORE_PWD

ENV SPRING_DATA_MONGODB_HOST=$SPRING_DATA_MONGODB_HOST
ENV SPRING_DATA_MONGODB_PASSWORD=$SPRING_DATA_MONGODB_PASSWORD
ENV SPRING_DATA_MONGODB_PORT=$SPRING_DATA_MONGODB_PORT
ENV SPRING_DATA_MONGODB_USERNAME=$SPRING_DATA_MONGODB_USERNAME
ENV SPRING_DATA_MONGODB_DATABASE=$SPRING_DATA_MONGODB_DATABASE
ENV SWIFT_STORAGE_PWD=$SWIFT_STORAGE_PWD
ENV SWIFT_STORAGE_URL=$SWIFT_STORAGE_URL
ENV SWIFT_STORAGE_USER=$SWIFT_STORAGE_USER
ENV JWT_KEYSTORE_PWD=$JWT_KEYSTORE_PWD

ENTRYPOINT ["java", "-jar","/app/blue-drive.jar"]

We are basically building the app using Gradle and setting up the environment variables that it uses. Notice that we need to pass the environment variables through docker build arguments, which is a little bit inconvenient. I wish there was a way to directly access the host's environment variables in a Dockerfile…
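
One way around this inconvenience (a sketch of an alternative, not what our final setup uses) would be to drop the ARG/ENV pairs from the Dockerfile and inject the values when the container starts, since -e sets environment variables on the container at run time:

# pass runtime configuration when starting the container instead of
# baking it into the image at build time; the remaining variables
# follow the same pattern
docker run -d --init -p 8095:8095 --name backend \
    -e SPRING_DATA_MONGODB_HOST="$SPRING_DATA_MONGODB_HOST" \
    -e SPRING_DATA_MONGODB_PASSWORD="$SPRING_DATA_MONGODB_PASSWORD" \
    backend-image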

Given that we are running this script inside Jenkins, we need to provide Jenkins with all the environment variables the script requires. This can be done in the same place where we configured Jenkins for SonarQube.

So this is it! We now have a complete setup powered by Docker containers!

Summary and cheatsheet

Things to remember from this post:

  • Docker is cool and easy
  • Use volumes to keep your data persistent!
  • Connect containers that will speak to each other to the same network
  • If you want your container to use your host's network, start it on the network named host (e.g. docker run --network host <image>); a running container cannot be attached to the host network afterwards
  • Use docker-compose to set up multiple containers
  • Dockerfiles are used to describe your custom Docker image
  • Refer to other containers by their service name from docker-compose file, not by their IP addresses

If you want to debug a container, you can bash into it:

docker exec -it <container_name> bash

This way you can run terminal commands inside the container and, for example, check whether it can communicate with another container:

docker exec -it jenkins bash
> curl mongo:27017

Basic docker entities:

  • Containers — you know what they are
  • Images — bases for containers; multiple containers can run from the same image
  • Networks — used to connect containers with each other
  • Volumes — virtual drives stored on your host filesystem that are mounted by containers; they make container data persistent

General Docker usage:

docker <entity> <command> <name>

Examples:

  • docker network ls - list available networks
  • docker network inspect mynetwork - print info about a particular network
  • docker container stop jenkins - stop container named jenkins

If you can't remember some command, you can type just docker network and you will be shown all the available subcommands. Generally, the Docker CLI has really good built-in help.
