Docker Community Forums


Create a Jenkins pipeline to build java app then docker-compose up

Hello everyone,

I’m relatively new to Docker and, after an initial period of trial and error and other frustrating steps, I’m now quite confident in building images with a Dockerfile and orchestrating two or more Dockerfiles with docker-compose.

I’m here to try to understand, at a high level, whether what I’m trying to achieve is actually doable:

I want to create a Jenkins pipeline that retrieves my Java code from a Git repo, builds, tests, and packages the application, and then runs the docker-compose.yaml file that orchestrates the containerization of my app together with a custom MySQL Dockerfile.

What I have done so far is write a reliable docker-compose.yaml file that builds the images from the Dockerfiles and then runs those containers together. Everything works as expected.

So far I have a cloud server with only Docker running on it; Docker has two “major” (pardon my terminology) containers always running: one with Traefik (which acts as a reverse proxy) and another with Jenkins. Every time I need to push a new application to the cloud (mainly a Spring Boot JAR application in one container and MySQL in another, organized together by a docker-compose file), I copy the JAR file over and run the properly configured docker-compose file.

Is this feasible? I mean, at a high level, is it actually possible to tell Jenkins (or any other CI tool) to run docker-compose after it has packaged the application, and to run all those containers on the same Docker host (so Traefik can easily route the traffic)?
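To make it more concrete, this is roughly the kind of pipeline I have in mind (just a sketch; it assumes a Maven build, a docker-compose.yaml in the repository root, and that the Jenkins agent can run Maven and docker-compose against the host’s Docker daemon):

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // pull the Java project from the Git repo configured for the job
                checkout scm
            }
        }
        stage('Build & Test') {
            steps {
                // build, test and package the Spring Boot jar
                sh 'mvn clean package'
            }
        }
        stage('Deploy') {
            steps {
                // rebuild the images and (re)start the containers on this Docker host
                sh 'docker-compose up -d --build'
            }
        }
    }
}

If this is the right direction, I guess the tricky part is giving the Jenkins container access to the host’s Docker daemon (e.g. by mounting /var/run/docker.sock), so that the new containers end up on the same host Traefik is routing for.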

Hope it’s clear enough to open a discussion
Many thanks to all of you
Best regards
Luca

PS
I’ve specified the technologies I’m using just to make everything clearer, but my question is not strictly tied to this particular stack.


This is a long process; our goal is to automate all of these things completely. This repo contains the files and configuration details that are used while creating the image. Once the image is created and running, we have:

A default admin/admin user created
Plugins installed
Credentials for GitHub and Docker created
A pipeline job named sample-maven-job created
If we check out the source code and run tree, we can see the following structure:

jagadishmanchala@Jagadish-Local:/Volumes/Work$ tree jenkins-complete/
jenkins-complete/
├── Dockerfile
├── README.md
├── credentials.xml
├── default-user.groovy
├── executors.groovy
├── install-plugins.sh
├── sample-maven-job_config.xml
├── create-credential.groovy
└── trigger-job.sh
Let’s see what each file does:

default-user.groovy - This is the file that creates the default admin/admin user.
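In essence, it does something like the following (a simplified sketch of the usual init-script approach; the actual file in the repo may differ in details):

import jenkins.model.Jenkins
import hudson.security.HudsonPrivateSecurityRealm
import hudson.security.FullControlOnceLoggedInAuthorizationStrategy

def instance = Jenkins.getInstance()

// create the admin/admin account in Jenkins' own user database
def realm = new HudsonPrivateSecurityRealm(false)
realm.createAccount('admin', 'admin')
instance.setSecurityRealm(realm)

// logged-in users can do anything, anonymous read access is disabled
def strategy = new FullControlOnceLoggedInAuthorizationStrategy()
strategy.setAllowAnonymousRead(false)
instance.setAuthorizationStrategy(strategy)

instance.save()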

executors.groovy - This is the Groovy script that sets the number of executors on the Jenkins server to 5. A Jenkins executor can be thought of as a single process that allows a Jenkins job to run on a given agent machine.
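Setting the executor count from an init script essentially boils down to this (sketch):

import jenkins.model.Jenkins

// raise the number of build executors on the built-in node to 5
def instance = Jenkins.getInstance()
instance.setNumExecutors(5)
instance.save()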

create-credential.groovy - A Groovy script for creating credentials in the Jenkins global store; it can be used to create any credential there. Here it is used to create the Docker Hub credentials: change the username and secret entries in the file to your own Docker Hub username and password. The file is copied into the image and run when the server starts up.
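The gist of such a script is to build a username/password credential and add it to the global store. A simplified sketch (the id "dockerhub" and the placeholder values below are just examples, not necessarily what the repo uses):

import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.SystemCredentialsProvider
import com.cloudbees.plugins.credentials.domains.Domain
import com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl

// username/password credential for Docker Hub; replace the placeholders
def dockerHub = new UsernamePasswordCredentialsImpl(
    CredentialsScope.GLOBAL,
    'dockerhub',               // id referenced later from the pipeline (example value)
    'Docker Hub account',      // description
    'your-dockerhub-user',     // username placeholder
    'your-dockerhub-password'  // password/token placeholder
)

// add it to the global credentials store
SystemCredentialsProvider.getInstance().getStore()
    .addCredentials(Domain.global(), dockerHub)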

credentials.xml - This is the XML file that contains our credentials, for both GitHub and Docker. A credential entry looks like this:

<com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
  <scope>GLOBAL</scope>
  <id>github</id>
  <description>github</description>
  <username>jagadish***</username>
  <password>{AQAAABAAAAAQoj3DDFSH1******</password>
</com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>

In the snippet above, we have the id (“github”), the username, and the encrypted password. The id is very important, as we will reference it in the pipeline job that we create.
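For example, the checkout stage of the pipeline job can reference that id directly (a sketch; the repository URL below is only a placeholder):

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // 'github' is the credential id defined in credentials.xml above
                git url: 'https://github.com/your-account/sample-maven-app.git',
                    credentialsId: 'github',
                    branch: 'master'
            }
        }
    }
}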