Using Jenkins slaves in Docker containers on RHEL7 hosts

Hello,
We are using Docker 1.12, which is the stable release from Red Hat, and we are planning to set up Docker containers as Jenkins slaves to run our builds.

After some research, I found an approach that enables the Docker API to listen on a TCP port so the Jenkins server can connect to it.

Are there any better solutions for creating Jenkins slaves using Docker? If using the API is the best way, do I need to use certificates to secure it, and if so, how do I do that?

i don’t understand… a normal jenkins slave is a java application that starts, connects to the jenkins master, and waits for commands…

that jenkins slave should work in a container the same way…

Thanks for the reply. How do I connect the Docker daemon to the Jenkins master? Let me put it a better way: I need Docker containers to be used as slaves. How do I establish a connection from the Jenkins master to a Docker container?

Something like the link below:

I have proposed enabling the Docker API on a TCP port so that Jenkins can connect to the Docker daemon.

what is ‘docker-api’?

have you ever set up jenkins slaves on a different machine before?
the linked article does BOTH MASTER AND SLAVE in docker.

in this example, the jenkins master launches a container to do builds…

vs having a container always running (like a normal slave) that takes builds (like this)


and there is already an image built to do this

which do you want?

Yes, I have created a slave on a RHEL7 host and also on a Windows machine. On RHEL7 I installed the JDK, created a jenkins user with sudo permissions, and connected to the master via SSH; that all works for me.

But in this case we are using Docker to run the builds.

what is docker-api?
The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API.
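If you do expose the Engine API on TCP, it should be protected with TLS, since anyone who can reach the port effectively controls the daemon. Here is a minimal sketch of the daemon side, assuming Docker 1.12's /etc/docker/daemon.json support (the cert paths and hostnames are placeholders; Red Hat's packaging may instead configure the daemon through OPTIONS in /etc/sysconfig/docker):

```
# on the docker host: serve the API on 2376 with TLS, keep the local socket
cat > /etc/docker/daemon.json <<'EOF'
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem"
}
EOF
systemctl restart docker

# from the jenkins master: verify with a client cert signed by the same CA
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://docker-host.example.com:2376 version
```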


The one you sent looks good.

oh, that one sends docker commands to the jenkins agent, and IT starts containers on the slave…

the 1st and 3rd models have the jenkins master start containers on some other (worker) system by issuing docker commands.
the containers start and stop with each job: the container processes 1 build, then dies.

the 2nd is to start some number of slaves in advance using the jenkins slave docker image.
each is a slave like normal and pulls jobs to execute.

What is the best approach to suggest to the team, all things considered?

‘best’? it depends.

in either case, SOMEONE needs to build images with all the proper build tools for the different jobs… same as always.

the docker plugin makes it clear which image is being used for which build type.
the ‘run a build agent, let it take jobs’ model is exactly like today, except you could run multiple agents on one system, which you can’t with a normal jenkins slave.

i think the 1st puts the responsibility on the dev team (it’s part of the jenkins job)
mine puts it on the jenkins admin (start more slaves of type x)…

Thanks for the detailed explanation. To my understanding, you are using the method from the link provided below for your jenkins slaves?

yes, correct… normal jenkins slaves, but running in a container, started in advance

you will need to extend the container image by installing the build tools, just like you would normally
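for example, a minimal sketch assuming the jenkinsci/jnlp-slave image as the base (the image name and the tools installed are illustrative, not from this thread):

```
# build a slave image with maven baked in (the base already ships a JDK)
cat > Dockerfile <<'EOF'
FROM jenkinsci/jnlp-slave
USER root
RUN apt-get update && apt-get install -y --no-install-recommends maven \
    && rm -rf /var/lib/apt/lists/*
USER jenkins
EOF
docker build -t my-jnlp-slave-maven .
```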

Yep, this looks good. I will present the two options to them. Just for info, we have 4 hosts dedicated to slaves, so your approach should be fine, I guess.

Is the setup straightforward, or did you face any issues setting it up?

straightforward… you can have a slave running with just a docker run command

you define the slave node on the jenkins master, get the secret id string, and then start the container, passing that id string as a parameter.
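roughly like this, assuming the jenkinsci/jnlp-slave image (the master URL, secret, and node name below are placeholders you get from your own node definition):

```
# the JNLP secret comes from the node's page on the jenkins master
docker run -d --name agent1 jenkinsci/jnlp-slave \
  -url http://jenkins-master.example.com:8080 <secret> agent1
```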

Cool, can I run more than one slave on the same host with your approach? Would that make sense?

Yes, of course, assuming there is enough CPU, memory, and disk. Each slave can still have multiple executors.
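Each container just needs its own node definition (and therefore its own secret) on the master; a sketch, reusing the hypothetical image from earlier:

```
# a second agent on the same host: distinct container name, node name, secret
docker run -d --name agent2 my-jnlp-slave-maven \
  -url http://jenkins-master.example.com:8080 <secret-for-agent2> agent2
```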

One last question: could you explain how the 1st approach is more on the dev side and the 3rd (i.e. your approach) is more on the jenkins admin side, with a basic day-to-day example…

So that I can present it in a better way. Sorry for so many questions.

The 1st approach has the docker steps in the build job itself, usually set up by developers.
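For instance, a job's shell build step might drive docker directly; a sketch under that assumption (the image and build command are illustrative):

```
#!/bin/bash
# the job itself launches a throwaway container for one build
docker run --rm -v "$WORKSPACE":/src -w /src \
  maven:3-jdk-8 mvn -B clean package
```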

The last is new build agents, which is an admin job in all the places I have been. The build is then pointed at one of those systems. If you want a new Java version on that agent, the system guy has to install it.


Personally, I prefer the ephemeral slave approach, which is the one the Riot Games series of posts describes. We use the YADP plug-in, which takes care of the slave container life-cycle. Other than that, your existing pipeline jobs will run unchanged. You can create as many different flavours of slave build image as you need (so you can optimise for particular tool-chains). You can run as many slaves as you need, and when your job is done the slave container just disappears and all resources are reclaimed. Scaling out is also straightforward: you just configure multiple YADP clouds, or you can hook up Docker Swarm as a scheduler. TBH both approaches have merit, but for me the highly disposable and transient nature of the ephemeral container fits more naturally with docker.