Hello,
We are using Docker 1.12, which is the stable release on Red Hat, and we are planning to set up Docker containers as Jenkins slaves to run the builds.
After some research I found that you can enable the Docker API to listen on a port and have the Jenkins server talk to it.
Do we have any better solutions for creating Jenkins slaves using Docker? And if using the API is the best way, do I need to use any certs to make it secure, and if so, how do I do that?
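For the certs part, the Docker documentation covers protecting the daemon socket with TLS. A condensed sketch of that setup (hostnames, IPs and file names here are examples; on RHEL 7 you would normally put the daemon flags in a systemd drop-in for docker.service rather than running dockerd by hand):

```bash
# On the Docker host: create a CA (you will be prompted for its details),
# then a server certificate signed by it.
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem

openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=docker-host.example.com" -sha256 -new -key server-key.pem -out server.csr
echo subjectAltName = DNS:docker-host.example.com,IP:10.0.0.5 > extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -extfile extfile.cnf

# Client certificate for the Jenkins master:
openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -extfile extfile-client.cnf

# Start the daemon so it only accepts TLS-authenticated connections:
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem \
  --tlskey=server-key.pem -H tcp://0.0.0.0:2376

# Verify from the Jenkins master:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://docker-host.example.com:2376 version
```

With --tlsverify set, the daemon rejects any client that does not present a certificate signed by your CA, so only machines holding cert.pem/key.pem (i.e. the Jenkins master) can drive it.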
Thanks for the reply. How do I connect the Docker daemon to the Jenkins master? Let me put it a better way: I need Docker containers to be used as slaves. How do I establish a connection from the Jenkins master to a Docker container?
Yes, I have created a slave on a RHEL 7 host and also on a Windows machine. On RHEL 7 I installed the JDK, created a jenkins user with sudo permissions, and connected it to the master via SSH; that all works for me.
But in this case we are using Docker to run the builds.
What is the Docker API?
The Engine API is an HTTP API served by Docker Engine. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API.
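To make that concrete, here are equivalent ways to list running containers, through the docker CLI and through the API directly with curl (the TCP endpoint assumes the TLS setup above; the hostname is a placeholder):

```bash
# Via the CLI (which itself calls the Engine API under the hood):
docker -H tcp://docker-host.example.com:2376 --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem ps

# The same call straight against the API:
curl --cacert ca.pem --cert cert.pem --key key.pem \
  https://docker-host.example.com:2376/containers/json

# Or against the default local UNIX socket (needs curl 7.40+):
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```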
Oh, so that is how Docker commands get sent to the Jenkins agent host, and IT is what starts the containers on the slave…
The 1st and 3rd models have the Jenkins master start containers on some other (worker) system by issuing Docker commands.
The containers start and stop with each job: a container processes one build, then dies.
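Roughly, per build, the master issues something like the following against the worker's daemon (image name, workspace path and host below are illustrative):

```bash
# Start a throwaway container for one build; --rm removes the container
# (and its writable filesystem) as soon as the build command exits.
docker -H tcp://worker.example.com:2376 run --rm \
  -v /var/jenkins/workspace/my-job:/workspace -w /workspace \
  my-registry/maven-build:latest \
  mvn clean package
```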
The 2nd is to start some number of slaves in advance using the Jenkins slave Docker image.
Each of those is a slave like normal and pulls jobs to execute.
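For that model you just pre-start a pool of agent containers that dial back to the master over JNLP; jenkinsci/jnlp-slave was the stock image of that era (the URL, node names and secrets below are placeholders; each secret comes from the corresponding node's page on the master):

```bash
# Two long-lived slaves that register themselves with the master and
# then sit there pulling jobs, like any other Jenkins node.
docker run -d --name slave-1 jenkinsci/jnlp-slave \
  -url http://jenkins-master.example.com:8080 <secret-for-slave-1> slave-1
docker run -d --name slave-2 jenkinsci/jnlp-slave \
  -url http://jenkins-master.example.com:8080 <secret-for-slave-2> slave-2
```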
In either case, SOMEONE needs to build images with all the proper build tools for the different jobs… same as always.
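That image-building step is ordinary docker build work; for example, baking a Maven tool-chain into a build image might look like this (base image, registry and tags are just an example):

```bash
# One image per tool-chain, pushed to a registry the workers can reach.
mkdir -p maven-build && cat > maven-build/Dockerfile <<'EOF'
# Build tools baked in: JDK 8 + Maven 3 from the official image.
FROM maven:3-jdk-8
# Run builds as a non-root user, matching what Jenkins expects.
RUN useradd -m jenkins
USER jenkins
EOF
docker build -t my-registry/maven-build:latest maven-build
docker push my-registry/maven-build:latest
```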
The Docker plugin makes it clear which image is being used for which build type.
The "run a build agent, let it take jobs" model is exactly like it is today, except you can run multiple agents on one system, which you can't do with a normal Jenkins slave.
I think the 1st puts the responsibility on the dev team (it is part of the Jenkins job);
mine puts it on the Jenkins admin (start more slaves of type x)…
One last question: could you explain how the 1st approach is more on the dev side and the 3rd (i.e. your approach) is more on the Jenkins admin side, with a basic day-to-day example…
so that I can present it in a better way? Sorry for so many questions.
The 1st approach has the steps in the build job itself, usually set up by developers.
The last is new build agents, which is an admin job in all the places I have been. Then the build is pointed at one of those systems. If you want a new Java version on that agent, the system guy has to install it.
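A day-to-day illustration of the difference, assuming the job's build step is a plain shell script (image names are examples):

```bash
# 1st approach: the developer owns this line inside the job's
# "Execute shell" build step. Want a newer JDK? The dev just bumps
# the image tag in the job -- no ticket to the Jenkins admin.
docker run --rm -v "$WORKSPACE":/src -w /src maven:3-jdk-8 mvn clean package

# 3rd approach: the job contains only `mvn clean package` and is tied
# to an agent label. A newer JDK means the Jenkins admin rebuilds the
# slave image and starts more slaves of that type.
```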
Personally I prefer the ephemeral-slave approach, which is the one the Riot Games series of posts describes. We use the YADP (Yet Another Docker Plugin) plug-in, which takes care of the slave container life-cycle; other than that, your existing pipeline jobs will run unchanged. You can create as many different flavours of slave build image as you need (so you can optimise for particular tool-chains), and you can run as many slaves as you need: when your job is done, the slave container just disappears and all resources are reclaimed. Scaling out is also straightforward; you just configure multiple YADP clouds, or you can hook up Docker Swarm as a scheduler.

TBH both approaches have merit, but for me the highly disposable, short-lived nature of the ephemeral container fits more naturally with Docker.