Docker Community Forums

Share and learn in the Docker community.

Using docker in a dockerized Jenkins container

(Rogier Lommers) #1

The topic name sounds a bit silly, but this is what I want:

  • I’m using a quite simple Dockerfile, based on the official Jenkins Dockerfile.

    FROM jenkins
    USER root
    RUN apt-get update && apt-get install -y && rm -rf /var/lib/apt/lists/*
    USER jenkins

It’s just the basic Jenkins container, with an additional package installed. After the container starts, I have a fine working Jenkins environment. Now I want that Jenkins environment to monitor a Git repository for changes. Once a change occurs, the Jenkins environment should start a “docker build” and a “docker run”. In other words: Jenkins should start a Docker container which will be used for compiling the source code.

The problem I’m facing is that the Docker daemon is not started inside my container. So my rather simple question is: how can I change the above-mentioned Dockerfile so that it starts/runs the Docker daemon before starting the Jenkins instance?

Just some more clarification, what I want is: when I run “ps aux | grep docker” inside the container, I want to see the docker daemon running.

(Sven Dowideit) #2

You have at least 2 options.

Docker-in-Docker is doable, or you could give your Jenkins access to your Docker socket, either by using a TCP-based socket, or (better imo) by giving it the Unix socket using a bind mount.


docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker) ubuntu bash

will let your container run the Docker client, talking to the default Unix socket, and you can script from there.
I note that the linked page suggests the insecure option of making the raw socket available on all network interfaces, so that could do with updating.

Mmm, and then there’s the official Jenkins Docker image; maybe something can be added to it.

(Michael Neale) #3

Yes, like @sven said: bind-mount in the Docker socket, ensure the docker client is installed, and you should be good to go. You can use docker run, docker build, etc. from freestyle build steps. You can also use the Docker Build and Publish plugin, and probably a few others (the Docker plugin, as mentioned, does talk via HTTP, which is unfortunate at this time). But it sounds like you only want to run docker commands, so Sven’s solution is what I use myself (I am one of the maintainers of the official Jenkins image).

(Rogier Lommers) #4

Wow, thanks! I didn’t know that sockets could be mounted. It looks like it almost works now. I’m starting the Jenkins container with the following command:

docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker) -p 8080:8080 -v /smb/jenkins_home:/var/jenkins_home jenkins-docker

Jenkins is running and “sees” a change in the repository. It then tries to build and run a Docker container using the bind-mounted Docker socket. Unfortunately I receive an error:

docker: error while loading shared libraries: cannot open shared object file: No such file or directory

Some more ideas? I’m using the following versions:

[dev-server-rogier][~] → docker version
Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
Server version: 1.0.1
Server API version: 1.12
Go version (server): go1.2.1
Git commit (server): 990021a
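A plausible explanation (an assumption, not confirmed in this thread): the docker binary bind-mounted from the host is dynamically linked, and some of the shared libraries it needs don’t exist inside the container. A quick way to check from inside the container is `ldd`; a sketch, using /bin/sh as a stand-in for the mounted binary’s path:

```shell
# Count unresolved shared-library dependencies of a binary.
# /bin/sh stands in here; substitute the path of the bind-mounted
# docker binary when running this inside the container.
missing=$(ldd /bin/sh 2>/dev/null | grep -c 'not found' || true)
echo "missing libraries: $missing"
```

A non-zero count would point at exactly the “error while loading shared libraries” failure described above.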

(Colin LEVERGER) #5

Hello all,

I have exactly the same issue and get the same error with the latest version of Docker (on Debian 8.0).

Did you finally find a solution to this silly message?


(Colin LEVERGER) #6

I finally found a solution. It’s a hack, but it actually works well.

What you can do is run docker over SSH, using a user with sudo rights (if you need to restart / create / update a container, etc.).

If you are interested in this solution, you should take a look at the link and follow the procedure…

It’s working in my case for automatic website deployment, triggered by pushes / changes on GitHub…

Enjoy!

(Erick Daniszewski) #7

There is some discussion elsewhere on why the error occurs.

The workaround I used to get this working was to install lxc within the Dockerfile for the container being run:

RUN apt-get update && apt-get install -y lxc

This should make the missing shared libraries available within the container.

(Krasi) #8

In general it is better to use only a Docker client inside the container and mount /var/run/docker.sock, so you can run commands against the Docker daemon installed on the host itself.

There is a Docker image on the Hub for this.

Also, you might find useful this step-by-step tutorial I did for a Jenkins and Docker pipeline.
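As a sketch of what such a pipeline can look like (a hypothetical Jenkinsfile, not taken from the tutorial above; it assumes the Jenkins container has a docker client and the host socket mounted as described earlier, and the image name and test command are placeholders):

```groovy
// Hypothetical declarative pipeline: each sh step talks to the host
// daemon through the bind-mounted /var/run/docker.sock.
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Run container') {
            steps {
                sh 'docker run --rm myapp:${BUILD_NUMBER} make test'
            }
        }
    }
}
```

Note that the containers these steps start run as siblings of the Jenkins container on the host, not inside it.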

(Huangyanxiong2012) #9

How do I resolve this error?

docker: error while loading shared libraries: cannot open shared object file: No such file or directory

(Ponchofiesta) #10

I just set up Jenkins in a container, with further containers started by Jenkins. I installed the whole set of Docker packages into the Jenkins container. I’m pretty sure not all of it is needed, as the Docker engine runs outside of this container, but this way the docker command works.

The essential parts from my Dockerfile:

FROM jenkins/jenkins:2.73.2

# install docker, docker-compose, docker-machine
# see:
# see:
# see:

USER root

# prerequisites for docker
RUN apt-get update \
    && apt-get -y install \
        apt-transport-https \
        ca-certificates \
        curl

# docker repos
RUN curl -fsSL | apt-key add - \
    && echo "deb [arch=amd64] xenial stable" >> /etc/apt/sources.list.d/additional-repositories.list \
    && echo "deb xenial main restricted universe multiverse" >> /etc/apt/sources.list.d/official-package-repositories.list \
    && apt-key adv --keyserver --recv-keys 437D05B5 \
    && apt-get update

# docker
RUN apt-get -y install docker-ce

# docker-compose
RUN curl -L`uname -s`-`uname -m` -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose

# give jenkins docker rights
RUN usermod -aG docker jenkins

USER jenkins

And add the volume /var/run/docker.sock:/var/run/docker.sock. I do it via docker-compose.yml. But keep in mind that all Docker containers run directly on the host, not inside the Jenkins container, so you will get some problems with path names if you use further volumes.
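A minimal docker-compose.yml along those lines might look like this (a sketch; the service name, ports, and named volume are assumptions, not taken from the post above):

```yaml
# Sketch: Jenkins with the host's Docker socket bind-mounted in.
version: "2"
services:
  jenkins:
    build: .                # the Dockerfile shown above
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - jenkins_home:/var/jenkins_home
volumes:
  jenkins_home:
```

Because the socket is the host daemon’s, any host paths referenced in further volume mounts must exist on the host, which is the path-name problem mentioned above.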

(Douglas Wagner) #11

So, really, unless you’re trying to run Docker-in-Docker (it works, but is likely not the recommended thing to do just for accessing the docker client), your best bet is to install the Docker client binaries into your container instead of the entire Docker server stack.

The “RUN” stanza for the Dockerfile:

ARG DOCKER_CLIENT=“docker-17.06.2-ce.tgz”

RUN cd /tmp/ \
&& curl -sSL -O${DOCKER_CLIENT} \
&& tar zxf ${DOCKER_CLIENT} \
&& mkdir -p /usr/local/bin \
&& mv ./docker/docker /usr/local/bin \
&& chmod +x /usr/local/bin/docker \
&& rm -rf /tmp/*

All in one step so you don’t bloat your image. This keeps only the Docker client binary and drops all of the rest of the cruft from the package. Because it’s being pulled from a tarball, you’re not going to get any missing-library issues, since it’s statically linked. A further benefit: this works on any OS, not just Ubuntu, meaning if you ever change the base OS your Jenkins build is built on (or have a corporate standard and roll your own, or use the Alpine version instead of the Ubuntu version, or…) you don’t have to change the way you run Jenkins, Docker, or the client utilities.

As a second recommendation, if security is a thing for you, I REALLY recommend using the TCP network API, secured through TLS, and just baking the certificates into your Jenkins installation. You can then set “ENV DOCKER_HOST=tcp://docker_host:2376” in your Dockerfile. Why? The /var/run/docker.sock socket is pretty much root access to your entire Docker cluster: not just the server the socket file is running on, but, if you’re running Swarm, ALL systems on the ENTIRE swarm. If any server is broken into or a container is compromised with the socket available, the attacker has access to your entire swarm, given they’re smart enough to work around Docker’s rather basic scheduler.

With the network API enabled and TLS-secured, the ONLY attack vector for your swarm becomes the Jenkins container itself, because it’s the only thing that has access to the TLS certificates for the network API, and users can’t (easily, at least) start another container with that same access, as they can’t start a new container on the host with the certs in it.

In MOST installations, admins will keep the socket available and turn on the network API as a second access point. However, in a REALLY secure installation you can do the reverse, and even keep the root account on the command line from accessing the swarm itself (without reconfiguration/restart of Docker). While we don’t do this often, I have found it to be an excellent way to keep a development team with login access (for other reasons) from being able to start/stop containers other than through an approved UI such as Jenkins jobs. YMMV.
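A sketch of what that could look like in the Jenkins Dockerfile (the hostname, port, and certificate path are placeholders, not values from the post above):

```dockerfile
# Sketch: point the Docker CLI at a TLS-secured daemon instead of the
# local socket. docker_host:2376 and /certs/client are placeholders.
ENV DOCKER_HOST=tcp://docker_host:2376 \
    DOCKER_TLS_VERIFY=1 \
    DOCKER_CERT_PATH=/certs/client
# Bake the client certificates into the image (ca.pem, cert.pem, key.pem).
COPY certs/ /certs/client/
```

With these set, plain docker commands in Jenkins jobs go over TLS to the remote daemon with no socket mount at all.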

(Radumir) #12


I tried to insert your snippet in my Dockerfile and it does not work.
The error is:

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now

The problem is at the tar extraction.

I started the Dockerfile from jenkins/jenkins:latest.

No clue what utility to add to the chain to make it work.

I’m very interested in the approach of using the docker client inside the inner container that uses docker support from the outside by sharing /var/run/docker.sock.

Thanks for your attention,

Please help! Any hint/idea/tip will be appreciated!

(Sujituk) #13

Change this line from:
ARG DOCKER_CLIENT=“docker-17.06.2-ce.tgz”
to:
ARG DOCKER_CLIENT=docker-17.06.2-ce.tgz

I found the double quotes were an issue.
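To see why: the typographic quotes are not quote characters to the parser, so they become part of the value itself, and curl then requests a filename that literally contains them, returning something that isn’t a gzip archive. A shell sketch of the same effect:

```shell
# Straight quotes are stripped during assignment; typographic quotes are not.
PLAIN="docker-17.06.2-ce.tgz"
CURLY=“docker-17.06.2-ce.tgz”
echo "$PLAIN"
echo "$CURLY"   # the “ and ” characters are part of the value
```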

(Sujituk) #14

Thanks @douglasw0 this was very helpful.

(Aj07mm) #15

Thanks man! It works perfectly!

(Skyblade) #16

@douglasw0 Do you mean this setup?
I don’t see any Network API here.