Using docker in a dockerized Jenkins container

The topic name sounds a bit silly, but this is what I want:

  • I’m using a quite simple Dockerfile, based on the official Jenkins Dockerfile.

    FROM jenkins
    USER root
    RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
    USER jenkins

It’s just the basic Jenkins container with the installation of docker.io added. After the container starts, I have a working Jenkins environment. Now I want that Jenkins environment to monitor a Git repository for changes. Once a change occurs, Jenkins should start a “docker build” and a “docker run”. In other words: Jenkins should start a Docker container which will be used for compiling the source code.

The problem I’m facing is that the Docker daemon is not started inside my container. So my rather simple question is: how can I change the above-mentioned Dockerfile so it starts/runs the Docker daemon before starting the Jenkins instance? (Click here for the official Dockerfile used.)

[edit]
Just some more clarification of what I want: when I run “ps aux | grep docker” inside the container, I want to see the Docker daemon running.


You have at least two options.

Docker-in-Docker is doable, or you could give your Jenkins instance access to your Docker socket, either by using a TCP-based socket or (better, in my opinion) by bind-mounting the Unix socket.

e.g.,

docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker) ubuntu bash

will let your container run the docker client, talking to the default Unix socket 🙂 and you can script from there.
I note that https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin suggests the insecure option of making the raw socket available on all network interfaces, so that could do with updating.
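If you script against the mounted socket, a small guard like the following avoids confusing errors when the bind mount was forgotten. This is just a sketch; `socket_mounted` is a helper name invented for this example:

```shell
# Guard against a forgotten bind mount before calling the docker client.
socket_mounted() {
  # true only if the path exists and is a Unix socket
  [ -S "${1:-/var/run/docker.sock}" ]
}

if socket_mounted /var/run/docker.sock; then
  echo "host Docker socket is available; docker commands will reach the host daemon"
else
  echo "docker.sock not mounted; start the container with -v /var/run/docker.sock:/var/run/docker.sock" >&2
fi
```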

mmm, and then there’s http://jenkins-ci.org/content/official-jenkins-lts-docker-image - the official Jenkins LTS Docker image; maybe something can be added to it 🙂


Yes, as @sven said: bind-mount in the Docker socket, then ensure the docker client is installed, and you should be good to go. You can use docker run, docker build, etc. from freestyle build steps. You can also use the Docker Build and Publish plugin, and probably a few others (the Docker plugin, as mentioned, does talk via HTTP, which is unfortunate at this time). But it sounds like you only want to run docker commands, so Sven’s solution is what I use myself (I am one of the maintainers of the official Jenkins image).

Wow, thanks! I didn’t know that sockets could be mounted. It looks like it almost works now. I’m starting the Jenkins container with the following command:

docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker) -p 8080:8080 -v /smb/jenkins_home:/var/jenkins_home jenkins-docker

Jenkins is running and “sees” a change in the repository. It then tries to build and run a Docker container by using the bind-mounted Docker socket. Unfortunately, I receive an error:

docker: error while loading shared libraries: libapparmor.so.1: cannot open shared object file: No such file or directory

Any more ideas? I’m using the following versions:

[dev-server-rogier][~] → docker version
Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
Server version: 1.0.1
Server API version: 1.12
Go version (server): go1.2.1
Git commit (server): 990021a


Hello all,

I have exactly the same issue and get the same error, with the latest version of Docker (on Debian 8.0):

Did you finally find a solution to this silly message?

Thanks!

I finally found a solution; it’s a hack, but it actually works well.

What you can do is run Docker over SSH, using a user with sudo rights (if you need to restart / create / update a container, etc.).

If you are interested in this solution, you should take a look at http://blog.milehighcode.com/2014/02/execute-ssh-command-as-root-from.html and follow the procedure.

It’s working in my case for automatic website deployment, triggered by pushes / changes on GitHub.

Enjoy 😄

Some discussion on why the libapparmor.so.1 error occurs: https://github.com/docker/docker/issues/15024

The workaround I used to get this working was to install lxc within the Dockerfile for the container being run:

RUN apt-get update && apt-get install -y lxc

This should make libapparmor.so.1 available within the container.
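Putting this together with the Dockerfile from the original question, a sketch of the workaround might look like this (untested against current package versions):

```dockerfile
FROM jenkins
USER root
# docker.io provides the docker client; lxc pulls in libapparmor.so.1
RUN apt-get update \
    && apt-get install -y docker.io lxc \
    && rm -rf /var/lib/apt/lists/*
USER jenkins
```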

In general it is better to use only a Docker client inside the container and mount /var/run/docker.sock, so you can run commands against the Docker daemon installed on the host itself.

There is a Docker image on the Hub for this:
https://hub.docker.com/_/docker/

You might also find useful this step-by-step tutorial I did for a Jenkins and Docker pipeline.

How do I resolve this error?

docker: error while loading shared libraries: libltdl.so.7: cannot open shared object file: No such file or directory

I just set up Jenkins in a container, with further containers started by Jenkins. I installed the whole set of Docker packages into the Jenkins container. I’m pretty sure not all of it is needed, as the Docker engine runs outside of this container, but this way the docker command works.

The essential parts from my Dockerfile:

FROM jenkins/jenkins:2.73.2

# install docker, docker-compose, docker-machine
# see: https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/
# see: https://docs.docker.com/engine/installation/linux/linux-postinstall/
# see: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/

USER root

# prerequisites for docker
RUN apt-get update \
    && apt-get -y install \
        apt-transport-https \
        ca-certificates \
        curl \
        software-properties-common

# docker repos
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
    && echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" >> /etc/apt/sources.list.d/additional-repositories.list \
    && echo "deb http://ftp-stud.hs-esslingen.de/ubuntu xenial main restricted universe multiverse" >> /etc/apt/sources.list.d/official-package-repositories.list \
    && apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 437D05B5 \
    && apt-get update

# docker
RUN apt-get -y install docker-ce

# docker-compose
RUN curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose

# give jenkins docker rights
RUN usermod -aG docker jenkins

USER jenkins

And add the volume /var/run/docker.sock:/var/run/docker.sock. I do it via docker-compose.yml. But keep in mind that all Docker containers run directly on the host, not inside the Jenkins container, so you will get some problems with path names if you use further volumes.
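For reference, a minimal docker-compose.yml along these lines might look like the following. The image name and volume names are placeholders, not a tested configuration:

```yaml
version: "3"
services:
  jenkins:
    image: my-jenkins-docker   # built from the Dockerfile above
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # talk to the host daemon
      - jenkins_home:/var/jenkins_home
volumes:
  jenkins_home:
```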


So, really, unless you’re trying to run Docker-in-Docker (it works, but it is likely not the recommended thing to do just for accessing the Docker client), your best bet is to install the Docker CLIENT binaries into your container instead of the entire Docker server stack.

The RUN stanza for the Dockerfile:

ARG DOCKER_CLIENT=“docker-17.06.2-ce.tgz”

RUN cd /tmp/ \
&& curl -sSL -O https://download.docker.com/linux/static/stable/x86_64/${DOCKER_CLIENT} \
&& tar zxf ${DOCKER_CLIENT} \
&& mkdir -p /usr/local/bin \
&& mv ./docker/docker /usr/local/bin \
&& chmod +x /usr/local/bin/docker \
&& rm -rf /tmp/*

All in one step so you don’t bloat your image. This keeps only the docker client binary and drops all of the rest of the cruft from the package. Because it’s being pulled from a static-build tarball, you’re not going to get any missing-library issues, since it’s statically linked. Of further benefit: this works on any OS, not just Ubuntu…meaning if you ever change the base OS your Jenkins image is built from (or have a corporate standard and roll your own, or use the Alpine version instead of the Ubuntu version, or…) you don’t have to change the way you run Jenkins, Docker, or the client utilities.

As a second recommendation, if security is a thing for you, I REALLY recommend using the TCP network API, secured through TLS, and just baking the certificates into your Jenkins installation. You can then set “ENV DOCKER_HOST=tcp://docker_host:2376” in your Dockerfile. Why? The /var/run/docker.sock socket is pretty much root access to your entire Docker cluster: not just the server that socket file is running on, but, if you’re running Swarm, it gives you access to ALL systems on the ENTIRE swarm. If any server is broken into or a container is compromised with the socket available, the attacker has access to your entire swarm, given they’re smart enough to work around Docker’s rather basic scheduler.

With the network API enabled and TLS-secured, the ONLY attack vector for your swarm becomes the Jenkins container itself, because it’s the only thing that has access to the TLS certificates for the network API, and users can’t start another container with that same access (easily, at least), as they can’t start a new container through the host with the certs in it.
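A sketch of what baking the TLS client configuration into the Jenkins image could look like. The host name `docker_host`, the port, and the cert path are hypothetical; the `DOCKER_*` variables are the standard Docker client environment variables:

```dockerfile
# Point the docker client at the TLS-secured network API instead of the socket.
# "docker_host" and the cert path are placeholders -- adjust for your setup.
ENV DOCKER_HOST=tcp://docker_host:2376 \
    DOCKER_TLS_VERIFY=1 \
    DOCKER_CERT_PATH=/var/jenkins_home/.docker

# Bake the client certificates (ca.pem, cert.pem, key.pem) into the image.
COPY certs/ /var/jenkins_home/.docker/
```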

In MOST installations, admins will keep the socket available and turn on the network API as a second access point; however, in a REALLY secure installation you can do the reverse and even keep the root account on the command line from accessing the swarm itself (without reconfiguration/restart of Docker). While we don’t do this often, I have found it to be an excellent way to keep a development team with login access (needed for other reasons) from being able to start/stop containers other than through an approved UI such as Jenkins jobs. YMMV.


Hi!

I tried to insert your snippet into my Dockerfile and it does not work.
The error is: ‘gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now’.

The problem is in the tar extraction.

I started the Dockerfile from jenkins/jenkins:latest.

No clue what utility to add to the chain to make it work.

I’m very interested in the approach of using the docker client inside the inner container that uses docker support from the outside by sharing /var/run/docker.sock.

Thanks for your attention,

Please help! Any hint/idea/tip will be appreciated!

Change this line from
ARG DOCKER_CLIENT=“docker-17.06.2-ce.tgz”
to
ARG DOCKER_CLIENT=docker-17.06.2-ce.tgz

I found the double quotes were an issue.

Thanks @douglasw0 this was very helpful.

Thanks man! It works perfectly!

@douglasw0 Do you mean this setup?
I don’t see any Network API here.

This blog has good info on some of the challenges of running Dockerized Jenkins, in particular when the Jenkins pipeline needs to build or run Docker containers (via the Jenkins Docker plugin).

The blog proposes a solution that places Jenkins + dockerd inside a container and uses Docker-in-Docker to run the inner pipeline steps. It has the benefit of not messing with the host Docker and of isolating Jenkins and its Docker build operations inside a container.

Hope that helps!

I followed the recipe in @ctadello’s blog and it worked perfectly for me! More details here.

Hi all,
I am trying to run docker commands inside a Jenkins container on CentOS using the DooD (Docker out of Docker) method. Below is a snapshot of my Dockerfile.

I used the command below to run my Jenkins container:
docker run -it --name jenkins-docker -p 8081:8080 --network jenkins -v jenkins-data:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkins-image:v2

My Jenkins container is running fine and I am able to access Jenkins from the browser, but I am not able to run docker commands inside the Jenkins container:

bash-4.4$ docker ps
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/json: dial unix /var/run/docker.sock: connect: permission denied

Can someone please help me with this error?

What happens is that the group id of the docker group in your image is different from the group id of the docker group on the Docker host where your Jenkins container is running.
Try substituting your commented line:

RUN groupadd docker

with:
RUN groupadd --gid <gid_of_docker_group_in_your_host> docker

This solution limits the portability of the generated image: once the image is generated, it only works on hosts with the same docker gid.
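One way to keep the image portable is to look up the host’s docker gid at build time and pass it in as a build argument. A sketch follows; the `gid_of` helper and the `DOCKER_GID` build argument are names invented for this example:

```shell
# Print the numeric gid of a named group on this host.
gid_of() {
  getent group "$1" | cut -d: -f3
}

# On the host, build the image with the host's docker gid:
#   docker build --build-arg DOCKER_GID="$(gid_of docker)" -t jenkins-image:v2 .
#
# and in the Dockerfile:
#   ARG DOCKER_GID
#   RUN groupadd --gid "${DOCKER_GID}" docker && usermod -aG docker jenkins

gid_of root   # prints 0 on standard Linux systems
```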

Hope this helps.