Embed Docker executables into Docker container

I’ve got the official Jenkins image deployed and running in a Docker container, and I need to embed the Docker executable into that container. I'm running it on my Mac.

The question: how do I correctly embed the Docker executables into a Docker image (i.e. the Jenkins image) so that I can use docker commands inside it? Please help me understand the right way of solving this problem.

Here is what I did:

  1. Ran the Jenkins container with the additional parameter -v /var/run/docker.sock:/var/run/docker.sock. I get “permission denied” when it tries to run docker build.
  2. Then I assigned sudo privileges to the Jenkins user inside the container with usermod -aG sudo jenkins and rebooted, but the error is still there:
root@4328a7e643ea:/# groups jenkins
jenkins : jenkins sudo
  3. Doing chmod 777 /var/run/docker.sock on the host computer and rebooting hasn’t helped either.

Additional info:

The parameters I used to create Jenkins container in Docker:

docker run --name jenkins --restart=on-failure --detach \
  --network jenkins --env DOCKER_HOST=tcp://docker:2376 \
  --env DOCKER_CERT_PATH=/certs/client --env DOCKER_TLS_VERIFY=1 \
  --publish 9000:8080 --publish 60000:50000 \
  --volume jenkins-data:/var/jenkins_home \
  --volume jenkins-docker-certs:/certs/client:ro \
  --volume /usr/local/bin/docker:/usr/bin/docker \
  --volume /var/run/docker.sock:/var/run/docker.sock \

After wasting a whole day on reading and trying, I’ve finally given up. Please help me understand how to overcome this issue.

sudo will not help unless the Docker socket is assigned to the sudo group, which is not likely. The Docker unix socket is either owned by the root user and group or accessible by the “docker” group. Groups in the container and on the host can be different, so the ID of a group is more important than the name of a group.
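To illustrate the point about numeric group IDs, here is a rough sketch (it assumes a Linux host; on macOS, Docker Desktop keeps the socket inside its VM, so host-side permissions behave differently):

```shell
# Read the numeric GID that owns the Docker socket (GNU stat syntax;
# on BSD/macOS it would be: stat -f '%g' /var/run/docker.sock)
sock=/var/run/docker.sock
if [ -S "$sock" ]; then
    gid=$(stat -c '%g' "$sock")
    echo "socket group id: $gid"
    # A container user could then be given that GID as a supplementary group:
    # docker run --group-add "$gid" ... jenkins/jenkins:jdk17
else
    echo "no docker socket found at $sock"
fi
```

The point is that `--group-add` takes the numeric ID, so it works even when no group with that ID exists inside the container.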

You used this parameter:

--env DOCKER_HOST=tcp://docker:2376

Why? It means Jenkins should access the TCP socket of Docker, using “docker” as the hostname.

When you run the Docker client in the container as root, that root user can usually access the mounted Docker socket. Other users can’t. What I do in this case is mount the Docker socket into another container which runs as root, and in that container I use “socat” to forward TCP requests to the unix socket of Docker. That way the Jenkins container needs only network access to the specific IP and port. Since socat is running in a container, it will not publish the Docker socket to the outside world, but other containers on the same Docker network would be able to access it, which could still be a security threat. In that case you can share the network namespace between the socat container and the Jenkins container.

If you have Docker Compose v2 (recommended), this is how you can try it. Create a file called “compose.yml” with this content:

    services:
      socat:
        image: alpine/socat
        command: tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock
        user: root
        volumes:
          - type: bind
            source: /var/run/docker.sock
            target: /var/run/docker.sock

      docker:
        image: docker:23-cli
        depends_on:
          - socat
        environment:
          DOCKER_HOST: localhost:2375
        network_mode: service:socat

        # Below parameters are only for keeping the container alive for testing
        command:
          - sleep
          - inf
        init: true
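If you prefer plain docker run instead of Compose, the socat side could be started roughly like this (a sketch; the guard only keeps the snippet harmless on a machine without the Docker CLI or a running daemon):

```shell
# Forward TCP port 2375 to the mounted Docker unix socket.
# Runs as root so the socket is accessible inside the container.
if command -v docker >/dev/null 2>&1; then
    docker run --detach --name socat --user root \
        --volume /var/run/docker.sock:/var/run/docker.sock \
        alpine/socat \
        tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock \
        || echo "docker run failed (is the daemon running?)"
else
    echo "docker CLI not available here"
fi
```

Note that without Compose you would also have to recreate the shared network namespace yourself, by starting the client container with --network container:socat.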

Start the containers:

docker compose up -d

Enter the container:

docker compose exec docker sh

Run docker commands, for example:

docker info

Since I shared the network namespace between the two containers, localhost is the same for both. That wouldn’t be the case without the common network namespace, but it also means the Docker TCP socket is available only on localhost inside the containers, so Jenkins can access it but other containers can’t.

Hi, Rimelek. Thank you for the super clear explanation.
I blindly used --env DOCKER_HOST=tcp://docker:2376 because that configuration was recommended in a particular Jenkins Docker setup tutorial.

The main question for now: do I need to set up Docker in the Jenkins server to be able to run the docker build command?
Let’s say I have only the Jenkins master server to run jobs on:

docker run \
  --name jenkins \
  --detach \
  --network jenkins \
  --publish 9000:8080 \
  --publish 60000:50000 \
  --volume jenkins-data:/var/jenkins_home \
  --volume /var/run/docker.sock:/var/run/docker.sock \

If I connect to it right after bootstrapping:

  1. I am unable to run any docker command: it says “unknown command docker”. That looks right, because the default Jenkins Docker image doesn’t have the Docker binaries installed.
  2. But when I run any Jenkins job using the Docker plugin, I get “permission denied” instead of “unknown command docker” when it tries to run docker build. I can’t tell whether that’s because docker.sock has come into the game, or whether that’s just how the CloudBees Docker Build and Publish plugin, which I’m using, works.

After I embedded the Docker installation into the Jenkins image following Docker’s official guide, I got things working:

FROM jenkins/jenkins:jdk17
USER root
... <other stuff>
RUN apt-get update && \
    apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

So I’m able to issue Docker commands both by ssh-ing directly into the container and from a Jenkins pipeline. And I’m able to see my host server’s images by issuing docker images, so I guess docker.sock works now.
But I still need to authenticate inside the Jenkins container to publish images to my private repo, despite having already logged in from my host computer.

The docker client is just a command line interface to the Docker Daemon API. Jenkins can use the API directly using the Docker socket if it has access to it.

You also installed the Docker daemon and containerd, which you shouldn’t have. If you want the Docker client in the container, install only docker-ce-cli, nothing more. You could also install docker-compose-plugin, which is a plugin for the CLI, but I don’t think you need that.
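A slimmer variant of that Dockerfile might look like this (a sketch; it assumes Docker’s apt repository was already added in the omitted steps):

```dockerfile
FROM jenkins/jenkins:jdk17
USER root
# ... <other stuff, including adding Docker's apt repository> ...
# Install only the client; the daemon on the host does the actual work
RUN apt-get update && \
    apt-get install -y --no-install-recommends docker-ce-cli && \
    rm -rf /var/lib/apt/lists/*
# Note: switching back to "USER jenkins" here would reintroduce the
# socket permission problem unless the group IDs are aligned.
```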

Even though the daemon was not necessary in the image, it won’t run on its own, so I am not sure why you could connect to the Docker socket, unless you kept the USER in the container as root:

I guess the Jenkins container uses another user by default for security reasons. Keeping the user as root is what helped you, I think, not installing the docker client.

Authentication happens on the client side using your credential store. You can create a service account token and use that in the Jenkins CI pipeline, which is better than exposing your original password to the container. Even when your images are on Docker Hub, you need to authenticate in the CI pipeline. I am sure Jenkins has a module for that too.
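A hedged sketch of such a token-based login in a pipeline step (the registry host, user name, and REGISTRY_TOKEN variable are placeholders, not from this thread):

```shell
# Hypothetical: REGISTRY_TOKEN would come from the Jenkins credentials store.
# --password-stdin keeps the token out of the process list and shell history.
if [ -n "${REGISTRY_TOKEN:-}" ] && command -v docker >/dev/null 2>&1; then
    printf '%s' "$REGISTRY_TOKEN" |
        docker login registry.example.com --username ci-bot --password-stdin
else
    echo "REGISTRY_TOKEN not set or docker CLI missing; skipping login"
fi
```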

I did some additional research and found a nice post where almost all possible solutions were mentioned.

Generally there are two options to actually run a Docker build job on Jenkins:

  1. Embed Docker’s binaries into the Jenkins image. It can be just the docker-ce-cli package.
  2. Route Docker’s binaries from the host into the container VM:

docker run \
  --name jenkins \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --volume $(which docker):/usr/bin/docker \

In that command, the first --volume option routes docker.sock into the container and the second exposes the Docker binary from the host to the container.

Unfortunately, the second option didn’t work for me under any circumstances: I tried running the container in privileged mode, under the root user, adding the Jenkins user to the root group (for a home PC it’s okay), chmod 777 /var/run/docker.sock, usermod -aG docker jenkins. No way.
The last thing I gave a shot:

docker run -u root \
  --privileged \
  --name jenkins \
  --detach \
  --network jenkins \
  --publish 9000:8080 \
  --publish 60000:50000 \
  --volume jenkins-data:/var/jenkins_home \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --volume $(which docker):/usr/bin/docker \

So finally I decided to give up and just embed the Docker executables (docker-ce-cli) into my Jenkins image together with the --volume /var/run/docker.sock:/var/run/docker.sock option, which successfully exposes all of my host machine’s images to the Jenkins container.
I guess it’s more of a macOS-related issue, because I’ve read thousands of messages from Linux users confirming that routing the binaries (the 2nd way) works fine for them.