/var/run/docker.sock: connect: permission denied (Jenkins slave on ECS cluster)

I’m using the AWS EC2 plugin for Jenkins to spin up Jenkins slaves when tasks are generated. I’m running into permission issues when trying to build Docker images inside the slave’s Docker container. I’ve looked at dozens of other posts, and people frequently give this as the answer (sketched just after the list):

  1. create docker group
  2. add jenkins user to docker group
  3. restart
  4. everything magically works
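
For reference, here is a minimal sketch of what those answers usually amount to on a plain Docker host (this assumes a systemd-based distro and a user named jenkins, and it has to run on the host itself, not inside the container):

$ sudo groupadd docker             # no-op if the group already exists
$ sudo usermod -aG docker jenkins  # add the jenkins user to the docker group
$ sudo systemctl restart docker    # then start a fresh session so the new
                                   # group membership takes effect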

The thing is that I can’t restart, because the Jenkins slave gets spawned by the plugin, and I’m not sure how to restart it properly so that it handles the build correctly afterwards. Also, that would mean running the restart on the host from inside a container, which sounds like a bad idea.

I’ve tried:

1. adding jenkins to the sudo group in the Dockerfile with `RUN adduser jenkins sudo`, followed by `RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers`
2. changing the Docker socket’s owner with `RUN chown root:jenkins /var/run/docker.sock`
3. changing the Docker socket’s permissions with `chmod 777 /var/run/docker.sock` (see the note after this list on why 2 and 3 can’t work as build steps)
4. using `newgrp` so I don't have to restart Docker from outside the container
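
A note on attempts 2 and 3: those commands run at image build time, when no socket is mounted yet, so they cannot affect a socket that is bind-mounted at run time; the host’s ownership and mode always win. A hypothetical session illustrating this (image name and GID are made up):

$ docker build -t my-agent .   # chown/chmod run here, against a path that the
                               # bind mount will later shadow
$ docker run --rm --entrypoint stat \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-agent -c '%u:%g %a' /var/run/docker.sock
0:996 660                      # the host's UID:GID and mode, untouched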

Basically, how do I get around restarting the Docker service while still giving the jenkins user the permissions it needs to build Dockerfiles inside the slave container? Or, if I actually can restart while still using the EC2 plugin, how would I best go about that?

Current Dockerfile:

FROM jenkins/jnlp-slave

USER root

RUN apt-get update && \
    apt-get -y install apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) stable" && \
    apt-get update && \
    apt-get -y install docker-ce && \
    apt-get -y install sudo

VOLUME /var/run/docker.sock

RUN adduser jenkins sudo

RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers

RUN usermod -aG docker jenkins

RUN chmod 777 /var/run/docker.sock

RUN chown root:jenkins /var/run/docker.sock

USER jenkins

Thank you!

Belated response, but it may help.

On a quick look, one problem I see in the Dockerfile above is that it specifies `VOLUME /var/run/docker.sock`, which causes Docker to create a volume on the host and mount it at `/var/run/docker.sock` inside the container. I don’t think that’s what you want. I presume you want the Docker CLI inside the container to talk to the Docker daemon on the host, which means bind-mounting the host’s socket instead. Try something like this:

$ docker run --rm -ti --group-add $(stat -c '%g' /var/run/docker.sock) -v /var/run/docker.sock:/var/run/docker.sock <your-image>
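
The `--group-add $(stat -c '%g' /var/run/docker.sock)` part adds the GID of the host’s socket as a supplementary group for the container user, so the jenkins user can open the bind-mounted socket without any chmod/chown. If you can’t control the `docker run` flags (for instance when the agent is launched by the EC2 plugin or an ECS task definition), one workaround is an entrypoint script that aligns a group with the socket’s GID at start-up. A rough sketch, not a drop-in; the script name, the dockerhost group name, and the `su` hand-off to the image’s jnlp entrypoint are assumptions you would adapt:

#!/bin/sh
# docker-entrypoint.sh (hypothetical): must start as root, and assumes the
# host's socket is bind-mounted at /var/run/docker.sock
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)

# Reuse whatever group already owns that GID, or create one named dockerhost
getent group "$SOCK_GID" >/dev/null || groupadd -g "$SOCK_GID" dockerhost
GROUP_NAME=$(getent group "$SOCK_GID" | cut -d: -f1)
usermod -aG "$GROUP_NAME" jenkins   # give jenkins access to the socket

# Drop privileges and hand off to the image's normal agent entrypoint
# (naive "$*" pass-through; fine for a sketch)
exec su -s /bin/sh -c "exec jenkins-slave $*" jenkins

With this as the image’s ENTRYPOINT (and USER root in the Dockerfile so it can run groupadd/usermod), the container adapts to whatever GID the host’s socket happens to have.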

Also, check this blog article from Nestybox, as it describes a number of permission-related problems when using the Jenkins Docker Agent to build/run Docker containers.

Hope that helps!


Thanks, this solved my problem of accessing the Docker CLI on my host from within my Jenkins container!