I’m using the AWS EC2 plugin for Jenkins to spawn Jenkins slaves when build tasks are queued. I’m running into permission issues when trying to run Docker builds inside the Docker container. I’ve looked at dozens of other posts, and people frequently give this as the answer (roughly the commands sketched below):
- create docker group
- add jenkins user to docker group
- restart
- everything magically works
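For reference, that advice boils down to something like this, run on the Docker host (a sketch, assuming a systemd-based Linux host):

```
# the commonly suggested fix, run on the Docker host
sudo groupadd docker              # create the docker group (if it doesn't already exist)
sudo usermod -aG docker jenkins   # add the jenkins user to it
sudo systemctl restart docker     # "restart" (the service, a re-login, or a reboot) so membership takes effect
```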
The thing is that I can’t restart, because the Jenkins slave gets spawned by the plugin, and I’m not sure how to restart it properly so that it handles the build correctly afterwards. It would also mean running the restart on the host from inside a container, which sounds like a bad idea.
I’ve tried:
1. adding jenkins to the sudo users in the Dockerfile: `RUN adduser jenkins sudo` followed by `RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers`
2. changing the docker socket's owner: `RUN chown root:jenkins /var/run/docker.sock`
3. changing the docker socket's permissions: `chmod 777 /var/run/docker.sock`
4. using `newgrp` so I don't have to restart Docker from outside the container (see the sketch after this list)
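For (4) specifically, what I ran inside the container was along these lines (a rough sketch):

```
# attempt (4): pick up the docker group without restarting anything
newgrp docker   # starts a new shell whose primary group is "docker"
docker info     # only helps in that subshell; processes Jenkins spawns don't inherit it
```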
Basically, how do I get around restarting the Docker service while still giving the jenkins user the permissions it needs to build Dockerfiles inside the slave container? Or, if I can actually restart while still using the EC2 plugin, how would I best go about that?
Current Dockerfile:
```
FROM jenkins/jnlp-slave

USER root

RUN apt-get update && \
    apt-get -y install apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable" && \
    apt-get update && \
    apt-get -y install docker-ce && \
    apt-get -y install sudo

VOLUME /var/run/docker.sock

RUN adduser jenkins sudo
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN usermod -aG docker jenkins
RUN chmod 777 /var/run/docker.sock
RUN chown root:jenkins /var/run/docker.sock

USER jenkins
```
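For context, the intent is to reuse the host's Docker daemon by bind-mounting its socket into the slave container. A hypothetical manual equivalent of how the container gets started (the real invocation comes from the EC2 plugin's configuration, and the image name here is made up):

```
# hypothetical run command; in practice the EC2 plugin / AMI setup launches the agent
docker run -d \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-jnlp-slave-image
```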
Thank you!