Docker Community Forums

Share and learn in the Docker community.

Managing file owner and protection through a dockerfile


#1

I’m trying to refine a CentOS-based Dockerfile that will run as a server, and which needs to create a correctly configured .ssh directory for the server account, so that the server both can be connected to and can connect out securely.

I’m sure the right answer is to keep the .ssh directory external somehow, since (a) it is data that may be customized per instance, and (b) that customization wants to persist through container restarts. (There are probably other subdirectories of the home directory that want to be mounts/volumes – error logs, run history, that sort of thing – so they too persist.) But I haven’t gotten that far yet; currently I’m trying to mount the whole user directory as a volume.

My Dockerfile is trying to create the user and the user directory (RUN useradd), to COPY an initial template .ssh directory into the user directory, and to set the protections (RUN chmod) appropriately. I know we have the --chown option on COPY, but I don’t think the complementary --chmod option has been added yet.
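In outline, what I’m describing looks something like this (a sketch only; the account name and paths are placeholders, not my real setup):

```dockerfile
FROM centos:7

# Create the server account and its home directory
RUN useradd -m -s /bin/bash serveracct

# --chown sets ownership at copy time; --chmod doesn't exist (yet),
# so the modes still need a separate RUN
COPY --chown=serveracct:serveracct ssh/ /home/serveracct/.ssh/
RUN chmod 700 /home/serveracct/.ssh && \
    chmod 600 /home/serveracct/.ssh/*

USER serveracct
```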

But apparently I’m getting messed up by filesystem layering during docker build. I can see a chmod or chown taking place, if I do an ls in the same RUN as the chmod … but when I start the container, those changes have not been applied; apparently they went into a temporary filesystem which did not get merged back into the image. (Right?)

Setting up a server’s .ssh configuration directory can’t be an uncommon task for Dockerfiles … but my attempts to websearch for working examples have been finding lots of confusion and very few answers. I’m SURE there is a Best Practice solution…

I hate to have to resort to begging, but if someone could point me to illustrations of How To Do It Right (ideally with discussion of why my attempts have been unsuccessful, so I can better understand why the various “obvious” attempts have been failing), I would greatly appreciate it.

“Rule given to student pilots: If lost, climb and confess.”


(Fsejoseph) #2

Can you post the dockerfile?


#3

Let me see if I can come up with a simplified example. The problem is that I’ve tried MANY “reasonable” approaches, and none of them have yet worked…

I just tried making the .ssh directory itself a VOLUME, as some folks had suggested. The result is that a COPY into it, even with --chown, winds up with the files owned by root … and the .ssh mountpoint is rwxr-xr-x and can’t be changed at runtime.


#4
16:18 $ cat ./mkaddssh
# After building a explorys/jenkins:lts-centos image, use it as a base to 
# build Indexing image on top of that.

service docker start
sudo docker build -f Dockerfile-addssh --rm -t jenkins/jenkins:addssh .

16:19 $ ./mkaddssh
Redirecting to /bin/systemctl start docker.service
[sudo] password for keshlam1: 
Sending build context to Docker daemon  14.36MB
Step 1/4 : FROM jenkins/jenkins:lts
 ---> 5907903170ad
Step 2/4 : USER root
 ---> Running in 1a3a69e51a2c
Removing intermediate container 1a3a69e51a2c
 ---> 3867d5f3e30f
Step 3/4 : COPY ssh/* $JENKINS_HOME/.ssh/
 ---> ccf078e0f829
Step 4/4 : RUN chmod 700 $JENKINS_HOME/.ssh;  ls -ld $JENKINS_HOME/.ssh;  ls -l $JENKINS_HOME/.ssh
 ---> Running in 1c688222b489
drwx------. 2 root root 4096 Dec 13 21:25 /var/jenkins_home/.ssh
total 44
-rw-------. 1 root root  411 Dec 13 21:16 authorized_keys
-rw-------. 1 root root 1679 Dec 13 21:16 indexing-configs-deploy-key
-rw-r--r--. 1 root root  411 Dec 13 21:16 indexing-configs-deploy-key.pub
-rw-------. 1 root root 1675 Dec 13 21:16 indexing-jenkins-agent-key
-rw-r--r--. 1 root root  411 Dec 13 21:16 indexing-jenkins-agent-key.pub
-rw-------. 1 root root 1679 Dec 13 21:16 indexing-jenkins-deploy-key
-rw-r--r--. 1 root root  411 Dec 13 21:16 indexing-jenkins-deploy-key.pub
-rw-------. 1 root root 1675 Dec 13 21:16 indexing-scripts-deploy-key
-rw-r--r--. 1 root root  411 Dec 13 21:16 indexing-scripts-deploy-key.pub
-rw-r--r--. 1 root root  884 Dec 13 21:16 known_hosts
-rw-------. 1 root root 1675 Dec 13 21:16 nokey_id_rsa
Removing intermediate container 1c688222b489
 ---> 581035d5d5c5
Successfully built 581035d5d5c5
Successfully tagged jenkins/jenkins:addssh

### OK, that's what I would have wanted to see.
### But when I run it and look at the result...

16:25 $ cat ./launchAddssh
# Kluge to check sudo pw before forking
sudo echo prechecked password...

# Flush any previous, including making sure the local Jenkins daemon
# in the docker host OS isn't running.
sudo service jenkins stop

sudo service docker stop
sudo service docker start

# --init runs the zombie catcher
sudo docker run --rm -d --init -p 8080:8080 \
 --name Indexing \
 -v jenkins_home:/var/jenkins_home \
 jenkins/jenkins:addssh

echo "(Brief wait to let Jenkins launch...)"
sleep 10
sudo docker ps

16:28 $ ./launchAddssh
prechecked password...
Stopping jenkins (via systemctl):                          [  OK  ]
Redirecting to /bin/systemctl stop docker.service
Redirecting to /bin/systemctl start docker.service
e29cf4c41ac18ea856a905e82903fd84339f1030ffe24721afcd2a5a623e5d73
(Brief wait to let Jenkins launch...)
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                               NAMES
e29cf4c41ac1        jenkins/jenkins:addssh   "/sbin/tini -- /usr/…"   12 seconds ago      Up 10 seconds       0.0.0.0:8080->8080/tcp, 50000/tcp   Indexing

16:29 $ cat ./shOnDocker
#!/bin/bash
# Probably should filter to the docker job name, but...
lastjob=($(sudo docker ps | tail -n1))
cmd=$*
if [ -z "$cmd" ]; then cmd="bash"; fi
sudo docker exec -u jenkins -it $lastjob $cmd

16:29 $ ./shOnDocker 
jenkins@e29cf4c41ac1:/$ cd
jenkins@e29cf4c41ac1:~$ pwd
/var/jenkins_home
jenkins@e29cf4c41ac1:~$ ls -ld .ssh
drwxr-xr-x. 2 jenkins jenkins 4096 Dec 12 21:09 .ssh
jenkins@e29cf4c41ac1:~$ ls -l .ssh
total 44
-rw-r--r--. 1 jenkins jenkins  411 Dec 12 14:22 authorized_keys
-rw-r--r--. 1 jenkins jenkins 1679 Dec 12 14:22 indexing-configs-deploy-key
-rw-r--r--. 1 jenkins jenkins  411 Dec 12 14:22 indexing-configs-deploy-key.pub
-rw-r--r--. 1 jenkins jenkins 1675 Dec 12 14:22 indexing-jenkins-agent-key
-rw-r--r--. 1 jenkins jenkins  411 Dec 12 14:22 indexing-jenkins-agent-key.pub
-rw-r--r--. 1 jenkins jenkins 1679 Dec 12 14:22 indexing-jenkins-deploy-key
-rw-r--r--. 1 jenkins jenkins  411 Dec 12 14:22 indexing-jenkins-deploy-key.pub
-rw-r--r--. 1 jenkins jenkins 1675 Dec 12 14:22 indexing-scripts-deploy-key
-rw-r--r--. 1 jenkins jenkins  411 Dec 12 14:22 indexing-scripts-deploy-key.pub
-rw-r--r--. 1 jenkins jenkins  884 Dec 12 14:22 known_hosts
-rw-r--r--. 1 jenkins jenkins 1675 Dec 12 14:22 nokey_id_rsa
jenkins@e29cf4c41ac1:~$ 

################################################################

And that’s the problem. All the files are rw-r--r--, including those
which were rw------- in the source directory. And the .ssh directory
itself is rwxr-xr-x, even though the source directory was rwx------.

The sshd system is very picky about having private keys and the
user’s .ssh directory protected properly. I don’t actually need all of
these keys in both directions, but our Jenkins jobs do need some (for
connecting out to our git server, and for communications with other
workers).

(BTW: Ideally, I’d actually like to restructure this so the
jenkins_user directory itself is locked down as part of the image,
with only the .ssh, logs, run history, and similar data being
persistently editable; the folks running these Jenkins jobs are not
the ones maintaining them. But that makes for an even more difficult
situation; if I try to set up a Volume just for the .ssh directory,
I find that I can’t alter its access protections at either build or
run time. Again, I would expect that by now someone
else has had a similar requirement of exposing only the .ssh
configuration and there’s a standard solution, but…)

There must be a way to initialize the .ssh directory that doesn’t
involve my manually altering the jenkins_home volume or writing an
ENTRYPOINT script that fixes the protections every time the container
starts. I can’t believe this isn’t a completely solved problem with a
standard answer. But websearch and experimentation, for entirely too
many days, have failed to find that example.

Either Docker is missing something obvious or I am. I’m presuming the
latter… Help?


#5

Oh, did I forget to attach the dockerfile itself? It’s trivial:

################################################################
FROM jenkins/jenkins:lts

USER root

COPY ssh/* $JENKINS_HOME/.ssh/
RUN chmod 700 $JENKINS_HOME/.ssh;\
  ls -ld $JENKINS_HOME/.ssh;\
  ls -l $JENKINS_HOME/.ssh
################################################################
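For what it’s worth, a variant using COPY --chown would at least make the files owned by the jenkins user inside the image layer (sketch; it assumes the jenkins user/group from the base image, and note that --chown still doesn’t set the file modes, so the RUN chmod remains):

```dockerfile
FROM jenkins/jenkins:lts

USER root

# --chown fixes ownership at copy time; modes still need a RUN
COPY --chown=jenkins:jenkins ssh/ $JENKINS_HOME/.ssh/
RUN chmod 700 "$JENKINS_HOME/.ssh" && \
    find "$JENKINS_HOME/.ssh" -type f ! -name '*.pub' -exec chmod 600 {} +

USER jenkins
```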

#7

Oh. I have a guess about what’s going on…

VOLUMEs are initialized from the image only when they do not already exist. I’m guessing that’s what’s causing the problem: either I’m failing to remove the old volume before the first run (so it isn’t getting the properly-protected version copied into it), or the fact that the base image (jenkins/jenkins:lts) declares the VOLUME is keeping the derived image (jenkins/jenkins:addssh) from writing its additional/specialized data into it.

The first is easy to check (remove the stale volume with "sudo docker volume rm jenkins_home" before the next run), and would have an obvious fix.

The latter… the only fix I see would be either to restructure the Dockerfiles or to do more of the setup in the ENTRYPOINT script. Which I may need to consider anyway, if I need to release upgraded Dockerfiles to be applied while retaining previous state.
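If I do go the ENTRYPOINT route, I imagine the wrapper would be something like this sketch (the script name is made up; the jenkins.sh hand-off matches what the stock jenkins/jenkins entrypoint runs, but adjust if yours differs):

```shell
#!/bin/bash
# fix-perms-entrypoint.sh (hypothetical name): tighten .ssh permissions
# on every container start, since a named volume is only seeded from the
# image when the volume is empty.
set -e

fix_ssh_perms() {
    local ssh_dir="$1"
    [ -d "$ssh_dir" ] || return 0
    chmod 700 "$ssh_dir"
    # Private keys must be owner-only; .pub files and known_hosts
    # may stay world-readable
    find "$ssh_dir" -type f ! -name '*.pub' ! -name known_hosts \
        -exec chmod 600 {} +
}

fix_ssh_perms "${JENKINS_HOME:-/var/jenkins_home}/.ssh"

# Hand off to the stock image entrypoint, if present
if [ -x /usr/local/bin/jenkins.sh ]; then
    exec /usr/local/bin/jenkins.sh "$@"
fi
```

This doesn’t fix the image itself, but it makes the permissions self-healing regardless of how the volume was seeded.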