I am running an interactive Python Docker container on Ubuntu 14.04 using Docker 17.03.1. I want to share files between the local host and the container, so that files I create in the container are visible in the local directory and vice versa. However, when I run the following command and access the Jupyter notebook, I see an empty working directory in the container with no files, and files I create in the container do not appear in the local directory.
Please see attached for a directory listing. I get the same result without the -e option.
The /home/watts/python directory on the host machine contains some Jupyter notebooks. I want to create new notebooks, and run the existing ones, from the interactive Docker container, working directly against the host directory.
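As a concrete sketch of what I am after (the image name, paths, and port are from my setup; the --user flag is my best guess at passing my host UID through, not a verified recipe):

```shell
# Mount the host notebook directory into the container at the same path,
# run as the host user's numeric UID/GID so new files come out owned by
# the host user, and publish Jupyter's port.
docker run --rm -it \
  --user "$(id -u):$(id -g)" \
  -v /home/watts/python:/home/watts/python \
  -w /home/watts/python \
  -p 8889:8889 \
  watts/python \
  jupyter notebook --ip 0.0.0.0 --port 8889 --no-browser
```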
The process launched in the Docker container runs as some user ID, by default 0 or whatever was declared as USER in the Dockerfile used to build the image.
The numeric user ID is the only thing that matters. The container has its own /etc/passwd file, which maps user names to user IDs, and it doesn’t have to agree with the host’s copy of the file. This leads to things like needing to make your shared Web content directory readable by the cups or mysql user on the host, because that user happens to have the same numeric user ID as www-data in your container.
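The name-to-UID mapping is easy to inspect; for example, on almost any Linux box:

```shell
# /etc/passwd maps user names to numeric IDs; the kernel (and file
# ownership on disk) only ever sees the number.
getent passwd root | cut -d: -f1,3   # prints "root:0"
id -u root                           # prints "0"
```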
The interaction with -v is again tricky, and varies by operating system and installation mechanism. When I last seriously dealt with this a year or two ago, on native Linux numeric user IDs would get passed straight through without trouble (if a host directory was mode 0770 owned by uid 99, then uid 99 in the container had full access), but on Docker Toolbox on a Mac only uid 0 and the uid matching the host system user could access anything on the host at all.
Probably with --user 1000. When you run id inside the container it won’t say “watts” (the passwd files won’t match), but you’ll have the right numeric user ID. Setting -e USER and -e USERID doesn’t do anything useful, potentially confuses software, and I wouldn’t set either one.
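To see that in action (image name from this thread; Docker must be able to reach a daemon for this to run):

```shell
# Run the container with the host user's numeric UID and GID; "id" inside
# the container reports the numbers, usually with no matching user name
# because the container's /etc/passwd has no entry for them.
docker run --rm --user "$(id -u):$(id -g)" watts/python id
```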
Thanks @dmaze. I think I have been able to pinpoint the problem following your suggestion, but I don’t have a solution yet. Here’s what I do:
From the host machine I give the following command:
docker run --user 1000 -v $PWD:$PWD -p 8889:8889 --rm -it watts/python bash
This logs me into a bash shell in a container as root
Then, from inside the container, I give the following commands:
a. useradd -m watts -u 1000
b. chown -R watts /home/watts
c. groupmod -n watts watts
d. echo "watts:'password of watts on host'" | chpasswd
e. adduser watts sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
Now, while I still do not see the host’s directory listing inside the container, something odd happens. If I open another terminal window on the host machine and, without first doing ‘eval $(docker-machine env docker2)’, run the same command (i.e. docker run --user 1000 -v $PWD:$PWD -p 8889:8889 --rm -it watts/python bash), I do see the directory listing from a bash shell in the new container. But the moment I exit, run ‘eval $(docker-machine env docker2)’, and log in again, I am no longer able to see the directory listing. Also, when I can see the directory listing, I am not able to ping anything outside the container; I am guessing this is because there was no TLS verification. Conversely, when I don’t see the directory listing, I can ping an IP outside the container.
This behaviour is quite consistent and can be repeated multiple times without issues.
Hence I think this has something to do with TLS: TLS verification is probably blocking the host’s directory listing. I tried disabling TLS verification, but got the following error when trying to log in: “Are you trying to connect to a TLS-enabled daemon without TLS?”
Does this make it any easier for you to fix this issue?
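One quick way to see which daemon the docker client is pointed at, which is what the ‘eval $(docker-machine env docker2)’ step changes:

```shell
# With DOCKER_HOST unset, the docker CLI talks to the local
# /var/run/docker.sock; after eval $(docker-machine env docker2) it points
# at the VM's daemon instead (and the two see different filesystems).
echo "${DOCKER_HOST:-unset - using the local Docker socket}"
```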
You are doing something wrong. In particular, the last two steps aren’t useful (and are dangerous, to the extent that you’re copying your password around): you never “log in” to a container with a password, and if you have root access on the host (which you do), then you never sudo in a container; you just get a root shell in the container and you’re done.
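To make that concrete (the image name is from this thread; the container name “notebook” is hypothetical; no passwords involved):

```shell
# A root shell in an already-running container, no password or sudo needed:
docker exec -it --user 0 notebook bash

# Or start a fresh throwaway container directly as root:
docker run --rm -it --user 0 watts/python bash
```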
Wait a minute. The screenshot you sent looks a lot like an Ubuntu desktop terminal window. Why are you running Docker Machine at all? It sounds like some bit of wiring is missing in getting a directory from your host system through the intermediate VM to Docker, but you don’t need the intermediate VM at all. Just install Docker directly on your host. That might help some of these issues.
If I was going to prod at something more it’d be the VirtualBox file sharing settings but I don’t actually know what I’d be looking for there.
(From the original post it sounded like “uses the local files” was the important part, more so than “runs the Jupyter notebook UI in a prepackaged way”; it looks pretty easy to install Jupyter in a clean Python virtual environment [which has nothing at all to do with a virtual machine and isn’t really that “virtual”] and that might be an easier path towards your real goals.)
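A sketch of that virtual-environment route (the paths are illustrative, and the pip install step needs network access):

```shell
# Create an isolated Python environment on the host and put Jupyter in it;
# the notebooks are then plain files in the home directory, with no
# mounts, VMs, or UID mapping involved.
python3 -m venv ~/jupyter-env
~/jupyter-env/bin/pip install notebook

# Serve the existing notebook directory directly.
~/jupyter-env/bin/jupyter notebook --notebook-dir /home/watts/python
```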