Docker Community Forums


Docker fails to mount -v volume from NFS-mounted directory

I’ve got my application sources mounted over NFS from a file server in $HOME/app.
When I start a container with

mount nfs-server:/path/to/app $HOME/app
docker run -v $HOME/app:/app myimage

I end up with an empty /app inside the container. When I copy $HOME/app to a local filesystem, I can mount it inside the container just fine. Why is that happening? And is there any workaround other than putting the app directly into the image (that’s inconvenient for development)?

Thanks!

I have never mounted an NFS share into a container the way you did. What I have done instead is perform the NFS mount from within the container itself. For that to work, I had to pass the --privileged=true option to the docker run command. I wonder if that will help.
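A rough sketch of that approach, assuming an Ubuntu-based image and a placeholder server/path (neither is from the thread):

```shell
# Run a privileged container so mount(2) is allowed inside it
docker run -it --privileged=true ubuntu bash

# Then, inside the container (requires the NFS client tools):
apt-get update && apt-get install -y nfs-common
mkdir -p /app
mount -t nfs nfs-server:/path/to/app /app
```

As the next reply notes, this pushes the mounting logic (tools, startup, failure handling) into every container.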

This doesn’t really help, as then I would need to handle the NFS mounting inside the container, i.e. have the NFS tools installed, run the mount when the container starts, handle failures, etc. Much better would be the ability to mount a remote filesystem as a volume from outside of the container.

Even something like docker run -v nfs-server:/path-to-app:/app would be good enough.

My 2 questions are:

  1. Why doesn’t Docker work with a volume mounted from NFS?
  2. How can I work around it (other than mounting the share from inside the container)? Is this something planned for upcoming versions?

Thanks!

I am able to add NFS mounts to docker containers

First, mount the NFS export on the host:

mount 172.27.102.4:/mnt/vol1 /mnt/freenas

then use -v to expose the path (/mnt/freenas) to your container:

docker run -t -i -v /mnt/freenas/:/mnt/somewhere ubuntu

You should now see /mnt/somewhere in your container.

That’s exactly what didn’t work for me. What docker version have you got?

Currently at 1.5.0, but I’ve been doing this for a while.

Maybe it’s a permission issue with the NFS mount. Can you write into the NFS share? Does

touch /mnt/freenas/test

create a file, or does it fail with a permission error?

Does the following work?
docker run -t -i -v /tmp:/mnt/somewhere ubuntu

We think this was fixed in 1.5 and later, because it does work:

○ → mount | grep sarek
sarek:/storage on /mnt/sarek type nfs  (rw,addr=2001:470:b0e2:0:5054:ff:fed3:9e25)
○ → docker run -v /mnt/sarek:/nfs debian:jessie ls -al /nfs/mp3s | tail -3
drwxr-xr-x  3 50000 50002         3 Sep 19  2012 incoming
drwxr-xr-x  2 50000 50002        11 Dec 29  2004 orchestral
drwxr-xr-x  3 50000 50002        20 Sep 19  2012 themes

Docker 1.6.2. It doesn’t work even in privileged mode.

It looks like Docker ignores the NFS mount and creates a new folder. I tried unmounting the NFS folder, but the folder created by Docker remains.

It looks like restarting the Docker service before running a container with a volume mapped to an NFS share does the trick.
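That workaround, sketched as a shell sequence (the server address, paths, and image name are placeholders from the original question, and the restart command assumes a systemd host; older hosts may use `service docker restart`):

```shell
# Mount the NFS export on the host first
mount nfs-server:/path/to/app $HOME/app

# Restart the Docker daemon so it picks up the new mount
systemctl restart docker

# The bind-mounted volume should now show the NFS contents
docker run -v $HOME/app:/app myimage
```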


I’m trying with Docker 1.8.3 and I’m seeing the behavior where volumes exposed from host NFS mounts don’t really work inside the container; in my case it’s the Artifactory container from JFrog. Trying with the latest Ubuntu image, however, succeeds.

So the issue appears to depend on the container setup. (But I’m not sure how, yet.)

@tgoeke it seems I’m hitting the same kind of issue with the Artifactory container using a volume that is an NFS mount on the host. In my case Artifactory takes very long to start up; most of the time it still hasn’t finished starting after several days… Did you find a solution?

Sorry, I haven’t pursued this any further.

I’m currently evaluating Docker, and I might be wrong, but I surmise this depends on your Docker network configuration. Using the default (docker0) bridge implies masquerading, a.k.a. NAT (from the container’s point of view). And NFS abhors NAT.

For my specific problem, the solution was actually to switch from NFSv3 to NFSv4.
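The post doesn’t say how the switch was made; assuming it was done via mount options (server and paths are placeholders), it would look something like:

```shell
# Force an NFSv4 mount instead of letting the client negotiate v3
mount -t nfs -o nfsvers=4 nfs-server:/path/to/app $HOME/app
```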

I’m experiencing the same problem with Docker 1.11.1. Restarting the Docker daemon apparently solves the problem; however, this workaround does NOT work when the mount and the restart are applied automatically in a boot-time cloud-init script (Amazon EC2 instance).

Is this a Docker bug or an expected behaviour? Is there any way to solve this issue?


I believe https://github.com/SvenDowideit/docker-volumes-nfs was an example of writing volume drivers. Does anyone know if there is a generic volume driver for NFS? Or is using -v still the only non-vendor-specific method?

You should always mount shares via /etc/fstab. I guess Docker does not see the mount unless you put it in /etc/fstab (and reboot the machine).

Give it a try!
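For reference, a typical /etc/fstab entry for such a share might look like the commented line below (server, export path, and mount point are placeholders, not from the thread):

```shell
# /etc/fstab — mount the NFS export at boot:
# nfs-server:/path/to/app  /home/user/app  nfs  defaults,_netdev  0  0

# Apply all fstab entries without rebooting:
mount -a
```

The _netdev option tells the init system to wait for the network before attempting the mount.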

Anyone know if there is any progress on this? It is a major annoyance on ECS on AWS.

Hi @pditommaso,

you need to do the mount before the #cloud-init section, e.g. in a #cloud-boothook, as Docker is started before cloud-init runs. See the instance start logs:

Starting docker: …[ OK ]
Starting cloud-init
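A minimal user-data sketch of that ordering (the NFS server and paths are placeholders, not from the thread):

```shell
#cloud-boothook
#!/bin/sh
# Boothooks run early in boot, before the Docker service starts,
# so the daemon sees the NFS mount from the beginning.
mkdir -p /mnt/app
mount -t nfs nfs-server:/path/to/app /mnt/app
```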