NFS mount inside docker container bypassing the host

Hi All,

I’m new to Docker and containerization concepts. The task at hand requires me to explore potential options where an NFS share can be mounted directly into a running Docker container, BYPASSING the host completely, i.e. the host would not know anything about the NFS share(s) mounted in any container hosted on it.

One option I discovered while searching the web is as follows:

  1. Launch the Docker container with the '--privileged=true' flag.
  2. Install nfs-utils (or the distribution's equivalent NFS client package) in *nix-based container images.
  3. Mount the share inside the container using the usual *nix mount command, for instance (a fuller sketch follows this list):
    mount -t nfs example.tw:/target/ /srv -o nolock
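
A minimal end-to-end sketch of the above, assuming an Ubuntu-based image (on Debian/Ubuntu the NFS client package is nfs-common rather than nfs-utils); the image name and export path are just the examples from the steps:

# step 1: start a privileged container
docker run -it --privileged ubuntu:latest bash

# step 2: inside the container, install the NFS client tools
apt-get update && apt-get install -y nfs-common

# step 3: mount the export into the container's filesystem
mount -t nfs example.tw:/target/ /srv -o nolock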

A few questions I wanted advice on:

  1. Is there a way to achieve this WITHOUT launching the container in 'privileged' mode?
  2. Does the solution work for containers running Windows images?
  3. Are there better ways to achieve the same?

Thanks.

A1: You can create a Docker volume that mounts the NFS share on container start.

The command should look like this:

docker volume create \
   --driver local \
   --opt type=nfs \
   --opt o=addr=192.168.x.y,nfsvers=4 \
   --opt device=:/exported/share \
   name-of-your-volume

Of course, you need to change the IP in addr to the IP of your NFS server, nfsvers to the NFS version you use, device to the export you want to mount, and the name of your volume.

Then use the volume when starting a container:

docker run -v name-of-your-volume:/path/in/container image:tag
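
To double-check that the options were stored as intended, you can inspect the volume:

docker volume inspect name-of-your-volume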

A2: No idea.
A3: see A1.

Thanks for your input.

A few more questions, if I may, regarding the docker volume:

  1. Is this feature available in all Docker versions, or does one need to run some specific version to get it?
  2. My understanding from reading about docker volumes is that the container OS image doesn't need any dependency modules installed to access the NFS share; is this correct?

Thanks again for your reply and suggestions.

A1: It is available in Docker CE and Docker EE. I would be surprised if it is not available in Docker packages distributed by OS vendors, but then again: how much do I know… I never used any of the OS vendor packages.
A2: The volume will be mounted into the container's target path when the container is started. The container itself does not have to do anything or know anything about the remote share.
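
A quick way to verify this, assuming the volume from the earlier example exists: run a stock image with no NFS packages installed and simply list the mounted path:

docker run --rm -v name-of-your-volume:/data alpine:latest ls /data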

Sounds good. Thanks again for all help!

Taking the above advice, I did the following:

  1. Pulled a Docker image that runs a sample NFS server from Docker Hub.

  2. Ran the container using the below command:

docker run -d --privileged --restart=always -v /tmp:/nfs -e NFS_EXPORT_DIR_1=/nfs -e NFS_EXPORT_DOMAIN_1=* --net=host --name=nfsServer fuzzle/docker-nfs-server:latest

  3. Created a docker volume using the below command:

docker volume create --driver local --opt type=nfs --opt o=addr=<ip_of_nfsServer_container>,nfsvers=3 --opt device=:/nfs nfs_vol

  4. I can see the volume and the nfsServer using the "docker volume ls" and "docker ps" commands respectively.

  5. However, launching a container using the above docker volume fails:

docker run -v nfs_vol:/home/nfs --name=nfsClient ubuntu:latest
docker: Error response from daemon: permission denied.

I tried the path as "nfs_vol:/tmp" and "nfs_vol:/tmp/nfs", but the error remains the same.

The ‘ubuntu’ container state is ‘Created’ and never runs.
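
Since, as I understand it, the local volume driver performs the NFS mount from the host side when the container starts, my next debugging step is to try the same mount manually on the host (a sketch, assuming NFS client tools are installed there; same placeholder IP as above):

mkdir -p /mnt/nfstest
mount -t nfs -o nfsvers=3,nolock <ip_of_nfsServer_container>:/nfs /mnt/nfstest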

Does anyone have a suggestion about what is missing from the above steps?

I would strongly advise against using a containerized NFS server.
When the Docker service starts and your NFS container is not started yet, all the containers depending on your NFS container will fail until the NFS container starts. Is this really what you want?

Agreed; this is part of the experimentation. Here are a few goals of the experiments:

  1. Observe that containers consuming the Docker NFS volume may hang if the NFS server disappears, BUT the host itself is not hosed and continues to function (can deploy new containers, destroy unrelated containers, etc.).

  2. Observe that once the NFS server recovers, the container using the Docker volume becomes functional again and is not permanently hosed; I plan to probe this with the loop sketched below.
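
A hypothetical probe loop for goal #2, run from the host once the client container is up (nfsClient and /home/nfs are from my earlier steps; the timeout keeps the loop itself from blocking on a stale mount):

while true; do
  if timeout 10 docker exec nfsClient ls /home/nfs > /dev/null 2>&1; then
    echo "$(date) share reachable"
  else
    echo "$(date) share unreachable (or exec timed out)"
  fi
  sleep 30
done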

Do you have any experience using NFS Docker volumes? Please share it.

Thanks.

I am not sure about what you asked… some of my volumes mount NFSv4 shares, some mount CIFS shares, and some mount StorageOS volumes.

What can I say: NFSv4 and CIFS work like a charm. Though, what I dislike about CIFS shares is that you have to provide a plain-text username and password. My NFS/CIFS servers run 99.99% of the time. From time to time there are messages in dmesg regarding a lost CIFS connection, but the containers seem to recover from it:

CIFS VFS: Server 192.168.x.x has not responded in 120 seconds. Reconnecting...
CIFS VFS: Free previous auth_key.response = 0000000047b44839

Back in the day, when I used to mount the remote shares into a host folder and used bind mounts with the containers, I had frequent trouble with stale remote shares, especially with NFS. As far as I remember, the containers did not recover from it and had to be restarted.

Thanks for the advice, Metin.

Below are a few experiments I did on the following setup:

  1. Deployed an NFS server exposing a remote share.
  2. On a separate host, created a Docker volume mounting the remote share from #1.
  3. Launched a container using the volume created above.

For a failure scenario where the NFS server was shut down: launching a new container hangs for 2-5 minutes; moreover, during that time other Docker commands were also hung, such as "docker ps" and launching containers that do not use the NFS Docker volume.

Has anyone experienced the same issue? Is there a way to avoid the docker daemon hanging if the NFS volume is unavailable?
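
One mitigation I intend to try (untested; soft, timeo, and retrans are standard NFS client mount options that make operations fail instead of blocking indefinitely) is creating the volume as a soft mount:

docker volume create \
   --driver local \
   --opt type=nfs \
   --opt o=addr=<ip_of_nfs_server>,nfsvers=3,soft,timeo=50,retrans=2 \
   --opt device=:/nfs \
   nfs_vol_soft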

Thanks

In my production environments the NFS servers have an availability of close to 100%.
Even in my homelab my storage servers are never shut down.

That said: I have no idea whether the daemon hangs or not.

Though, I have a volume plugin that is lazy-loaded. In the time gap between starting the Docker daemon and loading the plugin, the Docker engine behaves like it's frozen, then eventually starts breathing and deploys all stacks.