I’m new to Docker and containerization concepts; the task at hand requires me to explore options where an NFS share can be mounted directly into a docker run container, BYPASSING the host completely, i.e. the host would not know anything about the NFS share(s) mounted in any container hosted on it.
One option I discovered searching the web is as follows:
Launch the docker container with the ‘--privileged=true’ flag.
Install nfs-utils for *nix-based container images.
Mount the share inside the container using the usual *nix mount command, for instance:
mount -t nfs example.tw:/target/ /srv -o nolock
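Putting the steps together, a minimal sketch of the privileged approach might look like this (alpine is just an example image; the server and export are the same placeholders as above):

docker run -it --privileged=true alpine sh
# then, inside the container:
apk add --no-cache nfs-utils   # provides mount.nfs
mkdir -p /srv
mount -t nfs example.tw:/target/ /srv -o nolock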
A few questions I wanted advice on:
Is there a way to achieve this WITHOUT launching the container in ‘privileged’ mode?
Does the solution work for containers running Windows images?
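You can create a named volume backed by an NFS share with the built-in local volume driver, roughly like this (all values below are placeholders):

docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.1,nfsvers=4 \
  --opt device=:/path/to/export \
  name-of-your-volume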
Of course you need to change the IP in addr to the IP of your NFS server, nfsvers to the NFS version you use, device to the export you want to mount, and the name of your volume.
Then use the volume when starting a container:
docker run -v name-of-your-volume:/path/in/container image:tag
A1: It is available in Docker CE and Docker EE. I would be surprised if it is not available in Docker packages distributed by OS vendors, but then again: how much do I know… I never used any of the OS vendor packages.
A2: The volume will be mounted into the container’s target path when the container is started. The container itself does not have to do anything, or need to know anything, about the remote share.
I would strongly advise against using a containerized NFS server.
When the Docker service starts and your NFS container is not started yet, all the containers depending on your NFS container will fail until the NFS container starts. Is this really what you want?
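To make that failure mode concrete, a sketch (your-nfs-server-image is a placeholder, and the port/option choices are assumptions):

docker run -d --name nfs-server --privileged -p 2049:2049 your-nfs-server-image
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=127.0.0.1,nfsvers=4,nolock \
  --opt device=:/ \
  nfs-from-container
# After a host or daemon restart, this fails until nfs-server is up again,
# because the volume mount cannot succeed:
docker run --rm -v nfs-from-container:/srv alpine ls /srv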
Agreed; this is part of the experimentation. Here are a few goals of the experiments:
Observe that a host running containers consuming a Docker NFS volume may hang if the NFS server disappears, BUT the host itself is not hosed and continues to function (deploying new containers, destroying unrelated containers, etc.).
Once the NFS server recovers, the container using the Docker volume is functional again and is not left hosed.
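A rough way I plan to exercise this (container and service names are hypothetical; assumes a systemd-managed NFS server):

# on the NFS server host:
systemctl stop nfs-server
# on the Docker host: I/O against the NFS-backed volume should hang...
docker exec app-using-nfs-volume ls /srv
# ...while unrelated operations should keep working:
docker run --rm alpine echo "host still functional"
# bring the server back and check that the hung container recovers:
systemctl start nfs-server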
Do you have any experience using NFS Docker volumes? Please share it.
I am not sure what you are asking… some of my volumes mount NFSv4 shares, some mount CIFS shares, and some mount StorageOS volumes.
What can I say: NFSv4 and CIFS work like a charm. Though, what I dislike about CIFS shares is that you have to provide a plain-text username and password. My NFS/CIFS servers run 99.99% of the time. From time to time there are messages in dmesg about a lost CIFS connection, but the containers seem to recover from it:
CIFS VFS: Server 192.168.x.x has not responded in 120 seconds. Reconnecting...
CIFS VFS: Free previous auth_key.response = 0000000047b44839
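For reference, this is what I mean by plain-text credentials; a CIFS-backed volume with the local driver is created roughly like this (all values are placeholders):

docker volume create --driver local \
  --opt type=cifs \
  --opt o=addr=192.168.x.x,username=myuser,password=mypass,vers=3.0 \
  --opt device=//192.168.x.x/share \
  name-of-cifs-volume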
Back in the day, when I used to mount the remote shares into a host folder and used bind mounts with the containers, I had frequent trouble with stale remote shares, especially with NFS. As far as I remember, the containers did not recover from it and had to be restarted.
On a separate host, create a Docker volume mounting the remote share from step #1.
Launch a container using the volume created above.
For a failure scenario where the NFS server was shut down: launching a new container hangs for 2-5 minutes; furthermore, during that time “docker container” commands such as “docker ps” were also hung, as was launching containers that do not use the NFS Docker volume.
Did you or anyone else experience the same issue? Is there a way to avoid the docker daemon hanging when an NFS volume is unavailable?
In my production environments the NFS servers have an availability of close to 100%.
Even in my homelab, my storage servers are never shut down.
That said: I have no idea whether the daemon hangs or not.
Though, I have a volume plugin that is lazy-loaded. In the time gap between starting the Docker daemon and loading the plugin, the Docker engine behaves like it’s frozen, and eventually starts breathing again and deploys all stacks.