I’m running CoreOS in several VMs. Within the CoreOS instances I’m mounting NFS shares from a local NAS, which are then passed on to the containers (NFS share sub-dirs as volumes).
After some traffic on the NAS, the NFS handles turn stale within the containers, but in the underlying CoreOS the mounts are still intact and the data is still accessible from the OS. As a consequence I have to restart the containers, since they cannot access any data on the volumes anymore. After the restart the volumes are back to normal until the next occurrence.
Setup & Configuration:

Server:
- VirtualBox on Windows 10 Pro
- several CoreOS 2247.7.0 (Rhyolite) VMs (VirtualBox)
  - kernel: 4.19.84
  - rkt: 1.30.0
  - docker: 18.06.3

NAS:
- UNRAID OS, providing NFS v3 (no v4, unfortunately)
- fuse_remember set to 600 (setting it to -1 or 0 didn't help)

Mount in CoreOS (configured in JSON):
  mount -t nfs <NAS IP>:<path in NAS> /mnt/nas

Container YAML:
  volumes:
    - /mnt/nas/<path>:<path in container>
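For reference, the host-side mount and the container volume can be sketched as follows. The IP address, export path, container name, and image are hypothetical placeholders (the real values are elided above), and the docker run form stands in for the YAML volumes entry:

```shell
# Hypothetical values: 192.168.1.50 and the paths below are placeholders
# for the real NAS IP and export used in this setup.
# Mount the NFS v3 export on the CoreOS host:
sudo mount -t nfs -o vers=3 192.168.1.50:/mnt/user/media /mnt/nas

# Hand a sub-directory of the share to a container as a volume
# (equivalent to the "volumes:" entry in the container YAML):
docker run -d --name app -v /mnt/nas/media:/data my-image
```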
The stale handles appear after some traffic on the NAS, on the order of a few GB, sometimes even less than a GB. Sometimes it happens after just a few minutes, sometimes after a day or two. This behaviour is very inconvenient and I haven't found a solution to it.
I know that the NFS client has to refresh the handle every now and again. By the looks of it, the CoreOS NFS client does this correctly, but somehow the volumes in the containers don’t get updated properly.
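The mismatch can be demonstrated by checking the same data on the host and inside a running container (container name and paths here are hypothetical, matching the sketch of the setup above):

```shell
# On the CoreOS host the mount still works:
ls /mnt/nas/media          # lists files normally

# Inside the container the same data is inaccessible until a restart:
docker exec app ls /data   # fails with "Stale file handle" (ESTALE)
```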
What can I do to ensure that the containers don't lose their access to the volumes? Does anybody have an idea?