Offline Bind Volumes

I am having problems with Docker volumes which reside on my NAS and which are not always available because the NAS is offline. The volumes use the local driver with type=cifs.

When I try to start a container which refers to offline volumes, I get the following (totally expected!) error:

ERROR: for plex Cannot start service plex: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting "/var/lib/docker/volumes/plex_movies/_data" to rootfs at "/movies" caused: stat /var/lib/docker/volumes/plex_movies/_data: host is down: unknown
ERROR: Encountered errors while bringing up the project.

However, once the container is running, it is not a problem to take the volumes offline for a time and bring them back online again later; the container handles the missing files correctly.

Is it possible to start a container even if one or more of the volumes referenced in the docker-compose file are not available online at that moment?

If it is possible to take the volumes offline when the container is running, then it must surely be possible to start the container when the volumes are offline?

I am using the linuxserver/plex image here, but I think the question is more general and applies to any container that refers to a Docker volume which is not always available.

Thanks for any guidance.

You might want to take a look at this thread:
http://forums.docker.com/t/how-to-stop-missing-volumes-prevent-containers-starting/

@meyay Thanks - that thread discusses my very problem. It has a much better title too!

However, I don’t understand the solution that was found - was there one?

In that thread, there was initially some misunderstanding about what sort of volumes were being used. To be clear, I am using named volumes which are created like this:

docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//192.168.1.71/music \
  --opt o='user=xxxx,password=yyyy' \
  plex_music

And I refer to them in my docker-compose file like this:

version: "2.1"
services:
  plex:
    image: ghcr.io/linuxserver/plex:bionic
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
    volumes:
      - /docker/plex:/config
      - plex_music:/music:ro
    restart: unless-stopped
volumes:
  plex_music:
    external: true

This results in the problem described in my original post.

I have also tried using bind mounts in a docker-compose file like this:

version: "2.1"
services:
  plex:
    image: ghcr.io/linuxserver/plex:bionic
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
    volumes:
      - /docker/plex:/config
      - /mnt/music:/music:ro
    restart: unless-stopped

where I have mounted the music share from the NAS into /mnt/music prior to starting the container.
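The host-side mount is something like this (same placeholder share and credentials as above):

sudo mkdir -p /mnt/music
# mount the CIFS share from the NAS onto the host folder
sudo mount -t cifs //192.168.1.71/music /mnt/music -o user=xxxx,password=yyyy,ro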

This method has the same problem: /mnt/music must exist before the container will start.

I suspect the solution would be a docker --mount option which tells Docker to continue even if the mount fails.

Any ideas?

Thanks

My NAS is powered on 24/7 and I use named volumes as well (though mine point to NFS instead). Works like a charm.

Though with named volumes, it is not going to work if the remote shares are not available when the container starts. The only workaround that comes to mind is to mount the remote shares into an existing host folder, let's say /mnt/media/{section}, and bind-mount the parent folder /mnt/media into the container (rough sketch below). I personally would never use it like that… Think about what will happen if cleanup of deleted items is enabled and you do hourly scans… this will end up being a mess.
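A rough sketch of that workaround (the share names, credentials and mount options are examples):

# mount each NAS share under a common parent folder on the host
sudo mkdir -p /mnt/media/music /mnt/media/movies
sudo mount -t cifs //192.168.1.71/music /mnt/media/music -o user=xxxx,password=yyyy
sudo mount -t cifs //192.168.1.71/movies /mnt/media/movies -o user=xxxx,password=yyyy

Then the compose file bind-mounts only the always-present parent folder:

    volumes:
      - /docker/plex:/config
      # rslave propagation should let shares mounted later on the host
      # show up inside the already-running container
      - /mnt/media:/media:ro,rslave

The container starts even when the NAS is offline, because the parent folder itself always exists; the shares underneath it just appear as empty folders until they are mounted.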

@meyay Thanks - I had been thinking about something like that as well - not tried yet.

So I gather that there was no solution to the OP's question in the thread you referenced? He suggested it was solved somehow.

As an alternative approach, I have been thinking about some sort of pre-start hook which would send a WOL packet to the NAS before the container starts. Something like this looks like a good starting point: https://github.com/jizhilong/docker-wait
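A rough sketch of such a hook as a wrapper script (the MAC address and IP are placeholders; it assumes an /etc/fstab entry for the share and the wakeonlan package):

#!/bin/sh
# wake the NAS and wait until it answers pings
wakeonlan AA:BB:CC:DD:EE:FF
until ping -c 1 -W 2 192.168.1.71 >/dev/null 2>&1; do sleep 5; done
# mount the share (uses the /etc/fstab entry) and start the stack
mount /mnt/music
docker-compose up -d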

Thanks

Actually it was solved in the other thread. Though the details of the responses gave me an inconclusive picture, which didn't really allow me to understand what the final piece of the puzzle was… :thinking:

With Kubernetes this could be done very easily: you would just add an init container, which could trigger WOL and wait for the remote share to come up, before it starts the main container (sketch below). Swarm has no init container concept.
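A rough sketch of that idea (the MAC address and IP are placeholders, and it assumes the share is bind-mounted from the host at /mnt/music):

apiVersion: v1
kind: Pod
metadata:
  name: plex
spec:
  hostNetwork: true                  # matches network_mode: host above
  initContainers:
    - name: wake-nas
      image: busybox
      # ether-wake is busybox's WOL applet; wait for the NAS to answer
      # pings before the main container starts
      command:
        - sh
        - -c
        - |
          ether-wake AA:BB:CC:DD:EE:FF
          until ping -c 1 -W 2 192.168.1.71; do sleep 5; done
  containers:
    - name: plex
      image: ghcr.io/linuxserver/plex:bionic
      volumeMounts:
        - name: music
          mountPath: /music
          readOnly: true
  volumes:
    - name: music
      hostPath:
        path: /mnt/music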

@meyay Thanks again.

I have no experience with Kubernetes so would have a big learning curve before I could get this simple problem solved. I might just have to start leaving my NAS on 24/7 like you!

Unless your NAS draws a lot of power, having it running 24/7 is the right way to go imho (and if it does require a lot of power, then it's probably time to change it up to something else: either a newer NAS or a NUC).

The learning curve for Kubernetes (K8s) is not insignificant; however, since you already have Docker experience you should be able to get going pretty quickly. If you use Docker Desktop for Windows, you can turn on K8s support with one checkbox, so there is no need to build a cluster from scratch. Add WSL2 and Windows Terminal and you have everything you need. Taking the K8s journey further, I would suggest taking a look at K3s (maintained by Rancher, and now a CNCF project) and its sister project K3d (this is probably the easiest to start with).
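For instance, with k3d installed, a throwaway local cluster is a single command (the cluster name is just an example):

k3d cluster create plex-lab
kubectl get nodes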

HTHs

Fraser