I created a volume plugin driver that stores volumes on a remote file system. I then created a standalone volume with the following command:
$ docker volume create --driver=my-driver --name=geo-vol --opt BindDirectory=/work/docker-dev/volumes/geo-vol
geo-vol
And it works:
$ docker volume inspect geo-vol
[
{
"Name": "geo-vol",
"Driver": "my-driver",
"Mountpoint": "/work/docker-dev/volumes/geo-vol"
}
]
$ ls -al /work/docker-dev/volumes/geo-vol/
total 0
drwxrwxrwx 1 vagrant vagrant 68 Feb 3 19:19 .
drwxrwxrwx 1 vagrant vagrant 680 Feb 3 19:19 ..
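For reference, the create path of my driver boils down to something like this (a Python sketch of the /VolumeDriver.Create endpoint from the volume plugin protocol; the in-memory `store` dict stands in for my real remote metadata storage):

```python
import json

# Stand-in for my driver's real (remote) volume metadata storage.
store = {}

def volumedriver_create(body: bytes) -> bytes:
    """Handle a POST to /VolumeDriver.Create from the Docker daemon."""
    req = json.loads(body)  # {"Name": "...", "Opts": {...}}
    name = req["Name"]
    if name in store:
        return json.dumps({"Err": "The volume %s already exists." % name}).encode()
    store[name] = req.get("Opts") or {}
    return json.dumps({"Err": ""}).encode()

# Roughly what the daemon POSTs for my `docker volume create` command:
resp = volumedriver_create(json.dumps({
    "Name": "geo-vol",
    "Opts": {"BindDirectory": "/work/docker-dev/volumes/geo-vol"},
}).encode())
print(resp)  # b'{"Err": ""}'
```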
I am able to mount the volume in multiple containers and read and write data from all the containers. It’s a great way to share data between containers! But there’s a problem…
I reboot the machine, and now Docker has forgotten about the volume and/or which driver it uses.
$ docker volume inspect geo-vol
Error response from daemon: no such volume
At first, I assumed the problem was with my driver. A quick glance at where my driver stores metadata about volumes shows that the storage is intact and the volume metadata exists. So then I look at my driver's HTTP logs, and there's no record that Docker made an HTTP request for the inspect call. So I try to create the volume again:
$ docker volume create --driver=my-driver --name=geo-vol --opt BindDirectory=/work/docker-dev/volumes/geo-vol
Error response from daemon: The volume geo-vol already exists.
That request shows up in my driver's HTTP logs. So the driver is working: it isn't spitting out any errors, it creates volumes properly, the volumes can be mounted and used, and so on. But as soon as the machine is rebooted, Docker forgets that the volume exists, or forgets that the volume belongs to the driver: it no longer sends inspect/mount requests to the driver, and the volume no longer works. None of my other volumes have this problem (I can create a volume without --driver and it survives reboot).
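For context, my understanding of the protocol is that `docker volume inspect` turns into a POST to /VolumeDriver.Get on the driver, and that is exactly the request that never arrives after the reboot. A minimal sketch of my Get handler (the in-memory `volumes` dict stands in for my real metadata, which I verified is intact on disk):

```python
import json

# Stand-in for my driver's (intact) volume metadata.
volumes = {"geo-vol": {"Mountpoint": "/work/docker-dev/volumes/geo-vol"}}

def volumedriver_get(body: bytes) -> bytes:
    """Handle a POST to /VolumeDriver.Get -- what `docker volume inspect` maps to."""
    req = json.loads(body)  # {"Name": "geo-vol"}
    vol = volumes.get(req["Name"])
    if vol is None:
        return json.dumps({"Err": "no such volume"}).encode()
    return json.dumps({
        "Volume": {"Name": req["Name"], "Mountpoint": vol["Mountpoint"]},
        "Err": "",
    }).encode()

# The driver would answer this correctly -- but after the reboot the daemon
# never issues the request, so the "no such volume" error comes from Docker
# itself, not from my driver.
print(volumedriver_get(b'{"Name": "geo-vol"}'))
```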
Help?