Changing data-root path to a network shared NTFS volume

Hello docker community,

I am facing some issues with Docker.

I would like to save Docker images on a shared network volume when using docker pull <image>, and to be able to run those images from the shared network volume on any host connected to the same network using docker run <image>. The shared network volume is mounted using the CIFS protocol.

So, according to the official Docker documentation, the recommended way to do that is via the daemon.json file. I therefore created /etc/docker/daemon.json and added a data-root entry pointing to /path/to/shared/volume/docker/.
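For reference, the file contained roughly the following (the path is a placeholder for the actual mount point of the share):

  {
    "data-root": "/path/to/shared/volume/docker/"
  }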
For some reason, this did not work for me, so I decided to modify the /usr/lib/systemd/system/docker.service file instead by adding the --data-root option and -s overlay2 (since overlay2 is the recommended storage driver in the Docker documentation), but without success.
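Concretely, the ExecStart line in the unit file ended up looking roughly like this (based on the default unit shipped on my system, with the path being a placeholder):

  [Service]
  ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --data-root /path/to/shared/volume/docker/ -s overlay2

followed by systemctl daemon-reload and systemctl restart docker to apply the change.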

I ran some tests to narrow the problem down; below are the tests and their results.

  1. mount the volume directly to the host (via USB without the network) + overlay2
    SUCCESS
  2. mount the volume from the network using cifs to the host + overlay2
    FAILED
  3. mount the volume from the network using cifs to the host + vfs
    FAILED

The first test was to make sure that we can pull/run images using overlay2 on an external volume.
The second test is the one we need to succeed; however, it failed with the error below (note that the first test succeeded using overlay2, which is weird!):

dockerd[53541]: level=info msg="Starting up"
dockerd[53541]: level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf"
dockerd[53541]: level=info msg="[graphdriver] trying configured driver: overlay2"
dockerd[53541]: level=error msg="failed to mount overlay: invalid argument" storage-driver=overlay2
dockerd[53541]: failed to start daemon: error initializing graphdriver: driver not supported: overlay2
systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE

The third test was to try a storage driver other than overlay2, given the error message above. With vfs I am able to pull images onto the shared network volume successfully, but I am not able to run them. It fails with the error below:

docker: Error response from daemon: symlink /proc/mounts /path/to/shared/volume/docker/vfs/dir/4b01720de058d734cc7769563f0592f970278c961c2570989b51935f546410a3-init/etc/mtab: operation not supported.
See 'docker run --help'.

And according to this link here, they suggest using the daemon.json file, which did not work for me.

Thank you in advance.
Best regards,

You cannot use a remote CIFS or NFS share for the data-root. If your key requirement is to store the data-root on a remote system, you could consider mounting an iSCSI block device for the data-root.

The documentation covers which storage driver works with which backing filesystem: https://docs.docker.com/storage/storagedriver/select-storage-driver/#supported-backing-filesystems
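As a rough sketch of the iSCSI approach with open-iscsi (portal address, IQN, and device name below are placeholders for your own setup):

  # discover and log in to the target on the NAS
  iscsiadm -m discovery -t sendtargets -p nas.example.local
  iscsiadm -m node -T iqn.2024-01.local.example:docker -p nas.example.local --login
  # the target appears as a local block device, e.g. /dev/sdb
  mkfs.ext4 /dev/sdb
  mount /dev/sdb /var/lib/docker

Since the iSCSI device presents itself as a local block device with a local filesystem on top, overlay2 can work on it like on any local disk.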


Hello @meyay ,

Thank you for your reply.

The goal is to store the images when running docker pull <image> from any host connected to the same network, and to be able to run them with docker run <image> as well.

I will have a look at iSCSI and keep you updated.

Many thanks,

No need. It will not help with this scenario.

You need to run a private container registry, so that nodes can build/push images, and other nodes can pull and use those images.
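A minimal sketch of such a setup, with the registry host name and image name as placeholders:

  # on the registry host
  docker run -d -p 5000:5000 --restart unless-stopped --name registry registry:2

  # on a build node: tag and push an image
  docker tag myapp:latest registry.local:5000/myapp:latest
  docker push registry.local:5000/myapp:latest

  # on any other node in the network
  docker pull registry.local:5000/myapp:latest
  docker run registry.local:5000/myapp:latest

For a plain HTTP registry like this, the other nodes either need the address listed under insecure-registries in their daemon.json, or the registry needs to be served over TLS.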


Thank you for your answer.

However, after looking into running a container registry, the images are still pulled and run on the host (because data-root is still /var/lib/docker).

But I would like to pull images and run them from the shared volume, not from the host's internal storage.

Thanks

I have found the solution.

So you need to mount the shared volume with the mfsymlinks option and force the Docker storage driver to vfs.
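For anyone who finds this later, this is roughly what I ended up with (server, share, and credentials are placeholders):

  # mount the CIFS share with the mfsymlinks option
  mount -t cifs //nas.example.local/docker /path/to/shared/volume \
    -o username=user,password=secret,mfsymlinks

  # /etc/docker/daemon.json
  {
    "data-root": "/path/to/shared/volume/docker/",
    "storage-driver": "vfs"
  }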

It might seem like a brilliant idea, but the Docker data root is not meant to be used by more than a single Docker engine. This is asking for trouble, as the folder contains several file-based BoltDB databases, which will eventually become corrupted if more than one Docker engine modifies them at the same time.

Furthermore, vfs is the least efficient storage driver: it wastes a lot of space because every layer is stored as a full copy, and its performance is terrible compared to overlayfs. According to the docs, the vfs storage driver is intended for testing only and is not meant to be used on production systems.


Thanks for taking the time to reply.

I’ve been bashing my head against the wall trying to get this to work and ran into the same problem as OP, so your advice is appreciated.

Some container apps that are used to back up/serve media (Immich, tubearchivist, Damselfly, Jellyfin, etc.) will quickly grow in size with data that is rarely accessed. It would be nice if we could run Docker and its container images on a device that has the compute power to speed up processing tasks (desktop PC, laptop, HEDT) but outsource the storage to a location on the network, like a NAS (Raspberry Pi, Synology, NUC, etc.).

However, if storing Docker data on a network location is not recommended, we would have to have both compute and storage capabilities on one device.

What would be the ideal solution for this use case where we would want to separate compute and storage?

I feel the data-root part is already covered and there is nothing more to say about it.

What's wrong with having the data on a (Docker) named volume backed by a remote share? If you search the forum for "nfs volumes" or "cifs volumes", I am confident that you will find some decent examples of how volumes backed by remote shares can be used.
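As a starting point, a named volume backed by a CIFS share can be declared roughly like this (server, share, and credentials are placeholders):

  docker volume create \
    --driver local \
    --opt type=cifs \
    --opt device=//nas.example.local/media \
    --opt o=addr=nas.example.local,username=user,password=secret,uid=1000,gid=1000 \
    media

  docker run -d -v media:/data jellyfin/jellyfin

This way the engine's data-root (images, container layers, databases) stays on fast local storage, and only the bulk payload data lives on the NAS.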