Docker and Data Volumes to Shared Multi-Host Storage


I’m trying to come up with a design for an initially small-to-medium infrastructure that uses Docker and shared multi-host storage, but I’m not entirely sure which option would suit us best or be the most feasible…

The idea is to set up a Docker infrastructure with access to shared storage on which all images, data and shared configuration files are stored.

Initially we would have 4 hosts in two different locations/DCs (two hosts in each DC), and maybe more in the future. All 4 hosts, and all containers on them, need access to the same storage (using Data Volumes).

This storage is to store the following:

  • Docker images. The idea is to keep one image on the storage and be able to deploy multiple containers from that same image, avoiding the need to build the image multiple times.
  • Common/static config files. The idea is that for containers running the same app, we keep one set of configuration files on the storage, and all instances read their configuration from there. That way, if we need to change the configuration, we change it once and it affects all instances on all containers.
  • Application data. The idea is that data generated by/within the apps is also stored on the shared storage and is accessible to other containers running the same app.
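As a rough sketch of how the three categories could map onto Data Volumes (all paths and image names here are hypothetical, assuming the shared storage is mounted at /mnt/shared on every host):

```shell
# Shared config mounted read-only, so every container reads the one
# authoritative copy; app data mounted read-write on the shared storage.
docker run -d --name myapp1 \
  -v /mnt/shared/config/myapp:/etc/myapp:ro \
  -v /mnt/shared/data/myapp:/var/lib/myapp \
  myapp:latest
```

Mounting the config read-only also guards against a container accidentally modifying the shared copy.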

We’ve thought of two different ways of doing this:

Using a High Availability (HA) NFS server.

The idea would be to have either A) one NFS server in each DC, making one the primary and replicating it to the other, or B) two NFS servers in each DC, where Server A replicates to Server B in DC1, and this is in turn replicated to DC2.
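For reference, the NFS side of either variant boils down to an export on the primary and a mount on each Docker host; a hypothetical sketch (server names, networks and paths are made up, and the replication between servers would be handled separately, e.g. with DRBD or rsync):

```shell
# On the primary NFS server: export the shared directory (/etc/exports)
#   /srv/docker-shared  10.0.0.0/16(rw,sync,no_root_squash)

# On each Docker host: mount the export
mount -t nfs nfs1.dc1.example:/srv/docker-shared /mnt/shared
```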

Using Ceph

The idea would be to have either A) the Docker hosts set up as described initially, plus a separate set of servers across the DCs running Ceph, with the Docker hosts and containers mounting that location, or B) the Docker hosts configured to also run Ceph on a separate disk/partition.
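In either Ceph variant, the Docker hosts end up mounting CephFS somewhere; a hypothetical sketch using the kernel client (monitor address, client name and secret file are made up, and assume the keyring is already deployed to the host):

```shell
# Mount CephFS on a Docker host via the kernel client
mount -t ceph mon1.example:6789:/ /mnt/shared \
  -o name=dockerhost,secretfile=/etc/ceph/dockerhost.secret
```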

Ceph Option B is the one that seems best to us.

Ceph Option B (Link to image)

The storage would be set up in the following way:

Ceph Option B Storage Design (Link to image)

What do you guys think? Do you believe this is a viable option? Any better ideas? Has anyone tried/done something like this?

Any comment/suggestion will be appreciated!




About Option B using Ceph: I did try something like it. I had two different machines with a common mounted location, say /root/mount, using Ceph, and I configured Docker on each to use /root/mount/docker as the data directory. The same data is then reflected on all the mounting hosts, but I could only start the Docker service on one of them; when I tried to start the Docker service on another host with the same data directory, I got an error that it is locked by another process.

Is that the same scenario you have in mind? Have you tried the same? If you got a different result, please do share your thoughts…

By the way, Option A using Ceph does work fine, since each Docker service runs separately and only the data volumes are commonly mounted.

Thanks and Regards,


Hi @sarukazen,

Thank you for your response. I have tried Ceph Option B. I have a Ceph cluster with CephFS set up on 3 servers (OSDs), and these servers also have Docker installed. I mounted the CephFS on all three servers at “/datafs”.

I then modified the “/etc/docker/daemon.json” file and added the following:

"data-root": "/datafs/dockerbase"

In older versions of Docker “data-root” is actually “graph”.

This makes Docker run out of that directory.
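For what it’s worth, you can confirm the setting took effect after restarting the daemon; a sketch assuming systemd:

```shell
systemctl restart docker
docker info --format '{{ .DockerRootDir }}'   # should print /datafs/dockerbase
```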

Docker starts fine on the first host, but fails on the other two. I suppose it is because of what you described: the folders/files are locked by another process.

I suppose it is not possible to run multiple Docker daemons out of the same files. This is probably to avoid issues such as one instance writing to a log or container while another overwrites it or is unable to write at the same time.
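The failure you both saw can be reproduced outside Docker: the daemon takes an exclusive advisory lock on files in its data root, and a second process asking for the same lock is refused, just like the second daemon. A minimal local sketch in Python (the lock path is made up, and how flock behaves across hosts depends on the shared filesystem; this single-host demo just shows the mechanism):

```python
import fcntl
import os

# Stand-in for a file inside Docker's data root; the real daemon locks
# files under its data directory in a similar (advisory) way.
LOCK_PATH = "/tmp/shared-data-root.lock"

def acquire(path):
    """Try to take an exclusive, non-blocking advisory lock; return the fd or None."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd  # the lock is held as long as this fd stays open
    except BlockingIOError:
        os.close(fd)
        return None

first = acquire(LOCK_PATH)   # succeeds, like the first daemon to start
second = acquire(LOCK_PATH)  # denied: the lock is already held
print("first acquired:", first is not None)
print("second acquired:", second is not None)
```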

Kind Regards,
