Docker on scalable storage (NFS, Ceph, Gluster)

Hi everyone.

My problem:
I have a small swarm cluster (6 nodes), each with 120 GB of disk space. Users on my network can build new images, tag them, and store them in my Docker registry. Besides this, they can create containers.

Recently I have run into problems with node disk space: each swarm node's disk is full, and I have to manually remove old containers and images.

One solution is to buy bigger disks, but that is only a temporary fix.

I’m looking for a solution where all “/var/lib/docker” folders are placed on some “infinite” (easily scalable and fast) share, so that I don’t have to think about or check whether there is enough disk space to run a container.

Whenever this problem happens, it should be enough to add new disks or a storage node.

Right now I’m experimenting with NFS (Synology), where each node has its own separate NFS folder, but I have realized that AUFS and overlay do not support NFS as a backing filesystem. What other storage options work with NFS?
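For anyone experimenting along these lines, a minimal daemon configuration sketch is below. It assumes the NFS export is already mounted at /mnt/nfs/docker (a placeholder path) and uses the vfs storage driver, which does work on NFS but copies layers in full, so it is slow and space-hungry and really only suitable for testing. Note that newer daemons use the "data-root" key, while older releases (including 1.13) called it "graph":

```json
{
  "data-root": "/mnt/nfs/docker",
  "storage-driver": "vfs"
}
```

This goes in /etc/docker/daemon.json, followed by a daemon restart; check `docker info` afterwards to confirm the storage driver and root directory actually changed.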

Has anyone tried other solutions such as Ceph, GlusterFS, or similar? Are there any production-ready architectures where each Docker host stores its data on scalable storage? I would be very grateful for any solutions based on your experience.

Thanks in advance.


This doesn’t answer the original question, but if you’re using 1.13.0, have you looked into using docker system prune -f in a cron job?


Maybe it could help - I haven’t used it in production yet, but it’s the best solution I’ve found so far.
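For reference, a crontab entry along these lines is what I mean (the schedule, log path, and binary path are just examples; check `docker system prune --help` on your version, since exactly what gets removed has changed between releases):

```shell
# m h dom mon dow  command
# Every night at 03:00, remove stopped containers, dangling images,
# and other unused data without prompting (-f skips the confirmation).
0 3 * * * /usr/bin/docker system prune -f >> /var/log/docker-prune.log 2>&1
```

Run it as root or as a user in the docker group, and keep an eye on the log the first few nights to make sure nothing you care about is being pruned.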

I do, and this trick has saved me a lot of headaches!

Going back to the main question, I have a similar challenge. I wrote a few user stories to help me understand it.

EPIC: Container as an external hard drive:

As a DevOps hero, I want to:
launch a new server,
run a container (or a service) as an external hard drive,
have my applications consume this data like it always existed.
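One concrete pattern for the stories above (sketched with a made-up NAS address and export path) is a named volume backed by NFS via Docker’s built-in local driver, so containers consume the shared data as if it had always been there:

```shell
# Create a named volume backed by an NFS export.
# 192.168.1.100 and /exports/appdata are placeholders for your NAS.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/exports/appdata \
  appdata

# Any container on this node can now mount the shared data:
docker run --rm -v appdata:/data alpine ls /data
```

The volume is mounted lazily on first use, so the NFS server has to be reachable when the container starts, and on a swarm each node resolves the volume independently.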

Read the user stories here -