Hi everyone.
My problem:
I have a small swarm cluster (6 nodes), each node with 120 GB of disk space. Users on my network can build new images, tag them, and push them to my Docker registry. Besides this, they can create containers.
Recently I have run into disk space problems: each swarm node's disk fills up, and I have to manually remove old containers and images.
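Right now the cleanup is something like the sketch below (assuming Docker 17.06+ for the "until" filter; the 7-day retention is just a value I picked):

```
# Remove stopped containers, unused networks, and all unreferenced images
# older than 7 days ("until" filters need Docker 17.06 or newer).
docker system prune --all --force --filter "until=168h"
```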
One solution is to buy bigger disks, but that is only a temporary fix.
I’m looking for a solution where all the “/var/lib/docker” folders are placed on some “infinite” (easily scalable and fast) share, so that I don’t have to think about or check whether there is enough disk space to run a container.
Whenever space runs low, it would be enough to add new disks or another storage node.
Right now I’m experimenting with NFS (Synology), where each node has its own separate NFS folder, but I have realized that AUFS and overlay do not support NFS as a backing filesystem. Which other storage drivers work on top of NFS?
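For reference, this is roughly the per-node setup I’m testing (the hostname and export path are placeholders for my Synology; I switched to the vfs driver only because it doesn’t care about the backing filesystem, but it copies whole layers and is slow):

```
# Stop Docker before replacing its data directory.
sudo systemctl stop docker

# Each node mounts its OWN export -- daemons must not share /var/lib/docker.
echo 'synology-nas:/volume1/docker-node1 /var/lib/docker nfs defaults,nofail 0 0' \
  | sudo tee -a /etc/fstab
sudo mount /var/lib/docker

# aufs/overlay refuse NFS, so fall back to the (slow but universal) vfs driver.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "vfs"
}
EOF

sudo systemctl start docker
```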
Has anyone tried other solutions such as Ceph, GlusterFS, or similar? Are there any production-ready architectures where each Docker host stores its data on scalable storage? I would be very grateful for any solutions based on your experience.
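To make the question concrete, this is the kind of setup I imagine with, for example, GlusterFS (an untested sketch; the server and volume names are made up, and I suspect the same storage-driver caveat applies, since a Gluster mount is FUSE-based):

```
# Untested sketch: mount a shared GlusterFS volume, then point this node's
# daemon at its own subdirectory (two daemons must never share a data root).
sudo mount -t glusterfs gluster1:/docker-vol /mnt/gluster
sudo mkdir -p /mnt/gluster/node1
dockerd --data-root /mnt/gluster/node1   # --data-root was -g/--graph before 17.05
```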
Thanks in advance.