The last time I looked into what the best storage setup for Docker was, the recommendation was devicemapper on a dedicated LVM partition; everything else was described as “immature” or as having “inconsistent support”. Is this still the right setup, or is one of the other core storage drivers better these days?
We run a very “ground-up” Docker setup on AWS. When we need a system to do something, we’ll usually do it by provisioning a clean EC2 instance with Ubuntu 16.04, partitioning disks, installing Docker, pulling our software stack, and baking it into an AMI, then launching actual systems from that AMI. We also have an ECS setup that’s using Amazon’s standard ECS AMI, but again with a custom boot-time partitioning setup. I think at this point we reliably have Docker 1.9 everywhere (and we might have Docker 1.11 if we rebuild the ECS system).
We hit a couple of “gremlins” in this setup every so often. The ECS system, which runs our CI tooling and so is frequently creating and destroying containers and images, routinely runs into corruption-ish issues…not totally unlike the one in this forum post. The AMI-based systems, when they reboot with maybe 10-20 GB of local images, regularly hang at a provisioning-time `docker images` step. (Destroying and recreating the instance is the easiest recovery for us; manually logging into the system and restarting the daemon is more steps [yes, 5 steps vs. about 2], but works more reliably.)
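For context, the manual recovery is roughly this (a sketch; the instance address is a placeholder, and the `service` invocation assumes the Ubuntu 16.04 init setup we use):

```shell
# Hypothetical recovery runbook when `docker images` hangs at provision time:
ssh ubuntu@<instance-address>   # log in to the affected instance
sudo service docker restart     # bounce the daemon
docker images                   # verify the image list responds again
```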
Both of these “smell” like storage backend issues. I feel like when this topic has come up on the forum before, people have said “devicemapper is bad”; is devicemapper/LVM still “bad”, and is something else better and recommended on a current system? (aufs? overlay?)
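In case it helps anyone answering: this is how I’d check what driver a daemon is actually using, and how I understand a driver gets pinned explicitly (a sketch; I’m assuming `/etc/docker/daemon.json` is the right config path on our Docker versions, and I know switching drivers hides existing images/containers until you switch back):

```shell
# Show which storage driver the running daemon selected
docker info | grep 'Storage Driver'

# Pin a driver explicitly in /etc/docker/daemon.json, then restart the daemon.
# Contents of the file would be something like:
#   {
#     "storage-driver": "overlay"
#   }
```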