We ship our product exclusively as a collection of Docker images, some of which are rather large. Part of our build process runs `docker save` on all of the images to produce a tar file, and our product installer then runs `docker load` on it. Even with a fair bit of tuning to avoid duplicating large layers, this winds up being about 10 GB uncompressed and about 7 GB compressed.
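For reference, the save side is roughly the following (image names here are made up; the real list is longer):

```sh
# Save all product images into a single tar (shared layers are stored once),
# then compress the archive for distribution.
docker save \
    myapp/frontend:1.4.2 \
    myapp/backend:1.4.2 \
    myapp/worker:1.4.2 \
  | gzip > product-images.tar.gz
```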
The `docker save` and `docker load` steps tend to be really slow. Is there anything that can readily be done to speed them up?
We’ve essentially had this problem forever, but a typical setup runs `docker load` under Docker 1.8 on CentOS 7, with a 100 GB LVM volume used as devicemapper Docker storage (on an AWS gp2 SSD EBS volume). Loading that 7 GB .tar.gz generally takes 20-30 minutes depending on the exact system. The exact hardware setup is often out of our control, but hints on better setups would still be extremely useful.
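The installer's load step boils down to something like this (simplified; the filename matches the sketch above):

```sh
# Decompress the archive and stream it into the local Docker daemon.
# This is the step that takes 20-30 minutes on our target systems.
time gunzip -c product-images.tar.gz | docker load
```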