Docker Community Forums


Docker save/load performance

We ship our product exclusively as a collection of Docker images, some of which are rather large. Part of our build process runs docker save on all of the images to produce a tar file, and our product installer then runs docker load on it. Even with a fair bit of tuning to avoid duplicated large layers, this winds up being about 10 GB uncompressed and about 7 GB compressed.
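One thing that sometimes helps on the save side is streaming straight through the compressor instead of writing an uncompressed tar first. A sketch, assuming the image names are placeholders and that pigz (a parallel gzip) is available; plain gzip works in the same pipelines if it isn't:

```shell
# docker save streams a tar to stdout, so it can be piped straight into
# a (parallel) compressor without the uncompressed tar ever touching disk.
# "myapp/base" and "myapp/worker" are hypothetical image names.
docker save myapp/base myapp/worker | pigz -1 > images.tar.gz

# On the install side, decompress on the fly into docker load.
unpigz -c images.tar.gz | docker load
```

pigz -1 trades some compression ratio for much less CPU time, which matters when the same tarball is built once but loaded on every install.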

Both the docker save and docker load steps tend to be really slow. Is there anything that can be readily done to speed them up?

We’ve essentially had this problem forever, but a typical setup runs docker load under Docker 1.8 on CentOS 7, with a 100 GB LVM volume as devicemapper Docker storage (on an AWS gp2 SSD EBS volume). Loading that 7 GB .tar.gz generally takes 20-30 minutes depending on the exact system. While the exact hardware setup can be out of our control, hints on better setups are still extremely useful.

What do iotop and top look like on the system while the docker load or docker save commands are running?

This does seem to be a bit slow:

>>> (7*1024.)/30/60    # ≈ 3.98 MB/s (compressed, read)
>>> (10*1024.)/30/60   # ≈ 5.69 MB/s (uncompressed, written)

In 30 minutes, that averages reading the compressed data at about 4 MB/s and writing the uncompressed data at about 5.7 MB/s. Is your CPU pegged during this operation?
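For what it’s worth, the decompress-and-untar path during a load is largely single-threaded CPU work, and the compression level chosen at build time affects it. A quick sketch of the trade-off using Python’s gzip module on synthetic data (the payload and the resulting sizes/timings are illustrative, not from the poster’s images):

```python
import gzip
import os
import time

# Synthetic ~20 MB payload: half incompressible, half highly compressible,
# loosely mimicking a mix of binary and text layers.
data = os.urandom(10 * 1024 * 1024) + b"\x00" * (10 * 1024 * 1024)

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = gzip.compress(data, compresslevel=level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(compressed) / 1e6:5.1f} MB in {elapsed:.2f} s")
```

Lower levels typically finish much faster for only modestly larger output, which can be a worthwhile trade when the bottleneck is CPU rather than disk.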

Let’s run the experiment: an Amazon m4.2xlarge (8 cores, 32 GB RAM) instance, CentOS 7, Docker 1.9.1, set up with two gp2 EBS disks, a 32 GB root disk and a 100 GB disk dedicated to Docker devicemapper LVM. I ran docker-storage-setup to create the volume.

Meanwhile, I’ve built my docker save image tarball (11.4 GB uncompressed, 6.2 GB gzipped): 218 layers in total, with the largest single layers being 2.7 GB, 1.4 GB, 1.4 GB (again), 1.0 GB, and 0.5 GB.

While docker load is running:

  • 100% CPU is used by docker, with about 40-60 MB/s of write I/O from docker-untar; a kworker occasionally spends 100% of its time in iowait according to iotop; then
  • the system looks mostly idle, with occasional bursts in iotop of iowait for kworker and the docker daemon, but with a load average above 2.0; then
  • a docker-applyLayer process intermittently appears and, when it does, writes at 120-240 MB/s; iotop’s top-line summary shows 20-120 MB/s of writes that aren’t obviously attributable to any process, and top shows a docker process at 0-40% CPU.

A hair under 9 minutes total (which is less than I had initially said). That’s an average of about 20 MB/s of uncompressed data.
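As a sanity check on that figure, from the sizes quoted above:

```python
# 11.4 GB of uncompressed image data loaded in just under 9 minutes.
size_mb = 11.4 * 1024      # tarball size in MB (1 GB = 1024 MB)
seconds = 9 * 60           # total docker load wall time
print(f"{size_mb / seconds:.1f} MB/s")  # → 21.6 MB/s
```

So “a hair under 9 minutes” works out to a bit over 20 MB/s of uncompressed data.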

That doesn’t “feel” bad to me when I look at it that way. Are there tuning things that would improve it?

That’s a lot of layers to deal with. Can you try to optimize your Dockerfile to reduce the number of layers?
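Each RUN, COPY, and ADD instruction in a Dockerfile produces a layer, so chaining shell commands into a single RUN is the usual way to cut the count. A hypothetical before/after sketch (the package names are illustrative, not from the poster’s images):

```dockerfile
# Before: three instructions, three layers; the cleanup step can't
# shrink the layers already committed above it.
# RUN yum install -y gcc
# RUN yum install -y make
# RUN yum clean all

# After: one instruction, one layer, and the cleanup actually reduces
# the layer's size because it runs before the layer is committed.
RUN yum install -y gcc make && \
    yum clean all
```

Fewer layers also means fewer docker-untar/docker-applyLayer passes during a load.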