Best Practice: Managing Docker Images

Hi gang,

We have an in-house Docker build machine which builds around 20 Docker images per day. Each image gets 3 different tags and is pushed to 2 different registries. The problem is that when you run “docker images” on this build machine, you get tons of images listed. Also, if you run “df -i”, you will find those images eat up a lot of disk resources. What is the best way to manage Docker images locally? I cannot simply delete the old ones, as we have to keep at least 3 months of builds…

One thing we do is archive our image builds using “docker save” to a tar file. We save each image as a tar,
along with the Dockerfile and other dependencies. This allows you to remove the images locally
from your build machine using “docker rmi” once you move on to newer versions.

You can always recreate the older images from the Dockerfile and dependent files, OR simply “load” the
image from the tar file.
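
For reference, here is what the save/remove/restore cycle looks like on our side (the image name and archive path below are just placeholders):

    # Archive the image alongside its Dockerfile and build context
    docker save -o /archive/myapp-1.2.3.tar myapp:1.2.3

    # Reclaim space on the build machine once newer builds are in use
    docker rmi myapp:1.2.3

    # Later, restore the exact same image from the archive
    docker load -i /archive/myapp-1.2.3.tar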

Thanks!
I was thinking about this approach, but the downside is that we have a very
big base image (almost 1GB) and our code is fairly small. Given that we are
building an image per check-in, we will end up with a lot of huge tar
files, even though two tar files may differ by only a few lines of code.
But if we don’t have any other solutions, we will definitely go with this
approach.
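
One mitigation we might try (assuming the builds really do share the same base image): as far as I know, “docker save” accepts multiple image references and writes shared layers into the tar only once, so bundling a batch of builds into a single archive should avoid duplicating the ~1GB base each time. A rough sketch with placeholder names:

    # Layers shared by these images (e.g. the common base) are stored once in the archive
    docker save -o /archive/myapp-week42.tar \
        myapp:build-101 myapp:build-102 myapp:build-103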

Yes, another common complaint about Docker images … the size.

One trick we use is to put shared modules, e.g. the JRE or client libraries, into a separate
image, exposing the files as a VOLUME in the Dockerfile.
You create a container from this image but don’t “run” it.

Then other containers can simply reference the JRE (and anything else) as a "code block"
by using the --volumes-from docker option … this can dramatically reduce the size of your images,
making your tar files smaller, and also reducing the time it takes to update images on target systems.
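
As a rough sketch of the pattern (the image and container names here are just placeholders, and jre/ is assumed to be a local directory holding the runtime you want to share):

    # Dockerfile for the shared JRE image
    FROM busybox
    COPY jre/ /opt/jre/
    VOLUME /opt/jre

    # Build the shared image and create (but don't run) a data container from it
    docker build -t shared-jre .
    docker create --name jre-data shared-jre /bin/true

    # Application containers mount /opt/jre from the data container
    docker run --volumes-from jre-data myapp:latest

The application image then only needs to know the path (/opt/jre in this sketch), so the big runtime never has to be baked into every build.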