
Compress multiple container images with the same base image

We build multiple container images: image1.gz, image2.gz, … imageN.gz, and we would like to deliver all of them over the network to many machines. However, as the number of images increases (N > 10), the overall size becomes a burden on the network. Is it possible to compress multiple container images into a smaller overall binary (or multiple binaries), so that the target machine can extract it and get back all the images?

All of our images are built from the same base image. When Docker loads the images, I believe it takes advantage of OverlayFS to reduce the overall disk space. Is there something similar we can do to “compress” the *.gz images to reduce the overall size over the network? Thanks.

Good morning,
why transfer the images as gz-files to your destinations? Let the destinations pull the images themselves - and only the needed layers of those images.
To have our images in a central place and have the Docker servers pull only the needed image layers, we have set up our own (company-internal) registry.
So I can create images on my dev machine, test them, tag them (with :latest and with the current timestamp - so I can go back to older versions if needed) and then push them to the registry - only layers not yet in the registry are transferred over the network.
On the productive Docker servers I can then pull the updated images. And again, only layers not already present on the productive Docker server are transferred over the network.
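
A minimal sketch of that workflow (the registry hostname registry.example.com:5000, the image name myapp and the timestamp tag are placeholders):

# build the image and tag it with :latest and with a timestamp
docker build -t registry.example.com:5000/myapp:latest .
docker tag registry.example.com:5000/myapp:latest registry.example.com:5000/myapp:2024-05-01_1200

# push both tags - only layers missing from the registry are uploaded
docker push registry.example.com:5000/myapp:latest
docker push registry.example.com:5000/myapp:2024-05-01_1200

# on a productive docker server - only layers missing locally are downloaded
docker pull registry.example.com:5000/myapp:latest
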
I have configured the registry so that only authenticated clients are able to access it - or you can use firewall rules to prevent access from unwanted servers.

I’ve used this “HowTo” for creating the registry (it is a docker image => container): https://www.digitalocean.com/community/tutorials/…
But there are many other HowTos out there on the internet :wink:
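
For reference, a minimal sketch of running the official registry:2 image with htpasswd authentication (user name, password and paths are placeholders; the linked HowTo also covers TLS and other details):

# create an htpasswd file with one (placeholder) user
mkdir -p auth
docker run --rm --entrypoint htpasswd httpd:2 -Bbn testuser testpassword > auth/htpasswd

# run the registry with basic authentication enabled
docker run -d -p 5000:5000 --restart=always --name registry \
  -v "$(pwd)/auth:/auth" \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2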

Maybe this is an idea for you, too?


Generally the approach @matthiasradde drafted is the way to go! Running (Docker/OCI) containers without a private registry is painful - even worse if you run a multi-node setup based on Docker Swarm or Kubernetes.

If one of your networks is in an air-gapped environment, adding all images to the docker save command actually does what you are looking for: it will export each image layer only once. Additionally, you can compress the tar with gzip to get a smaller file:

docker save myimage1:latest myimage2:latest myimage3:latest | gzip > myrelease_latest.tar.gz

Transport the file from the source network to the target network, then import the images into the local image cache of a docker node with:

docker load < myrelease_latest.tar.gz

Especially if the environment is air-gapped, you definitely want to run your own registry and push all transported images from the local image cache of the node you imported the images to.
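
For example (the internal registry hostname is a placeholder), after docker load you can retag the imported images and push them to that registry:

# retag a loaded image for the internal registry and push it
docker tag myimage1:latest registry.internal.example:5000/myimage1:latest
docker push registry.internal.example:5000/myimage1:latest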

Btw. with just a bunch of lines of bash, you can easily parse the images from a docker-compose.yml and use that to render the list of images to export.
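
A minimal sketch of that idea, assuming plain, unquoted image: entries in the docker-compose.yml:

# collect the unique image references from the compose file
IMAGES=$(grep -E '^[[:space:]]*image:' docker-compose.yml | awk '{print $2}' | sort -u)

# export all of them into one compressed tarball
docker save $IMAGES | gzip > myrelease_latest.tar.gz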

If I got the air-gapped part wrong, please ignore my post and stick to what @matthiasradde recommended in his post.

Thanks a lot. Both solutions are worth considering. The Docker registry solution is the recommended way, but it will require some changes to our infrastructure. Your solution can solve our issue in the short term.

Welcome!

Though, I recommend introducing a container registry as soon as possible. Plenty of open source container registries are available:
– Standalone: Harbor, JFrog Container Registry
– Built-in: Nexus3, JFrog Artifactory, GitLab

All of those support authentication out of the box. Nowadays everything is labeled as “for Kubernetes”, but they should run on plain Docker or Docker Swarm as well.