I’ve developed a script that builds images and pushes them to our private registry, and it lets me track which layers have been added or removed between builds.
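The diff itself is straightforward to compute from per-build layer digest lists. A toy sketch with made-up digests (real ones would come from the image manifests, e.g. via the registry API):

```shell
# Hypothetical layer digest lists for the previous and current build,
# one digest per line, sorted (comm requires sorted input)
printf 'sha256:aaa\nsha256:bbb\nsha256:ccc\n' | sort > old_layers.txt
printf 'sha256:aaa\nsha256:ccc\nsha256:ddd\n' | sort > new_layers.txt

# Layers added in the new build (present only in new_layers.txt)
comm -13 old_layers.txt new_layers.txt > added_layers.txt
# Layers removed (present only in old_layers.txt)
comm -23 old_layers.txt new_layers.txt > removed_layers.txt

cat added_layers.txt   # -> sha256:ddd
cat removed_layers.txt # -> sha256:bbb
```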
On a server connected to the registry, updating the image is seamless: only the new layers are pulled, which typically amounts to just a few megabytes. However, I also need to update a second server that is completely offline. Currently the only viable method is docker save, which exports the entire image, often tens of gigabytes. That is extremely inefficient and impractical, short of physically flying over with a USB stick.
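For reference, the whole-image round trip I’m stuck with looks like this (image name is a placeholder):

```shell
# On the online server: export the ENTIRE image, even if only one layer changed
docker save registry.example.com/myapp:latest | gzip > myapp.tar.gz

# ...carry myapp.tar.gz over on physical media...

# On the offline server: re-import the whole archive
gunzip -c myapp.tar.gz | docker load
```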
What I’m looking for is a way to export only the modified layers as .tar files and then apply them on the offline server, ideally by removing outdated layers and injecting the new ones. That would reduce the transfer size dramatically (e.g., ~20 MB instead of 20 GB) and make offline updates manageable.
From what I can tell, Docker doesn’t currently support this kind of granular layer export/import. Is there any existing tooling or roadmap feature that could help with this? Even a low-level workaround would be appreciated.
Thanks for your time, and for all the great work on Docker.
Most of us are not part of the Docker team, if you meant that as addressed to Docker staff, but the community can still help you with ideas. If you want to recommend features directly to Docker, you can do it on the roadmap, where you can also search for existing tickets.
You would still need to pull the images once and store them in your local registry, and both machines would need access to that registry. I’m also not sure how you could restrict which images the second machine can access so it doesn’t pull images that the first one hasn’t pulled; I’ve never had to configure that myself.
But you can also push the images manually to a local registry, so only the new layers are pushed; then pull them on the second machine and retag the images if needed. Of course, this still assumes your second server can reach at least one local server, even if not the internet.
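A minimal sketch of that flow, assuming a plain registry:2 container reachable as registry.lan:5000 from both servers (names are placeholders; an HTTP-only registry also has to be listed under "insecure-registries" in each daemon's /etc/docker/daemon.json):

```shell
# On the first (online) server: retag and push; only layers the registry
# doesn't already have are uploaded
docker tag myapp:latest registry.lan:5000/myapp:latest
docker push registry.lan:5000/myapp:latest

# On the second server: only the missing layers are downloaded
docker pull registry.lan:5000/myapp:latest
docker tag registry.lan:5000/myapp:latest myapp:latest
```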
If it is so offline that it has no access to anything at all, but you can reach it from the first machine, you could run the local registry on the offline machine itself. You would still store the images both in the registry and in the Docker data root, but updates would be faster because only the new layers are transferred.
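A sketch of that setup, assuming the offline server is reachable as offline-host from the first machine (placeholder hostname; the same insecure-registries caveat applies for an HTTP registry):

```shell
# On the offline server: run the registry itself
docker run -d --name registry -p 5000:5000 --restart=always registry:2

# On the first machine: push to it; only new layers cross the wire
docker tag myapp:latest offline-host:5000/myapp:latest
docker push offline-host:5000/myapp:latest

# On the offline server: pull from its own registry and retag
docker pull localhost:5000/myapp:latest
docker tag localhost:5000/myapp:latest myapp:latest
```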
If you want to propose a feature that lets one Docker host act as a registry for another, I encourage you to do so on the roadmap I linked above.
I went ahead and created a feature request on the Docker roadmap to suggest docker save layer / docker load layer for exporting/importing layers directly. I think it could help a lot with truly offline scenarios.
In parallel, I’ll also look into skopeo, since it seems closer to what I’m trying to achieve in the short term.
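One skopeo property that looks useful here: its dir: transport writes each layer as a separate blob file named by its digest, so an incremental copy tool should only have to move the new layers. A sketch, with placeholder registry names and paths:

```shell
# Online side: dump the image into a directory; layers land as blob files
# named by digest next to manifest.json
skopeo copy docker://registry.example.com/myapp:latest dir:/mnt/usb/myapp

# Copy only blobs the destination medium doesn't already have
rsync -av --ignore-existing /mnt/usb/myapp/ /mnt/offline-copy/myapp/

# Offline side: push the directory into the local HTTP registry
skopeo copy --dest-tls-verify=false dir:/mnt/offline-copy/myapp \
    docker://localhost:5000/myapp:latest
```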