Is there good practice to ship and update C++ binaries with a docker image without pulling the whole image?

Hello,
I am using docker on linux to compile and ship C++ applications.

Today my process is as follows:

In a docker file I describe all dependencies required by my C++ application with something like this:

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y libboost-dev libopencv-dev
etc …

Then I use this first image (the build image) to compile my application, sharing the source code on the host with the container through a volume. The generated binaries are stored on the host.
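The build step above might be run like this (a sketch only; the image name, paths, and build commands are placeholders for illustration):

```shell
# Mount the host's source and build directories into the build container,
# then compile inside it; the binaries land in ./build on the host.
docker run --rm \
  -v "$(pwd)/src:/src" \
  -v "$(pwd)/build:/build" \
  my-cpp-build-image \
  sh -c "cd /build && cmake /src && make -j$(nproc)"
```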

Then I create a second image (the run image) from the first one (the build image), adding my binaries and an entrypoint that executes them.
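A minimal Dockerfile for that second image might look like the following (the image and binary names are hypothetical):

```dockerfile
# Start from the build image described above and add the compiled binary.
FROM my-cpp-build-image
COPY build/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```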

Finally, on my customer’s computer I pull the run image (the second one) and use docker compose to run it.
Now if I have an update to my source code, I regenerate the second image and perform a “docker pull” on my customer’s computer, and I can see that only the last layer is downloaded.

Everything works well.

But there is a use case in which my customer can’t connect the computer to the Internet. He can only plug a USB key into it.

What I would like is the ability to send my customer only the last layer of the “run” image, which he copies onto the USB key. Then a script on the “unconnected” computer would update the image with the new last layer.

It seems that it is not possible to export only one layer of an image (even if all the underlying layers are identical). Is that right?

Do you know a trick (or third-party software) that could help me update my code without shipping the whole image?

You should look into multi-stage builds, which help this specific process along, but require a relatively recent Docker. Or, you can create the second image from a base image and add your binaries to it. (Read the link, it has examples of both techniques.) Either way, you should be able to get the second (run) image to contain some base OS plus your binaries, but not any of the build toolchain, and it’s likely to be substantially smaller.
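A sketch of the multi-stage approach (package names, paths, and the binary name are illustrative assumptions, not your actual project layout):

```dockerfile
# Stage 1: build environment with the toolchain and dev packages
FROM ubuntu:16.04 AS build
RUN apt-get update && apt-get install -y g++ cmake libboost-dev
COPY src/ /src/
RUN mkdir /build && cd /build && cmake /src && make

# Stage 2: slim run image containing only the binary (plus any runtime deps)
FROM ubuntu:16.04
COPY --from=build /build/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Because the final stage starts from a plain base image, none of the compilers or dev packages from the first stage end up in the shipped image.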

You probably need to use docker save and docker load in this case, to get a working copy of the Docker image onto the removable media. (You probably know that.)
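The save/load round trip might look like this (the image name and mount point are placeholders):

```shell
# On the connected machine: export the full image to a tarball on the USB key
docker save my-run-image:latest -o /media/usbkey/my-run-image.tar

# On the unconnected machine: import the tarball into the local image store
docker load -i /media/usbkey/my-run-image.tar
```

Note that this transfers the whole image, not just the changed layer; with a small multi-stage run image, that whole image can be much smaller than the build image.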

I don’t believe this is possible, no.

(A year or so ago I remember looking at the guts of the docker save tarball format and concluding it might be physically possible but not actually building the tool to attempt it; I also feel like some of the Docker infrastructure has gotten a little trickier in this time.)
