Hello,
I am using Docker on Linux to compile and ship C++ applications.
Is there a good practice for shipping and updating C++ binaries in a Docker image without pulling the whole image?
Today my process is as follows:
In a Dockerfile I describe all the dependencies required by my C++ application, with something like this:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y libboost-dev libopencv-dev
etc.
Then I use this first image (the build image) to compile my application, mounting my source code from the host into the container with a volume. The generated binaries end up on the host.
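Concretely, the build step is something like this (the image name, paths, and build commands here are illustrative):

# mount the sources and an output directory from the host
docker run --rm \
  -v "$(pwd)/src:/src" \
  -v "$(pwd)/build:/build" \
  my-build-image \
  sh -c "cd /build && cmake /src && make"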
Then I create a second image (the run image) from the first one (the build image), adding my binaries to it and defining an entrypoint that executes them.
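The run image's Dockerfile is roughly this (the binary name and install path are illustrative):

FROM my-build-image
# copy the compiled binary produced by the build step
COPY build/my_app /usr/local/bin/my_app
ENTRYPOINT ["/usr/local/bin/my_app"]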
Finally, I pull the run image (the second one) on my customer's computer and use docker-compose to run it.
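The docker-compose.yml is along these lines (the service, image, and registry names are placeholders):

version: "2"
services:
  app:
    image: registry.example.com/my-run-image:latest
    restart: always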
Now, when I update my source code, I rebuild the second image and perform a "docker pull" on my customer's computer, and I can see that only the last layer is downloaded.
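In commands, the update cycle looks roughly like this (again, the registry and image names are placeholders):

# on the build machine
docker build -t registry.example.com/my-run-image:latest .
docker push registry.example.com/my-run-image:latest

# on the customer's computer
docker pull registry.example.com/my-run-image:latest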
Everything works well.
But there is a use case in which my customer cannot connect the computer to the Internet; he can only plug a USB key into it.
What I would like is the ability to send my customer only the last layer of the "run" image, so he can copy it onto the USB key. Then a script on the "unconnected" computer would update the image with the new last layer.
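As far as I know, the standard offline mechanism transfers the whole image, which is exactly what I want to avoid:

# on the build machine: export the whole image to a tarball
docker save my-run-image:latest | gzip > my-run-image.tar.gz

# copy the tarball to the USB key, then on the customer's computer:
gunzip -c my-run-image.tar.gz | docker load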
It seems that it is not possible to export only one layer from an image (even when all the underlying layers are identical). Is that right?
Do you know a trick (or third-party software) that could help me update my code without shipping the whole image?