You should start with “make it right” before you head into optimization.
If this Dockerfile is what you concluded after reading the documentation, I highly recommend reading it again…
This was a Dockerfile I pieced together from a couple of examples I found on installing SQLite and Flask, and it does work, at least for my test case. I admit I only started researching and learning Docker last Friday. But okay, I will do more digging through the docs.
It is functioning for my small personal micro-project, but of course, learning the “right” way is what I am after.
The second sentence emphasizes what multi-stage builds are actually used for:
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.
Multi-stage builds really just let you perform the heavy lifting, like compiling an application, in an earlier stage, so that you can simply copy the compiled artifact(s) into your final image. Apart from that, there is no coupling between the stages.
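To make that concrete, here is a minimal sketch of the idea (the image tag, requirements.txt, and the app/ layout are assumptions for illustration, not your actual files):

    # stage 1: do the heavy lifting (installing dependencies) in a throwaway stage
    FROM python:3.12-slim AS builder
    COPY requirements.txt .
    RUN python -m venv /opt/venv && /opt/venv/bin/pip install -r requirements.txt

    # stage 2: start from a clean base and copy only the finished artifact
    FROM python:3.12-slim
    COPY --from=builder /opt/venv /opt/venv
    ENV PATH="/opt/venv/bin:$PATH"
    COPY app/ /app/
    WORKDIR /app
    CMD ["python", "app.py"]

Everything created in the builder stage that is not explicitly copied over is simply left behind, which is the whole point.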
Now we come to the optimization part.
Each instruction in the Dockerfile creates a new image layer. So try to chain as many commands as possible into a single instruction, and COPY/ADD as many files as possible in a single step.
Your example transformed would look like this:
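(A sketch only; I am guessing the sqlite3/flask packages and the app/ layout from your description, so adjust the names to your project.)

    FROM python:3.12-slim

    # one RUN instruction, i.e. one layer, for the whole installation step
    RUN apt-get update \
        && apt-get install -y --no-install-recommends sqlite3 \
        && rm -rf /var/lib/apt/lists/* \
        && pip install --no-cache-dir flask

    # one COPY instruction for all application files
    COPY app/ /app/

    WORKDIR /app
    CMD ["python", "app.py"]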
As long as you don’t need ADD’s implicit extraction of tar files, use its lightweight alternative COPY (which really just copies). If possible, create a subfolder and move all folders and files into it; that lets you copy the content of the subfolder to the root of the container with a single COPY instruction, as shown below.
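For example, with a layout like this on the host (the folder name is just an assumption):

    # host side:
    #   Dockerfile
    #   app/        <- everything the image needs lives in here
    #
    # Dockerfile side: a single COPY puts the folder's contents at the container root
    COPY app/ /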