The main part of the build process is running an installation script that clones a repository from GitHub.
Now if that GitHub repo has been updated, my build process still uses the existing layers when I run `docker build`, which results in an identical image, doesn’t it?
Is there any way to force rebuilding the existing layers so that the installation script is executed again and I actually get an updated image of the application I want to run?
By default, I think I would only get something new if the base image (debian:8 in my case) was updated and I pulled it before the build.
`docker build --no-cache` is probably what you’re looking for here.
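For example (image tag and paths here are just placeholders), `--no-cache` makes the build ignore all cached layers, so every `RUN` step, including the clone/install script, executes again:

```shell
# Normal build: cached RUN layers are reused even if the upstream repo changed.
docker build -t myapp .

# Force every instruction to be re-executed, ignoring the layer cache,
# so the installation script clones the repo fresh.
docker build --no-cache -t myapp .
```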
I almost always use a two-phase build process, where things like source code that can change get pulled in outside of the docker build process and COPYed in. Then if it hasn’t actually changed, you get the “fast” cached build sequence, and if it has, whatever rebuilding is necessary gets done.
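A minimal sketch of that two-phase approach, assuming a hypothetical repo URL and script names: the source is fetched or refreshed outside of `docker build`, and the Dockerfile only `COPY`s it in. Docker checksums the copied files, so an unchanged tree still hits the cache, while any change invalidates the `COPY` layer and everything after it.

```shell
#!/bin/sh
# Phase 1: refresh the source outside the docker build
# (clone on first run, pull on subsequent runs).
git clone https://github.com/example/app.git app 2>/dev/null || git -C app pull

# Phase 2: build; the COPY layer is cached by file checksums.
docker build -t myapp .
```

with a Dockerfile along these lines:

```dockerfile
FROM debian:8
COPY app /opt/app
RUN /opt/app/install.sh
```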
That seems like the smartest solution. So the build actually ‘understands’ whether what’s COPYed in from outside is different than what’s already in the cache? … sounds pretty clever