I have set up some automated builds from the “docker” folders in my GitHub repo, and they are working fine, but they rebuild the images every time I make a commit anywhere on my master branch, even if there’s no change to the /docker/myimage folder or /docker/myimage/Dockerfile.
This isn’t strictly a problem, but it does mean my images get rebuilt far more often than necessary (not an issue for me, but your servers are doing a lot of unnecessary work and repeatedly downloading the same things), and any images that are “FROM” or linked to them also get rebuilt. It also makes it tricky to know which version of an image is running on each server, because they all appear to have different image IDs despite being identical. Is there any reason to trigger a rebuild on a GitHub commit trigger if the monitored folder (i.e. /docker/myimage) didn’t change? Are there situations where a change in a parent or sibling folder could alter the image build?
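As a stopgap for local or scripted builds, the “did the monitored folder change?” check can be done with `git diff` between the last two commits. This is a hedged sketch (the folder path and the build command in the usage comment are illustrative, not anything Docker Hub provides):

```shell
#!/bin/sh
# Hedged sketch: helper that reports whether a folder changed in the
# latest commit. The folder path in the usage note is illustrative.
folder_changed() {
  # `git diff --quiet` exits 0 when there are no differences, so we
  # negate it: return 0 (true) when the path changed between HEAD~1
  # and HEAD.
  ! git diff --quiet HEAD~1 HEAD -- "$1"
}

# usage (inside a repo with at least two commits):
#   if folder_changed docker/myimage; then
#     docker build -t myimage docker/myimage
#   fi
```

This only guards a build you trigger yourself; it doesn’t stop Docker Hub’s own commit hook from firing.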
Obviously I could move my /docker folder into a separate GitHub repo as a workaround, but I’ve found it handy to have everything required to build and deploy my project in a single repo.
A second question: why don’t the automated builds appear to take advantage of caching the way local builds do? When I run “sudo docker build .”, Docker usually reuses the cached layers up to the point where my Dockerfile has changed. Since I have some steps that take more than 10 minutes (downloading lots of stuff), this makes the build process MUCH faster. Curious as to why this doesn’t seem to happen on a Docker Hub automated build?
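For context on why local caching helps so much: Docker caches one layer per Dockerfile instruction and reuses layers until the first instruction whose input changed, so ordering slow, rarely-changing steps before frequently-changing ones maximizes reuse. A minimal sketch (base image, packages, and paths are illustrative, not from my actual Dockerfile):

```dockerfile
# Each instruction produces a cached layer; a local build reuses
# layers until the first instruction whose input changed.
FROM ubuntu:14.04

# Slow, rarely-changing step first: this layer is reused on every
# rebuild as long as this line (and everything above it) is unchanged.
RUN apt-get update && apt-get install -y build-essential curl

# Frequently-changing step last: editing application files only
# invalidates the cache from this line down.
COPY . /app
RUN /app/build.sh
```

On a fresh build host (which is presumably what Docker Hub uses for each automated build), none of these layers exist locally, so every step runs from scratch.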