When running builds in a busy continuous integration environment, for example on a Jenkins slave, I regularly hit the problem of the slave rapidly running out of disk space due to many Docker image layers piling up in the cache. The only solution I have right now is to delete the image right after I have built and pushed it:
docker rmi -f <my image>
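For context, the shell step I run on the slave looks roughly like the sketch below. The image name and the function name are placeholders, not my real job configuration:

```shell
#!/bin/sh
# Sketch of the current workaround: build, push, then immediately
# delete the local image so its layers don't pile up on the slave.
build_push_clean() {
    image="$1"

    docker build -t "$image" . || return 1
    docker push "$image"       || return 1

    # This is the step I'm unsure about: -f forces removal even when
    # something else references the image.
    docker rmi -f "$image"
}
```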
Q 1. Documentation for the -f option says "Force removal of the image"; just to clarify, does this mean delete the image even if it's in use by a container, or delete it even if another Docker build is using it?
Q 2. Is there a way to set a maximum size on the cache, to stop it from eating up all my disk space? Deleting images right after I build them means I cannot use the cache for build optimization.
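The closest thing I've found so far is BuildKit's garbage collection, which can trim the build cache toward a size threshold; I haven't verified whether it applies on my engine version, and the 10GB figure below is just an example value:

```shell
#!/bin/sh
# Possible approach (requires a Docker engine with BuildKit):
# trim the build cache down toward a target size. The threshold
# here is an arbitrary example, not a recommendation.
prune_build_cache() {
    docker builder prune --force --keep-storage 10GB
}
```

There is apparently also a daemon-side equivalent (a `builder.gc` section in daemon.json), which would avoid needing a scheduled job at all, but I haven't tried it.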
Q 3. If I cannot restrict the cache size, what other options do I have for pruning it? I've experimented with running a cronjob every so often that deletes images older than X days, but I don't know how safe this is: in relation to Q 1, what happens if I delete an image from the cache while another Docker build is using it?
The cronjob method is not without problems either way, as the cache can still max out my disk between runs of the clean-up. I've been scratching my head trying to figure out the best solution to this; surely others have solved this problem before!?
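For reference, the cron clean-up I've been experimenting with is roughly the sketch below. It uses `docker image prune` with an `until` filter rather than `docker rmi -f`, on the assumption that `image prune` only removes images not referenced by any container, which seems safer for a scheduled job (the 168h / 7-day cutoff is just my arbitrary choice of X):

```shell
#!/bin/sh
# Cron job body: remove unused images older than 7 days.
# Unlike `docker rmi -f`, `image prune` should skip images that are
# still referenced by a container, so it won't yank them mid-use.
prune_old_images() {
    docker image prune --all --force --filter "until=168h"
}
```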