How to keep Docker builds from eating up all your disk space?

When running builds in a busy continuous integration environment, for example on a Jenkins slave, I regularly hit the problem of the slave rapidly running out of disk space due to many Docker image layers piling up in the cache. The only solution I have right now is to delete the image right after I have built and pushed it: docker rmi -f <my image>.
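Roughly, the per-build flow looks like this (the image name is a placeholder, and BUILD_NUMBER is the usual Jenkins variable):

    docker build -t registry.example.com/myapp:$BUILD_NUMBER .
    docker push registry.example.com/myapp:$BUILD_NUMBER
    # reclaim the space immediately, at the cost of losing the layer cache
    docker rmi -f registry.example.com/myapp:$BUILD_NUMBER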

Q 1. Documentation for the -f option says “Force removal of the image”; just to clarify, does this mean delete the image even if it’s in use by a container, or delete the image even if another Docker build is using it?

Q 2. Is there a way to set a max size on the cache, to stop it eating up all my disk space? Deleting images right after I build them means I cannot utilize the cache for build optimization.

Q 3. If I cannot restrict the cache size, what other options do I have for pruning it? I’ve experimented with running a cronjob every so often that deletes images older than X days etc., but I don’t know how safe this is: in relation to Q 1, what happens if I delete an image from the cache while another Docker build is using it?
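For concreteness, the cronjob is along these lines (the schedule and the seven-day threshold are just placeholders for my “X days”):

    # Every night at 03:00, prune unused images created more than 7 days ago
    0 3 * * * /usr/bin/docker image prune -a -f --filter "until=168h" >> /var/log/docker-prune.log 2>&1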

The cronjob method is not without problems either way, as it is possible for the cache to max out my disk in between runs of the clean-up. I’ve been scratching my head trying to figure out the best solution to this; surely others have solved this problem before!?

Thanks!


Periodic pruning is what I do too (with admittedly mixed success). If your image tags have some structure, you can write a more intelligent script that deletes all but the most recent one, along the lines of the sketch below. I feel like docker rmi is “usefully safe”: it won’t remove an image that’s backing an extant container (running or otherwise), and it won’t delete actual image content that’s a base layer for other images (though it will remove a tag).
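A minimal sketch of that keep-only-the-newest idea, relying on docker images listing the most recently created image first (the repository name is a placeholder):

    REPO=registry.example.com/myapp
    # Skip the newest tag (line 1) and remove every older tag of this repository;
    # xargs -r (GNU) skips docker rmi entirely when there is nothing to delete
    docker images "$REPO" --format '{{.Repository}}:{{.Tag}}' | tail -n +2 | xargs -r docker rmi

Because this is plain docker rmi with no -f, anything still backing a container just fails to delete, which is the behavior you want from a cron script.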

If docker rmi tries to remove a layer that’s potentially a cached base layer for part of a docker build, I’d expect it to behave reasonably: if the rmi happens first, the cached base layer will go away and those steps will need to be rebuilt; if the build step happens first, the layer will be shared across multiple images and will be preserved. I admit I haven’t tested this, though.

(I never use docker rmi -f. This probably goes with never using docker images -a; those are pretty different commands, but both expose or affect internal details of Docker that you rarely actually need.)

Just came across this guide for automatically pruning images and containers that are older than X days. Would be nice to be able to set a disk space limit in Docker Desktop and have it clear out the least recently used images.
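For what it’s worth, Docker Engine’s BuildKit garbage collector can be capped at a storage budget via the daemon config, which is close to that behavior. A minimal sketch, assuming a recent Engine with BuildKit and root access on the host (the 20GB figure is just an example, and you should merge this into any existing daemon.json rather than overwriting it):

    # Cap the build cache; BuildKit's GC evicts old cache entries once the cap is hit
    cat >/etc/docker/daemon.json <<'EOF'
    {
      "builder": {
        "gc": {
          "enabled": true,
          "defaultKeepStorage": "20GB"
        }
      }
    }
    EOF
    systemctl restart docker

There is also a one-shot equivalent, docker builder prune --keep-storage 20GB, which you could run from the same cronjob instead of age-based pruning.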
