Continuous Docker builds causing space issues


Our Docker builds in Jenkins are run with this command and options:

> docker build --rm=true --no-cache=true --pull=true -t <repo>/<image> --file=<dockerfile location> .

So we’re telling Docker not to use the cache and to remove intermediate containers. When running with these options we end up with a bunch of fairly large images whose repo and tag are both <none>, which show up when running ‘docker images -a’.
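If it helps anyone reproduce this, those <none>:<none> entries are what Docker calls dangling images, and they can be listed on their own with a filter (this assumes a Docker version that supports the `dangling` filter):

```shell
# List only the dangling images (untagged layers left behind by rebuilds)
docker images --filter "dangling=true"

# -q prints just the image IDs, which is handy for scripting cleanup
docker images --filter "dangling=true" -q
```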

When I switch the option to use the cache, it doesn’t create additional images since it reuses the existing ones; this all makes sense.

I’m noticing that when we run with --no-cache=true, the additional images it creates increase the Data Space and Metadata Space used:

Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 8.13 GB
Data Space Total: 107.4 GB
Data Space Available: 40.78 GB
Metadata Space Used: 11.14 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.136 GB

I’m also noticing that when running with --no-cache=true, the Docker build output shows several entries like the one below, where the intermediate container 328a077e40b7 gets deleted but 2e40b8143997 does not. I’m assuming this deletion is because of --rm=true.

Step 19 : RUN mkdir -p /var/log/mariadb/
 ---> Running in 328a077e40b7
 ---> 2e40b8143997
Removing intermediate container 328a077e40b7

Why doesn’t the --rm=true parameter also delete the other IDs like 2e40b8143997? And what’s the difference between the intermediate container it deletes and the one it doesn’t?
Is our only longer-term solution for saving build space either to stop building with --no-cache=true, or to run a command periodically that removes all images (assuming we push them and don’t need them on that system anymore)?
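For what it’s worth, the periodic cleanup doesn’t have to remove all images; a sketch of removing only the dangling <none>:<none> ones might look like this (the `docker image prune` form assumes Docker 1.13 or later):

```shell
# Remove only dangling (<none>:<none>) images; tagged images are left alone
docker rmi $(docker images --filter "dangling=true" -q)

# On Docker 1.13+, the same cleanup is a single built-in command
docker image prune -f
```

Either of these could run as a scheduled step after the push, so the build host doesn’t accumulate layers.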



Same issue here. I have a Jenkins job just for cleaning up old images.

@cliffom “I have a Jenkins job just for cleaning up old images.”

Is that something you can share details of here?