docker rmi Not Freeing Up Space on Ubuntu 22.04 VM

Operating System: Ubuntu 22.04 LTS
Docker Version: 26.0.0, build 2ae903e
Docker Compose Version: v2.25.0

The /var/lib/docker directory currently occupies 21G of space. When deleting individual Docker images of a project in an Azure Virtual Machine using the docker rmi command, the space is not being freed up. However, using the docker system prune command does free up the space. Previously, the docker rmi command worked properly and released space without any issues. About two weeks ago, we consulted with the Azure team, and they advised us to check with Docker. I do not want to use the docker system prune command, as I may want to retain one or more previous images. I would like to resolve this issue so that using the docker rmi command removes the images and frees up the space as expected.

docker rmi or docker image rm just removes a tag from an image, and the actual image is deleted only if it has no other tag left. If the image you deleted was built on the same system, the size you think you freed goes into the build cache, so it is not physically deleted. An image also has multiple layers, and when docker rmi removes some of them, you can still be left with dangling images on the system. So while you delete some kilobytes, for example, you won't necessarily notice a significant difference on the filesystem.
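
As an illustration of how dangling images typically appear when building repeatedly on the same machine (the image name, Dockerfile and image ID below are only placeholders):

    docker build -t myapp:latest .           # first build: layers are created and tagged myapp:latest
    # change something in the source, then rebuild with the same tag
    docker build -t myapp:latest .           # the new image takes the tag; the old one becomes <none>:<none>
    docker images --filter "dangling=true"   # lists the now-dangling old image
    docker rmi <old-image-id>                # deletes the dangling image record; layers that are still
                                             # referenced by the build cache remain there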

docker system prune also removes dangling images and the build cache, but you can do the same more selectively with docker image prune and docker buildx prune.
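
For reference, the more targeted variants look roughly like this (each one removes data for the whole daemon, so only use them when you do not need to keep any of the affected items):

    docker image prune     # removes dangling images only (asks for confirmation unless -f is given)
    docker buildx prune    # removes unused build cache entries
    docker system prune    # stopped containers, unused networks, dangling images and dangling build cache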

Thank you for the detailed explanation. I appreciate the insights! I've been deleting unused images with <none> tags using the docker rmi command. Previously, after deleting these images, the space on the VM would free up immediately. However, now when I delete an image, it seems to move into the build cache instead of releasing the space directly. To clear this cache, I've started using the docker builder prune command as you mentioned.

That said, I want to avoid using docker image prune or docker system prune because I have around 10 dangling images, and I want to delete only a few specific ones while retaining the others. In the past, using docker rmi was enough to remove specific dangling images and free up space.
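
For context, the workflow I have in mind looks roughly like this (the image ID is just a placeholder):

    docker images --filter "dangling=true"   # list the <none>:<none> images with their IDs
    docker rmi <image-id>                    # remove only the specific dangling image(s) I choose
    docker builder prune                     # then reclaim the build cache (removes all unused cache entries)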

Could you help me understand why this behavior has changed? Earlier, deleting untagged images directly freed up space, but now the space seems to shift into the build cache instead. Is this a recent change in how Docker handles image layers or the build cache?

I'm not aware of any change. Nothing can be completely deleted while something else still uses it. If the dangling image is deleted and you see its size appearing in the build cache, then it was probably part of a built image which had a tag: the tag was deleted, but the layer itself could not be deleted, for example because a container was still using it, so it became dangling. Then you deleted the dangling image, but it was still a layer from a previously built image, so you see its size in the build cache.

So either it worked differently before and I have no idea when that changed, or you had a different kind of dangling image before, like an image pulled from a registry instead of a built one.
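
One way to check whether something still references an image before removing it is to look for containers created from it (the image ID is just a placeholder):

    docker ps -a --filter "ancestor=<image-id>"   # running or stopped containers created from that image
    docker rmi <image-id>                         # the layers are only fully deleted if nothing references them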

We are building the image repeatedly on the same VM using the sudo docker-compose build command. Here's what I observe:

  • I currently have one latest image (1.47 GB) and 20 dangling (no tag) images, each around 1.47 GB. This totals around 31 GB of image data.
  • However, when I run sudo du -sh /var/lib/docker, it shows only 17G for /var/lib/docker.

When I delete a dangling image, 742.5 MB of space moves to the build cache. If I delete two dangling images using docker rmi, 1.47 GB of space shifts to the build cache.

This behavior suggests that the dangling images share layers with the other images, as they were built using the same image cache on the same VM.

Regarding the type of dangling images, these were built images, not pulled from a registry. The repeated builds on the same VM using the cache confirm this.
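
To break those numbers down further, something like this should show how much of each image is shared versus unique, and what the build cache is holding:

    sudo docker system df -v   # per-image shared size vs unique size, plus individual build cache entries
    sudo docker buildx du      # build cache usage per record and how much of it is reclaimable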

Does this align with your understanding of how Docker handles image layers and caching in this scenario?

Yes. But du will not show you the actual size, since the Docker data root is a special filesystem and, due to shared references, some files will be counted multiple times. du -xsh could show you a more accurate number. And of course there is docker system df, but I'm sure you know that already if you know what is in the build cache and what is not.
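
A minimal sketch of the difference, assuming the default data root:

    sudo du -sh /var/lib/docker    # descends into container overlay mounts, so shared data can be counted more than once
    sudo du -xsh /var/lib/docker   # -x stays on one filesystem, giving a more realistic number
    sudo docker system df          # Docker's own accounting of images, containers, volumes and build cache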


Thank you for the explanation and for pointing out du -xsh; it really helped clarify the discrepancy in disk usage. I now understand why deleting dangling images didn't free up space; it's due to shared layers being retained in the build cache.

Appreciate your insights; it was very helpful!
