I can confirm that in my case docker system prune --all --volumes --force fixed the problem. Running the interactive docker system prune didn’t help, so either the --force flag or the --volumes flag was what made the difference (I’m not sure which).
Thanks, I’ll add that to the clean-up list and see what happens. It’d still be nice to be able to map entries in the overlay2 directory back to whatever “owns” them, though.
This is a handy little one-liner I came up with to identify which image(s) own a particular folder in the overlay2 directory:
for I in $(docker image ls | grep -v IMAGE | awk '{print $3}' | sort | uniq); do F=$(docker image inspect $I | grep "ed420aa193d1533d2be0b6799af7434805b990ea963c7ae282ae067dbd1f2b95"); if [ -n "$F" ]; then echo $I; fi; done
That prints the ID of any image that references the folder, which you can then grep for in the output of docker image ls.
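In case it’s useful, here is the same idea as a small script that takes the overlay2 folder name as an argument instead of hard-coding it (the script name, argument handling and output format are my own additions, not something from the original one-liner):

#!/usr/bin/env bash
# Usage: ./find_overlay_owner.sh <overlay2-folder-name>
# Prints the ID and tags of every local image whose metadata mentions that folder.
DIR="$1"
for ID in $(docker image ls -q | sort -u); do
  if docker image inspect "$ID" | grep -q "$DIR"; then
    docker image inspect --format '{{ .Id }} {{ .RepoTags }}' "$ID"
  fi
done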
Actually, you can get all the details from docker inspect. This one-liner lists the mapping of overlay2 folders to the exact RepoDigest information (which helps to distinguish folders even for mutable tags like “latest”):
docker image inspect $(docker image ls -q) --format '{{ .GraphDriver.Data.MergedDir}} -> {{.RepoDigests}}' | sed 's|/merged||g'
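Keep in mind that containers get their own read-write layer under overlay2 as well, so a folder that no image claims may belong to a (possibly stopped) container. Assuming the same GraphDriver fields are exposed for containers on the overlay2 driver, a similar one-liner would be:

docker container inspect $(docker container ls -aq) --format '{{ .GraphDriver.Data.MergedDir }} -> {{ .Name }}' | sed 's|/merged||g'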
I found that the easiest way to clean up that directory (which in my case grew to 52 GB in about 2 months) was to clean up the builder cache by running: docker builder prune
If you want to go one step further, use: docker builder prune --all
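Either way, you can sanity-check how much the build cache is actually taking up before and after with docker system df, which has a dedicated Build Cache line:

docker system df            # note the Build Cache row and its reclaimable size
docker builder prune        # removes dangling build cache entries
docker builder prune --all  # removes all unused build cache, not just dangling entries
docker system df            # the Build Cache row should now be much smaller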
I’m running a typical homeserver with plex, nextcloud, pihole and a couple of other things in docker. I just tried out some steps to reduce docker’s disk usage and thought you might be interested.
After running docker system prune -a the usage looked like this:
Then I removed /var/lib/docker (of course I saved /var/lib/docker/volumes beforehand and copied it back afterwards) and brought all my containers back up. The disk usage now looks like this:
So it looks like docker system prune -a does not get rid of all unused files, as the usage is now quite a bit lower than it was at the beginning, with the same containers running.
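For anyone who wants to try the same thing, the rough sequence I’d expect (assuming Docker is managed by systemd, the default /var/lib/docker location, and a backup path that is just my own placeholder) is something like:

systemctl stop docker                                   # stop the daemon before touching its data directory
cp -a /var/lib/docker/volumes /root/docker-volumes-bak  # keep the named volumes (placeholder backup path)
rm -rf /var/lib/docker                                  # wipe images, containers and the overlay2 directory
mkdir -p /var/lib/docker
cp -a /root/docker-volumes-bak /var/lib/docker/volumes  # put the volumes back before restarting
systemctl start docker
# then re-pull images and bring the containers back up, e.g. with docker compose up -d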
I had the same issue. These two commands worked for me:
“docker system prune -a” freed up disk space from /var/lib/docker/overlay2
“docker volume rm $(docker volume ls -qf dangling=true)” freed up disk space from /var/lib/docker/volumes
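As a side note, recent Docker releases also have a prune subcommand for volumes that does roughly the same thing as the dangling-volume removal above (same data-loss caveat; depending on your version you may need an extra flag to include named volumes):

docker volume prune        # removes unused anonymous volumes
docker volume prune --all  # newer versions: also removes unused named volumes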
Docker by default does not limit the log file size. A small Docker setup at my company ran for over a year, accumulated 70 GB of logs and blew up our disk. Instead of deleting the log file and doing magic tricks afterwards, you should keep the log files small from the start. Below are some methods to keep your Docker logs manageable.
If you use the docker CLI, add --log-opt to your docker run command: --log-opt max-size=10m --log-opt max-file=5
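Put together, a complete run command would look something like this (the image and container names here are just examples):

docker run -d --name my-app --log-opt max-size=10m --log-opt max-file=5 nginx:latest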
If you use docker-compose, add the logging configuration:
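Something along these lines should work in the compose file (the service and image names are placeholders; the options mirror the CLI flags above):

services:
  my-app:
    image: nginx:latest
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"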
I must add that it will remove everything, including volumes. Run it only when you have nothing to keep or you already have a backup. I guess you know that, but other Docker users may just try the commands they find (not a good idea) and end up losing everything.