Some way to clean up / identify contents of /var/lib/docker/overlay

Thanks,
I can confirm that the solution of deleting the log files works:

# to remove all log files
find /var/lib/docker/containers/ -type f -name "*.log" -delete

Do not forget to restart the docker containers:

docker-compose down && docker-compose up -d

or reboot the server to complete the clean-up process:

shutdown -r now
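
Before deleting anything, it can help to see which containers the big logs actually belong to. A quick check like this lists the largest log files first (just a sketch, assuming the default json-file logging driver and the default data root):

# list container log files, largest first
sudo sh -c 'du -h /var/lib/docker/containers/*/*-json.log | sort -rh | head -n 20'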

Worked like a charm for me! Thank u! :vulcan_salute::love_you_gesture:

To identify which overlay folder belongs to which image or container, you may use: https://gist.github.com/epcim/cbe1e51b1f8ae011d84ce7a754401398

I can confirm that in my case docker system prune --all --volumes --force fixed the problem. Running the interactive docker system prune didn’t help. Either the --force flag or the --volumes flag was needed (not sure which).

Thanks, I’ll add that to the clean-up list and see what happens. It’d still be nice to be able to map stuff from the overlay dir to what “owns” it, though.

This is a handy little one-liner I came up with to identify which image(s) own a particular folder in the overlay2 directory:

for I in $(docker image ls |grep -v IMAGE |awk '{print $3}' |sort |uniq); do F=$(docker image inspect $I | grep "ed420aa193d1533d2be0b6799af7434805b990ea963c7ae282ae067dbd1f2b95"); if [ -n "$F" ]; then echo $I; fi; done

That gives you the image ID, which you can then grep for in the output of docker image ls.
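
For readability, here is the same loop split over a few lines (same logic; the long hash is just the example overlay2 folder from above and should be replaced with whichever directory you are chasing):

# find which image references a given overlay2 folder
FOLDER="ed420aa193d1533d2be0b6799af7434805b990ea963c7ae282ae067dbd1f2b95"

for I in $(docker image ls | grep -v IMAGE | awk '{print $3}' | sort | uniq); do
    if docker image inspect "$I" | grep -q "$FOLDER"; then
        echo "$I"
    fi
done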

Actually, you can get all the details from docker inspect.

This one-liner lists the mapping of overlay2 folders to exact RepoDigest information (this helps distinguish folders even for mutable tags like “latest”):

docker image inspect $(docker image ls -q)  --format '{{ .GraphDriver.Data.MergedDir}} -> {{.RepoDigests}}' | sed 's|/merged||g'
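
The same approach works for containers, if you want to map overlay2 folders to container names instead of images (a sketch, assuming the overlay2 storage driver and at least one container present):

# map each container's overlay2 merged dir to its name
docker container inspect $(docker ps -aq) \
  --format '{{ .GraphDriver.Data.MergedDir }} -> {{ .Name }}' | sed 's|/merged||g'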

I found out that the easiest way to clean up that directory (which in my case grew to 52 GB in about 2 months) was to clean up the builder cache by issuing:
docker builder prune

If you want to go one step further, use:
docker builder prune --all

Docs: https://docs.docker.com/engine/reference/commandline/builder_prune/
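
If you’d rather check first how much of that space is actually build cache, or keep recent cache around, something like this works (a sketch; adjust the duration to taste):

# show image / container / volume / build-cache usage
docker system df

# drop only build cache entries older than a week
docker builder prune --filter until=168h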

Cleaning the log files freed up 10 GB for me :stuck_out_tongue:

truncate -s 0 /var/lib/docker/containers/*/*-json.log

You may need sudo

sudo sh -c "truncate -s 0 /var/lib/docker/containers/*/*-json.log"
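
If you’d rather not blank every log, a variant that only touches the big ones could look like this (just a sketch; the 100 MB threshold is arbitrary):

# truncate only json logs larger than 100 MB
sudo find /var/lib/docker/containers/ -name "*-json.log" -size +100M -exec truncate -s 0 {} \;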

I’m running a typical home server with Plex, Nextcloud, Pi-hole and a couple of other things in Docker. I just tried out some steps to reduce Docker’s disk usage and thought you might be interested.

After running docker system prune -a, the usage looked like this:

user@Docker:/var/lib/docker$ sudo du -h -d 1
[sudo] password for marius: 
3.2G    ./vfs
20K     ./builder
20K     ./plugins
13G     ./overlay2
72K     ./buildkit
4.0K    ./trust
148K    ./network
2.2G    ./volumes
28M     ./image
4.0K    ./tmp
4.0K    ./runtimes
132M    ./containers
4.0K    ./swarm
18G     .

After running docker-compose down for every container and pruning again:

3.2G    ./vfs
20K     ./builder
20K     ./plugins
1.3G    ./overlay2
72K     ./buildkit
4.0K    ./trust
148K    ./network
2.2G    ./volumes
16M     ./image
4.0K    ./tmp
4.0K    ./runtimes
360K    ./containers
4.0K    ./swarm
6.6G    .

Then I removed /var/lib/docker (of course I saved /var/lib/docker/volumes beforehand and copied it back afterwards) and brought all my containers back up. The disk usage now looks like this:

16K     ./plugins
11G     ./overlay2
88K     ./buildkit
4.0K    ./trust
128K    ./network
2.2G    ./volumes
14M     ./image
4.0K    ./tmp
4.0K    ./runtimes
712K    ./containers
4.0K    ./swarm
13G     .

So it looks like docker system prune -a does not get rid of all unused files, as the usage is now quite a bit lower than it was at the beginning with the same containers running.
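
For anyone who wants to reproduce that last step, the rough sequence was something like this (a sketch only: it deletes all images and containers, which then have to be re-pulled or rebuilt, and it assumes the default /var/lib/docker data root and systemd):

# stop containers and the daemon
docker-compose down                   # in each compose project
sudo systemctl stop docker

# keep the named volumes, wipe everything else
sudo cp -a /var/lib/docker/volumes /root/docker-volumes-backup
sudo rm -rf /var/lib/docker

# let the daemon recreate /var/lib/docker, then restore the volumes
sudo systemctl start docker
sudo systemctl stop docker
sudo cp -a /root/docker-volumes-backup/. /var/lib/docker/volumes/
sudo systemctl start docker

# bring everything back up
docker-compose up -d                  # in each compose project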

I had the same issue. These two commands worked for me:

"docker system prune -a" freed up disk space from /var/lib/docker/overlay2
"docker volume rm $(docker volume ls -qf dangling=true)" freed up disk space from /var/lib/docker/volumes
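
If you want to see what the second command would remove before running it, you can list the dangling volumes first (harmless to run):

# list volumes not referenced by any container
docker volume ls -qf dangling=true

# detailed per-volume and per-image usage
docker system df -v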

It works! I finally found the root cause: application log files were kept inside the containers and kept growing.
Thanks

Miracle worker! No reboot needed! 22 GB saved.

Hi All!
This did the trick for me:
docker builder prune
Total reclaimed space: 45.86GB

Just freed 213 GB… Thank you for the advice!

Total reclaimed space: 9.126GB
Thank you

Docker by default does not limit the log file size. A small Docker host at my company ran for over a year, accumulated 70 GB of logs, and blew up our disk. Instead of deleting the log files and doing magic tricks, you should keep the log files small in the first place. Below are some methods to keep your Docker logs manageable.

  1. If you use docker-cli, add --log-opt to your docker run command:
    --log-opt max-size=10m --log-opt max-file=5

  2. If you use docker-compose, add the logging configuration:

<service_name>:
  logging:
    max-size: "10m"
    max-file: "5"
  3. If you are lazy, you can add a default logging configuration to the Docker daemon in /etc/docker/daemon.json, but make sure to set it up on every new machine:
{
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}

Reference: Configure logging drivers | Docker Documentation
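
One thing worth adding: changes to /etc/docker/daemon.json only take effect after restarting the daemon, and they only apply to containers created afterwards, so existing containers have to be recreated to pick up the new defaults. Roughly:

# apply the new daemon defaults
sudo systemctl restart docker

# recreate containers so they pick up the new log options
docker-compose up -d --force-recreate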

@dattran2346 Just something to note - the docker-compose docs are showing the max-size and max-file as being under the options block.

<service_name>:
  logging:
    options:
      max-size: "10m"
      max-file: "5"
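
You can confirm the options actually landed on a running container with something like this (a sketch; the container name is a placeholder):

# show the effective log driver and options for a container
docker inspect --format '{{ .HostConfig.LogConfig }}' <container_name>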

I had the same error and this command worked perfectly:
docker system prune --all --volumes --force
Total reclaimed space: 82.5GB
Thanks!

I must add that it will remove everything, including volumes. Run it only when you don’t have anything to keep or you already have a backup. I guess you know that, but other Docker users may just try the commands they find (not a good idea) and lose everything.
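
If there are volumes worth keeping, a simple way to snapshot one before pruning is to tar it out through a throwaway container (a sketch; myvolume is a placeholder name):

# back up a named volume to a tarball in the current directory
docker run --rm -v myvolume:/data -v "$PWD":/backup alpine \
  tar czf /backup/myvolume.tar.gz -C /data .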
