Some way to clean up / identify contents of /var/lib/docker/overlay

Cleaning the log files cleaned up 10GB for me :stuck_out_tongue:

truncate -s 0 /var/lib/docker/containers/*/*-json.log

You may need sudo

sudo sh -c "truncate -s 0 /var/lib/docker/containers/*/*-json.log"

I'm running a typical home server with Plex, Nextcloud, Pi-hole and a couple of other things in Docker. I just tried out some steps to reduce Docker's disk usage and I thought you might be interested.

After running docker system prune -a, the usage looked like this:

user@Docker:/var/lib/docker$ sudo du -h -d 1
[sudo] password for marius: 
3.2G    ./vfs
20K     ./builder
20K     ./plugins
13G     ./overlay2
72K     ./buildkit
4.0K    ./trust
148K    ./network
2.2G    ./volumes
28M     ./image
4.0K    ./tmp
4.0K    ./runtimes
132M    ./containers
4.0K    ./swarm
18G     .

After running docker-compose down for every container and pruning again:

3.2G    ./vfs
20K     ./builder
20K     ./plugins
1.3G    ./overlay2
72K     ./buildkit
4.0K    ./trust
148K    ./network
2.2G    ./volumes
16M     ./image
4.0K    ./tmp
4.0K    ./runtimes
360K    ./containers
4.0K    ./swarm
6.6G    .

Then I removed /var/lib/docker (of course I saved /var/lib/docker/volumes beforehand and copied it back afterwards) and brought all my containers back up. The disk usage now looks like this:

16K     ./plugins
11G     ./overlay2
88K     ./buildkit
4.0K    ./trust
128K    ./network
2.2G    ./volumes
14M     ./image
4.0K    ./tmp
4.0K    ./runtimes
712K    ./containers
4.0K    ./swarm
13G     .

So it looks like docker system prune -a does not get rid of all unused files, as the usage now is quite a bit lower than it was at the beginning with the same containers running.
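
A quick way to compare what Docker itself accounts for against what du reports is docker system df; the -v flag breaks the numbers down per image, container and volume:

docker system df
docker system df -v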


I had the same issue. These two commands worked for me:

"docker system prune -a" freed up disk space from /var/lib/docker/overlay2
"docker volume rm $(docker volume ls -qf dangling=true)" freed up disk space from /var/lib/docker/volumes


It works. I finally found the root cause: application log files kept inside the container keep growing.
Thanks!

Miracle worker! No reboot needed! 22 gigs saved.

Hi All!
This did the trick for me:
docker builder prune
Total reclaimed space: 45.86GB
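
If you want to keep recent build cache and only prune older entries, docker builder prune also accepts a filter (the until filter should be available on reasonably recent Docker versions; 168h here is just an example):

docker builder prune --filter "until=168h"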


Just freed 213GB… Thank you for the advice.

Total reclaimed space: 9.126GB
Thank you

Docker by default does not limit the log file size. A small Docker service at my company ran for over a year, accumulated 70GB of logs and blew up our disk. Instead of deleting the log file and doing magic tricks, you should keep the log files small in the first place. I list below some methods to keep your Docker logs manageable.

  1. If you use the Docker CLI, add --log-opt to your docker run command:
    --log-opt max-size=10m --log-opt max-file=5

  2. If you use docker-compose, add the logging configuration:

<service_name>
  logging:
    max-size: "10m"
    max-file: "5"
  3. If you are lazy, you can add the default logging configuration to the Docker daemon's /etc/docker/daemon.json, but make sure to set it up again on any new machine:
{
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}

Reference: Configure logging drivers | Docker Documentation
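
One thing worth adding: the daemon.json defaults only apply to containers created after the change, and the daemon has to be restarted to pick them up. Roughly (assuming systemd and docker-compose):

sudo systemctl restart docker                   # reload the daemon with the new defaults
docker-compose up -d --force-recreate           # recreate containers so they use the new log options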

@dattran2346 Just something to note: the docker-compose docs show max-size and max-file as being under the options block.

<service_name>
  logging:
    options:
      max-size: "10m"
      max-file: "5"
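
To check whether the log options actually took effect on a given container, the log configuration shows up in docker inspect (replace <container> with your container name or ID):

docker inspect --format '{{ .HostConfig.LogConfig }}' <container>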

I had the same error and this command worked perfectly:
docker system prune --all --volumes --force
Total reclaimed space: 82.5GB
Thanks!

I must add that it will remove everything, including volumes. Run it only when you don't have anything to keep or you already have a backup. I guess you know that, but other Docker users may try the commands they find (not a good idea, though) and lose everything.
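
If you do want to run the --volumes variant, backing up any named volume you care about first is cheap. A minimal sketch, with mydata as a hypothetical volume name:

docker run --rm -v mydata:/data -v "$PWD":/backup alpine tar czf /backup/mydata.tgz -C /data .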


That find command helped me find my problem: /tmp was full inside the container.

The solution for me was to empty it:

docker exec -it <container> bash -c 'rm -rf /tmp/.*'

Thank you!

To investigate, you may need to enter the container and run du.

My problem was that /tmp inside the container was full.

The solution for me was to empty it (run outside the container):

docker exec -it <container> bash -c 'rm -rf /tmp/.*'

Warning: this solution may not work for every case, and some files in /tmp can be in use. Try with caution.
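
A less aggressive variant, assuming find with -delete is available inside the container, is to only remove entries in /tmp that have not been modified for a day:

docker exec <container> sh -c 'find /tmp -mindepth 1 -mtime +1 -delete'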

I tried

docker system prune -a

No luck… So I checked my log sizes:

du -shc /var/lib/docker/containers/*/*.log

They were already fine, since I use

logging:
  options:
    max-size: "10m"
    max-file: "5"

Then I checked my overlay2 folders:

du -shc /var/lib/docker/overlay2/*/diff

found 182G being used! So I identified the major offenders:

du -s /var/lib/docker/overlay2/*/diff | sort -n -r

Top one had 167G for a single folder, ouch.
Then I linked the overlay2 folder back to the container with:

docker inspect $(docker container ls -q) | grep 'FOLDERIDHERE' -B 100 -A 100

Now that I had identified the container, I did a docker-compose down; docker-compose up on it.
Overlay2 folder was cleaned up and I am 167G richer!
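
Instead of grepping the full inspect output, the mapping from container name to overlay2 directory can also be pulled out directly (assuming the overlay2 storage driver):

docker inspect --format '{{ .Name }}: {{ .GraphDriver.Data.UpperDir }}' $(docker ps -q)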

That command does not touch running containers.

This is where the overlay layers are, including the container layers.

If you have logs inside the containers, and not just the standard output and error streams written into files on the host, that can use a lot of space. Or, as @sonysantos wrote, the /tmp folder can contain a lot of data as well. Of course this is not limited to /tmp. Your apps can have other temporary folders too, or folders you didn't know about that are not temporary. So if you don't know what used the space, you can lose data. The du command suggested by sonysantos is useful. You can run it in the container like this:

du -sh /*

and see which folder holds most of the data. Be careful, because in the case of many small files it can take a very long time to calculate the sizes, and you can get permission denied issues as well.

You can also inspect the container with docker container inspect, go to the folder of that container (at least on Linux), and use du or ncdu from your host. ncdu is a more convenient way to browse the files interactively, see the sizes of folders and remove what you don't want to keep. You can also exclude external drives so it would not calculate the sizes on a bind-mounted large disk.
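
For example, something like this should let you browse a running container's writable layer from the host, assuming the overlay2 driver and ncdu installed (-x keeps ncdu from crossing into bind mounts on other filesystems):

merged=$(docker inspect --format '{{ .GraphDriver.Data.MergedDir }}' <container>)
sudo ncdu -x "$merged"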

It deletes the containers, which also means deleting the filesystem layers of the deleted containers.

Did you know that you could also use this command to show the container sizes?

docker container ls --size
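
The output can also be trimmed down to just names and sizes, which is easier to scan:

docker ps --all --size --format 'table {{.Names}}\t{{.Size}}'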

docker system prune -af works fine for me when I'm using the default Docker daemon configuration. But when docker/overlay2 is configured from /etc/docker/daemon.json to another path on the system, this is not working anymore and I need to manually delete the folder contents. Has anyone experienced a similar issue? I'm using BuildKit and the cache is huge.
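
One thing worth checking in that setup is whether the daemon is actually using the configured data root; docker info reports it, and the BuildKit cache can also be pruned explicitly (assuming the buildx plugin is installed):

docker info --format '{{ .DockerRootDir }}'
docker buildx prune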

Thanks, it works great! :grinning:

I was able to find that it was my logs causing it with sudo df -h. After that, I fixed the log size issues first at the container level, and once I saw a positive change at the global (system) level, I created daemon.json according to this article:

This effectively limits the size and number of my logs. Once these settings were applied, I freed up approximately 122GB of disk space. Hope this helps.
