This!
for i in $(find /var/lib/docker/containers/ -type f -name "*.log"); do > "$i"; done
because I don't care about the logs
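If you really don't care about container logs, another option is to cap them via the json-file logging driver so they never grow unbounded. A minimal sketch, assuming you manage Docker through systemd (the size values are just examples, and this overwrites any existing daemon.json, so merge by hand if you already have one):
# Write a daemon.json that limits each container to 3 rotated log files of 10 MB
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
sudo systemctl restart docker
# Note: the new limits only apply to containers created after the restart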
Thanks for taking the time to share this, you’re a lifesaver. Managed to free up 117GB of space from the logs that I really didn’t need.
Total reclaimed space: 248.6GB
Thanks
In my case it was gitlab-runner building Docker images. I remembered a problem I had when building images: Docker should have rebuilt the images after I destroyed the older ones, but it still held on to a lot of cache (intermediate images), not caring that their final image had been removed.
docker builder prune
Causes cleanup of:
Total: 29.01GB
This is the solution and it didn’t seem to break anything (as far as I noticed).
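For anyone who wants to see how much space the build cache is holding before pruning it, something like the following should work (docker system df reports a Build Cache line, and the -a/-f flags remove all cache entries without prompting):
# Show how much space images, containers, volumes and the build cache use
docker system df
# Remove all build cache entries (not just dangling ones) without a confirmation prompt
docker builder prune -a -f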
It worked, thanks. Reclaimed 80GB of space.
I encountered a similar problem where /var was reaching the 99% mark along with overlay2. In our system this was caused by the messages file in the /var/log directory: when you delete that file and stop or restart rsyslog, you clear most of the space taken from /var and overlay2, where docker prune was not working. This is most likely a misconfiguration on our system, but you can try it; it could solve your problem.
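A minimal sketch of that cleanup, assuming the oversized file is /var/log/messages and rsyslog runs under systemd:
# Empty the runaway log file in place, then restart rsyslog so it reopens the file
sudo truncate -s 0 /var/log/messages
sudo systemctl restart rsyslog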
If anyone is still having issues, this article is helpful: Optimizing Docker Storage
I was able to reclaim space with
docker system prune -a -f
Been running Docker for a couple of years without cleanup, with a variable number of containers, but typically just over 80. The 45 TB drive got up to 95% full; when looking for logs and things to clean up…saw the MASSIVE size of overlay2. Searching brought me here. No amount of pruning, nor any other friendly cleanup method suggested here, significantly worked other than cleaning a few gigs. Ended up doing the following:
Ended up recovering 23 TB of the nearly 45 TB consumed.
So, this worked for me: notable downtime, but less than an hour. I can accept this process; I'll add it to the list of things to do when upgrading the OS to a new version (seems like a good time to do it).
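For reference, a generic full reset of the Docker data root looks roughly like the sketch below. This is not necessarily the exact sequence used above, and it deletes every image, container, and volume on the host, so only do it if nothing under /var/lib/docker needs to survive:
sudo systemctl stop docker
# Wipes images, containers, volumes, networks and the overlay2 layer store
sudo rm -rf /var/lib/docker
sudo systemctl start docker
# Re-pull images and recreate containers afterwards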
But a couple things that bother me:
I had NO idea this was a thing. We’ve been watching the RAID grow over the years and had assumed it was legit data. We’d even purchased drives to upgrade the RAID from 45TB to 132TB. How do we get the word out that this “feature” of Docker really sucks?
How can I monitor what is "bloat" and what is "legit" data? I can't seem to figure out a way to distinguish between the two.
Thanks y’all. And great thread, I’m glad I found it, saved me about $3000.
If you are using an old Docker version, who knows what bugs it had, but let's assume there was no bug, and maybe you even kept Docker up to date.
docker system prune -a -f
removes only "unused data", including containers and images. By adding the --volumes flag you can also remove anonymous volumes, which can also be done with another suggested command, docker volume prune -f, but both keep named volumes. Those are volumes you may want to reuse later even if you deleted the container; that's why you assigned a name to them.
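To illustrate the difference (both are standard Docker CLI commands; the volume name is a placeholder):
# Removes stopped containers, unused images, networks, build cache AND anonymous volumes
docker system prune -a -f --volumes
# Named volumes survive the above; list them and remove only the ones you no longer need
docker volume ls
docker volume rm <volume-name>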
It is hard to tell now what caused it. Bugs can happen, but most of the time there is an explanation, even if it sometimes goes beyond the well-known and well-documented cases.
Depends on what you mean by legit, but I don't know any tool that tells you that. If you have a monitoring system which alerts you when the used disk size increases beyond a limit without any obvious reason, you can investigate, and if your monitoring system tracks volume sizes, container filesystems and so on, you can find out if something is bigger than it should be. If you want to recognize a corrupted Docker data dir, that is a hard one. It's like a corrupted disk which can't be saved by a normal user, but specialists might be able to recover the data. You or a tool would need to know exactly how the Docker data root works: which file stores what, in what format, what refers to what, and so on. Then follow the references, save the filenames, and eventually show only the files that are not referred to anywhere. It is actually not impossible, so maybe someone has already done it. I don't know.
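As a rough starting point, the two views below (both already mentioned in this thread) are worth comparing: anything /var/lib/docker holds on disk that docker system df -v cannot account for is a candidate for bloat.
# Per-image, per-container, per-volume and build-cache usage as Docker sees it
docker system df -v
# What the data root actually occupies on disk
sudo du -h --max-depth=1 /var/lib/docker/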
Good, this perfectly solved my problem.
Use this command to display information about the amount of disk space used by the Docker daemon:
docker system df -v
Then use this command to clean the builder cache:
docker builder prune
Try this for a change:
docker builder prune
Absolutely nothing from this topic helped me before I found this script. It will not affect running containers and will actually clear disk space,
so:
sudo du /var/lib/docker/ -h --max-depth=1
df -h
wget https://gist.githubusercontent.com/fayak/3a438426a906d9b85b68bc38ead6d5bb/raw/122d9d1e69d1d7c61e89424a9c71322eb415ea1a/docker-pruner.sh
chmod +x docker-pruner.sh
sudo ./docker-pruner.sh
sudo ./docker-pruner.sh marker
sudo ./docker-pruner.sh check
sudo ./docker-pruner.sh clear
sudo du /var/lib/docker/ -h --max-depth=1
df -h
For me, the following helped. Overlay2 was using a whopping 226G.
sudo systemctl stop docker
sudo chmod 777 /var/lib/docker
sudo chmod 777 /var/lib/docker/overlay2
sudo rm -rf /var/lib/docker/overlay2/*
sudo systemctl restart docker
Update: I also had to clean up /var/lib/docker/image/overlay2/layerdb/sha256/*, otherwise it may fail for previously used images.
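In command form, that extra step was roughly as follows (stopping Docker first is an assumption, to avoid removing layer metadata out from under a running daemon):
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/image/overlay2/layerdb/sha256/*
sudo systemctl start docker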
Hello,
For me this worked:
delete all images from docker
run “docker system prune -a”
add the deleted images again
It works, 25GB faded away.
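In command form, that sequence could look something like this (the image name is a placeholder):
# Remove all local images
docker rmi -f $(docker images -q)
# Clean up everything unused
docker system prune -a
# Pull the images you actually need again
docker pull <image:tag>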
Hi all.
I have the same issue: my small Docker server was out of space and df showed 100% usage, like
zzz@dovpn:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            96M  9.6M   87M  10% /run
/dev/vda1        25G   25G     0 100% /
tmpfs           479M     0  479M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/vda15      105M  6.1M   99M   6% /boot/efi
overlay          25G   25G     0 100% /var/lib/docker/overlay2/%ID%
tmpfs            96M  4.0K   96M   1% /run/user/1000
So, I found this thread. But all the commands like
docker builder prune
docker system prune -a
docker system df -v
gave me about zero savings and about zero info. I was still out of space. My next step was to stop the Docker service and look around. It turned out all my space was wasted by system logs.
du -hd2 /var/log/* | sort -rh | head
showed me some big logs that were eating my disk. So the solution was to truncate or disable the logs, and that's it. For example, in my case:
sudo truncate -s 0 /var/log/{syslog,kern.log}
Hope it helps someone.