We are running one of our applications, JAMA, in Docker containers. Last week we faced an issue where the disk was full for the /var/lib/docker directory, specifically /var/lib/docker/overlay, which seems to contain many container layers. When we check disk space with df -h, the main storage looks sufficient, but when we run df -ih, which displays inodes, we noticed the inodes were fully utilized. As I understand it, inodes hold the filesystem metadata.
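For reference, the checks we ran were along these lines (the overlay path assumes a default Docker install):

df -h /var/lib/docker    # block usage: plenty of free space
df -ih /var/lib/docker   # inode usage: IUse% at 100%
ls /var/lib/docker/overlay | wc -l   # rough count of layer directories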
What we have tried
Tried removing dangling containers
Tried docker prune (the exact commands are below)
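By "docker prune" I mean roughly the following (my recollection of the exact flags may be off):

docker container prune   # remove all stopped containers
docker image prune       # remove dangling images
docker system prune      # remove unused containers, networks and dangling images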
Our query for the Docker gurus out there
Is there any way we can clean this directory?
Or do we need to increase the disk space for this directory every time we hit the storage threshold?
Do we need to reinstall Docker to clear this space?
Any suggestions or insights are highly appreciated. Note that the current usage in the attached screenshot is after we already added another 20 GB to this directory.
Are you aware that you are asking for help on an OS package provided and supported by Red Hat? Red Hat usually ships opinionated modifications (usually good ones) under the hood.
Docker Inc. only supports Docker EE (as in installable and usable) on RHEL. Up to RHEL 7.5, people could use a workaround to install the CentOS version, though this is strongly discouraged on production systems.
I hope you will find someone running that old version of Docker who is able to help you!
I would create a new partition and manually increase the number of inodes from the default setting. Then create a filesystem on the new partition with a mount point of /usr/local/docker. Set this filesystem up to mount at boot. Then fsck and mount the new filesystem, /usr/local/docker.
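As a sketch, assuming ext4 and a hypothetical new partition /dev/sdb1, that would look something like:

mkfs.ext4 -i 8192 /dev/sdb1   # halve the default bytes-per-inode ratio of 16384, doubling the inode count
mkdir -p /usr/local/docker
echo '/dev/sdb1  /usr/local/docker  ext4  defaults  0 2' >> /etc/fstab   # mount at boot (a UUID from blkid is safer)
fsck /dev/sdb1
mount /usr/local/docker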
Stop Docker: systemctl stop docker
Copy the data across: cp -rp /var/lib/docker /usr/local
Once the copy is verified, remove the original: rm -rf /var/lib/docker
Create a symbolic link: ln -s /usr/local/docker /var/lib/docker
Start Docker: systemctl start docker
This should correct your incident of running out of inodes. The reason you are running out of inodes is that the number of small files (under 16 KB) being created on your filesystem is higher than for an average user. You need to do the math: take the total disk space in your current filesystem and divide it by the total number of inodes. You will get a number between 15 and 16 (KB per inode, the default ratio). When you previously ran out of inodes, how much disk space was still available? Let's say 50% of the space was still free. In that case, create your new partition with twice the default number of inodes. This corrects the inode incident.
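With GNU df you can check your current ratio directly (the --output fields are worth verifying on your system):

df -B1 --output=size,itotal /var/lib/docker   # total bytes and total inodes
# e.g. a 100 GiB filesystem at the ext4 default of one inode per 16384 bytes
# gets 6,553,600 inodes; doubling that means formatting with mkfs.ext4 -i 8192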
You may want to read this thread about this known issue:
Docker does not free up disk space after container, volume and image removal #32420 (https://github.com/moby/moby/issues/32420). One user stated: "I stop the docker service nightly and rm -rf /var/lib/docker now, so at least it's 'stable'."
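For completeness, that brute-force workaround would amount to a nightly root cron entry along these lines (destructive: every container, image and volume is rebuilt from scratch each run):

0 3 * * * systemctl stop docker && rm -rf /var/lib/docker && systemctl start docker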
Thanks for sharing the steps, metin. I believe the system owner at my end won't allow us to do such reformatting right now, but we have a clear idea of what is causing the issue and are planning a reinstallation with the latest Docker version, as our current version seems to be one of the main reasons for it. We are running a very old version, as you mentioned earlier.
You are correct. We have 2.7 million inodes and 42 GB of total space for this directory; the math works out to about 15.56 KB per inode. Our research also concludes that we are running into the known issue described in the thread shared above. The problem is that we cannot apply the same workaround as in that thread, because the system is in use almost 24 hours a day, Monday to Friday.
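To confirm where the inodes were going, we used something like this (du --inodes needs GNU coreutils 8.22 or newer):

du --inodes -x -d1 /var/lib/docker | sort -n | tail   # inode count per top-level directory
# in our case the overlay directory dominated the count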
We also got a reply from our application vendor (JAMA) recommending a reinstallation, which will upgrade our Docker version and might entirely prevent the issue from recurring. I will take note of your suggestion and test the same configuration on our test instance. Once we have completed the work on the test instance, I will update here so that any other user who hits a similar issue has a reference.