Log rotation inside container not letting go of files

Greetings,

It has recently come to our attention that Java/Node processes running inside containers are not letting go of deleted log files.

That is, if you run lsof | grep -i 'deleted' it shows the log files still held open by a process inside a container.

This is happening for both Java and Node.js processes, which use logback and winston respectively for logging and log rotation. Both of these solutions perform their own log rotation; no external program is involved.
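For reference, the Node side is configured roughly like this (a minimal sketch of winston's built-in File-transport rotation; the path and size limits here are placeholders, not the real application values):

```typescript
import winston from "winston";

// Sketch: winston's File transport rotates on its own once maxsize is
// reached, renaming older files and keeping at most maxFiles of them.
// Filename and limits below are illustrative only.
const logger = winston.createLogger({
  transports: [
    new winston.transports.File({
      filename: "/var/log/app/app.log", // directory is mounted out of the container
      maxsize: 10 * 1024 * 1024,        // rotate after ~10 MB
      maxFiles: 5,                      // keep at most 5 rotated files
      tailable: true,                   // keep writing to the same filename
    }),
  ],
});

logger.info("application started");
```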

This stood out during an inspection because the disk space reported as used did not match the space actually occupied by the files on disk.

Since then this has been tested outside of a container, and it does not occur there.

The log files are created in a directory inside the container, which is also mounted so they can be seen outside the container.

Restarting the container releases these files, though of course it shouldn't be necessary in the first place.

Googling for this only turns up questions about the logs of Docker itself.

Can anyone shed some light on this? What could be causing these files not to be released?

Observed on:

Ubuntu 16/18/20

Docker 18/19/20

All installs use overlay2

@kdgdev did you ever resolve this?

I’ve got the same issue … using Node-RED in a Docker container with Winston for logging.

df shows > 250GB of space used whereas du shows < 30GB used. lsof | grep -i 'deleted' shows a huge number of the rotated log files still held open, even though they no longer exist on disk.
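In case it helps anyone reproduce the numbers, the check boils down to something like this (a sketch only, run inside the container; it is just the /proc equivalent of the lsof command above, summing the space still pinned by deleted-but-open descriptors):

```typescript
import * as fs from "fs";
import * as path from "path";

// Walk /proc/<pid>/fd and report descriptors whose target ends in
// "(deleted)". stat on the fd symlink still reaches the inode, so we
// can total up the space that df counts but du no longer sees.
let pinnedBytes = 0;

for (const pid of fs.readdirSync("/proc").filter((d) => /^\d+$/.test(d))) {
  const fdDir = path.join("/proc", pid, "fd");
  let fds: string[];
  try {
    fds = fs.readdirSync(fdDir);
  } catch {
    continue; // process exited or permission denied
  }
  for (const fd of fds) {
    const fdPath = path.join(fdDir, fd);
    try {
      const target = fs.readlinkSync(fdPath);
      if (target.endsWith("(deleted)")) {
        const size = fs.statSync(fdPath).size; // size of the unlinked file
        pinnedBytes += size;
        console.log(`pid ${pid} fd ${fd} -> ${target} (${size} bytes)`);
      }
    } catch {
      // descriptor closed between readdir and readlink; ignore
    }
  }
}

console.log(`total space held by deleted-but-open files: ${pinnedBytes} bytes`);
```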

And as you also mention, restarting the container immediately frees them all and brings df back to normal.

No, it's still happening, though less frequently. Controlled restarts of the containers are still required.