File not written to bind mount until container ends or "ls" is run inside the container

I’ve tried my best searching for this issue, but I cannot find anything that matches this problem.

On the VM that Docker is running on there is a directory called /mnt, and inside that directory there is an NFSv4 mount at /mnt/logs. The /mnt directory is made available to the container via a bind mount, configured in the docker stack deploy file as:

  volumes:
    - type: bind
      source: /mnt
      target: /mnt
      bind:
        propagation: shared

The application in the Docker container outputs the logs to stdout and also writes them to the file /mnt/logs/app/logs.log. I can see the logs printed to stdout using docker logs, and when I look at the log file on the network drive the file /mnt/logs/app/logs.log exists, but nothing is written to it until the container shuts down (the application is a batch job and typically runs from 5 minutes to 3 hours).

If I go into the container (docker exec -it {name} /bin/bash) and run ls /mnt/logs/app/, the logs are immediately written to the file. I have tested this several times: watching the network mount, no logs are written to the file, and then as soon as I run ls the logs immediately appear.

Any ideas what I should investigate? And apologies if this is the wrong place to ask for troubleshooting help; I’ve never had to ask for community help on a Docker issue before.

FYI, this turned out to be an NFS cache issue; I was able to reproduce the problem outside Docker.

And FYI, if anyone finds this post in the future and is confused about how to solve it, there appear to be three things which can force the NFS cache to flush:

  • The cache buffer filling up
  • The file being closed
  • The parent directory being opened or closed

In my case I created a modified logging handler that, on flush, also scans the parent directory of the log file, forcing the data to be flushed to disk.
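
For anyone who wants something concrete, a rough sketch of that idea is below. This assumes the application uses Python's logging module; the class name, log path, and usage are illustrative rather than my exact code.

  import logging
  import os

  class NFSFlushingFileHandler(logging.FileHandler):
      """FileHandler that also lists the log file's parent directory on flush.

      Opening/reading the parent directory is one of the events that makes the
      NFS client revalidate its cache, so buffered writes become visible to
      other NFS clients without waiting for the file to be closed.
      """

      def flush(self):
          super().flush()
          try:
              # Scanning the parent directory nudges the NFS client into
              # flushing/revalidating the data for this directory's entries.
              os.listdir(os.path.dirname(os.path.abspath(self.baseFilename)))
          except OSError:
              # Never let a transient NFS hiccup break logging itself.
              pass

  # Attach it to a logger as usual:
  logger = logging.getLogger("app")
  logger.addHandler(NFSFlushingFileHandler("/mnt/logs/app/logs.log"))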