Recently I ran into the following issue:
Ubuntu 18.04 with Docker 19.03.8, as well as other machines running older versions.
A Docker container runs a Node.js application that copies large files from one location to another via mounted directories.
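The copy itself is unremarkable; roughly speaking it is a stream-based copy like the sketch below (a simplified illustration, not the actual application code; the paths are made up, and stream/promises needs Node 15+):

```typescript
import { createReadStream, createWriteStream } from "fs";
import { pipeline } from "stream/promises";

// Copy one large file between two bind-mounted directories.
// The application's own heap/RSS stays small (bounded by the
// stream buffers), but every page read and written passes
// through the kernel page cache on the host.
async function copyFile(src: string, dst: string): Promise<void> {
  await pipeline(
    createReadStream(src),  // e.g. /mnt/source/bigfile.bin
    createWriteStream(dst), // e.g. /mnt/target/bigfile.bin
  );
}

copyFile("/mnt/source/bigfile.bin", "/mnt/target/bigfile.bin")
  .catch((err) => { console.error(err); process.exit(1); });
```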
According to docker stats, this container is using about 75-80% of all available memory.
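As far as I understand, docker stats gets that number from the container's cgroup. Here is a sketch of inspecting it directly (this assumes cgroup v1 under the default /sys/fs/cgroup layout; the container ID is a placeholder):

```typescript
import { readFileSync } from "fs";

// cgroup v1 path for a container's memory controller; the ID is a placeholder.
const cg = "/sys/fs/cgroup/memory/docker/<container-id>";

// usage_in_bytes counts page cache as well as process memory.
const usage = Number(readFileSync(`${cg}/memory.usage_in_bytes`, "utf8"));

// memory.stat breaks the usage down, e.g. "cache", "rss", "total_inactive_file".
const stat = Object.fromEntries(
  readFileSync(`${cg}/memory.stat`, "utf8")
    .trim()
    .split("\n")
    .map((line): [string, number] => {
      const [key, value] = line.split(" ");
      return [key, Number(value)];
    }),
);

console.log("usage_in_bytes:", usage);
console.log("cache:", stat.cache, "rss:", stat.rss);
// docker stats reports roughly the usage minus the reclaimable file cache
// (which stat field it subtracts depends on the Docker version).
console.log("usage minus cache:", usage - stat.cache);
```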
This causes processes in other containers to start swapping heavily, presumably because they don't see any available memory.
However, when checking the host with vmstat, it turns out that the memory being used is buffer/cache memory, which the kernel can reclaim whenever something else needs it. So even if there isn't much "free" memory, that shouldn't be a problem, right?
See this nifty page: https://www.linuxatemyram.com/
Or is "free" the absolute number used to decide whether memory can be reclaimed/is available, with everything else ignored?
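As far as I know, the kernel itself makes this distinction in /proc/meminfo: MemFree is memory nobody is using at all, while MemAvailable is the kernel's estimate of what could be handed out after reclaiming cache and buffers. A quick sketch to compare them:

```typescript
import { readFileSync } from "fs";

// Parse /proc/meminfo into a map of { field: kB }.
const meminfo = Object.fromEntries(
  readFileSync("/proc/meminfo", "utf8")
    .trim()
    .split("\n")
    .map((line): [string, number] => {
      const [key, value] = line.split(/:\s+/);
      return [key, parseInt(value, 10)]; // values are in kB
    }),
);

// MemFree: memory nobody is using at all.
// MemAvailable: the kernel's estimate of what could be handed out
// without swapping, i.e. free memory plus reclaimable cache/buffers.
console.log("MemFree (kB):     ", meminfo.MemFree);
console.log("MemAvailable (kB):", meminfo.MemAvailable);
console.log("Buffers (kB):     ", meminfo.Buffers);
console.log("Cached (kB):      ", meminfo.Cached);
```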
Setting vm.overcommit_memory to 1 seems like an extreme option. I'm not sure how everything would behave if applications were constantly pushing each other's pages out of memory.
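As I understand it, overcommit_memory only changes how the kernel accounts virtual memory commitments (0 = heuristic, 1 = always allow, 2 = strict), not how cache is reclaimed. For reference, a sketch that reads the current policy and the commit accounting:

```typescript
import { readFileSync } from "fs";

// 0 = heuristic overcommit (default), 1 = always allow, 2 = strict accounting.
const policy = readFileSync("/proc/sys/vm/overcommit_memory", "utf8").trim();
console.log("vm.overcommit_memory:", policy);

// CommitLimit / Committed_AS show how far the system is overcommitted.
for (const line of readFileSync("/proc/meminfo", "utf8").split("\n")) {
  if (line.startsWith("CommitLimit") || line.startsWith("Committed_AS")) {
    console.log(line);
  }
}
```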
Am I misunderstanding something here? Who decides whether a process in a container can get a given amount of RAM? Is it the Linux kernel, or does Docker do something in its container logic first?