Docker memory usage and how processes running inside containers see it?

Recently I ran into the following issue:

Ubuntu 18.04 with Docker 19.03.8, as well as other machines with older versions.

A Docker container runs a Node.js application, which copies large files from one location to another via mounted directories.

When checking docker stats, it reports that this container is using about 75-80% of all available memory.

This causes other processes in other containers to start swapping heavily, presumably because they don’t see any available memory.

However, when checking the host with vmstat, it turns out that the type of memory being used is buffer memory, which can be reclaimed. So even if there’s not a lot free, that shouldn’t be a problem, right?

See this nifty page: https://www.linuxatemyram.com/

Or is “free” the absolute number being used to determine if memory can be reclaimed/is available? And everything else is ignored?
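
For reference, this is roughly how I was comparing the container view with the host view while the copy ran:

```bash
# Container view: usage/limit and percentage as Docker reports it
docker stats --no-stream

# Host view: "buff/cache" is reclaimable page cache and buffers,
# "available" is the kernel's estimate of memory usable without swapping
free -h

# vmstat splits buffers and cache into separate columns (here in MB)
vmstat -S m 1 3
```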

Setting overcommit_memory to 1 seems like an extreme option. I’m not sure how everything will behave if applications are constantly pushing each other’s stuff out of memory.
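
For completeness, this is the knob I’m referring to (shown only to illustrate, not something I’ve actually changed):

```bash
# 0 = heuristic overcommit (default), 1 = always overcommit, 2 = don't overcommit
cat /proc/sys/vm/overcommit_memory

# Temporary change, reverts on reboot
echo 1 | sudo tee /proc/sys/vm/overcommit_memory
```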

Am I misunderstanding something here? Who decides if a process in a container can access an amount of RAM? Is it the Linux kernel, or is docker doing something in the container logic first?

Is there a reason you don’t apply memory limits on your containers?

That’s an option, but I’m not familiar with the behavior. I wouldn’t want a container suddenly killing the process inside it. I don’t fully know how it works.

That being said, what’s going on behind the scenes here? That’s what I want to know. I really don’t want a quick fix that leaves the reason for the behavior unanswered.

What’s really going on with that memory reporting, and with Docker’s/the kernel’s decisions for allocation based on it?

Since you don’t declare any container limits, each containerized process is potentially fighting for all the resources of your host :wink: One container gone wild could result in OOM kills (triggered by the kernel) of other OS processes (including containers). No one actually runs containers without at least memory limits in a serious environment.

Neither overcommitting nor heavy use of swap solves the problem that a container can claim unrestricted resources from the host.
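
A minimal sketch of what a limit looks like, assuming a made-up image and container name:

```bash
# Hard cap at 512 MiB, soft reservation at 256 MiB; if the container exceeds
# the hard limit and memory can't be reclaimed, the kernel OOM-kills a process inside it
docker run -d --name copier \
  --memory 512m \
  --memory-reservation 256m \
  my-node-copy-app        # hypothetical image

# Equivalent in a compose v2 file (hypothetical service name):
#   services:
#     copier:
#       image: my-node-copy-app
#       mem_limit: 512m
#       mem_reservation: 256m
```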

Indeed, the opposite of what I described may well happen, as you say. A runaway process grabbing way too much memory is just as disruptive as a memory limit that is too low, killing the process too soon.

I’ll have to look into this. That being said, it seems I also misinterpreted the meaning of “buffer” RAM.

If I understand correctly, this is actually a part of RAM that data is written to first, because that is faster, and the data is only later flushed to disk.

That would explain why the buffer RAM was filling up.

Am I seeing that right?
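
For what it’s worth, this is what I was watching on the host while the copy ran:

```bash
# Buffers/Cached are reclaimable; Dirty and Writeback are pages that still
# have to be flushed to disk, which is what grows during a large file copy
grep -E '^(Buffers|Cached|Dirty|Writeback)' /proc/meminfo
```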

Without container limits, the process will “see” plenty of unused memory. Why shouldn’t it use some of it to cache read-ahead data or keep data in memory to increase performance? Those processes will still work even if they can only claim a heavily reduced (or no) buffer.
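
You can actually see that split in the container’s cgroup accounting. A quick sketch, assuming cgroup v1 and the default cgroupfs driver (the path differs with the systemd driver), with “copier” as a placeholder container name:

```bash
# "cache" is page cache attributed to the container's cgroup (reclaimable),
# "rss" is the anonymous memory the process actually holds on to
CID=$(docker inspect --format '{{.Id}}' copier)
grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/docker/"$CID"/memory.stat
```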

May I suggest starting with a restrictive limit first and increasing it until your container runs stably. You might want to consider using Prometheus and Grafana to get long-term measurements.
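
A rough way to iterate on the limit, again with placeholder names:

```bash
# Start restrictive and raise the limit until the container runs stably
docker run -d --name copier --memory 256m my-node-copy-app

# Watch usage against the limit
docker stats --no-stream copier

# Check whether the process inside was OOM-killed under the current limit
docker inspect --format '{{.State.OOMKilled}}' copier
```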

echo 3 | sudo tee /proc/sys/vm/drop_caches comes in three flavors (1, 2 and 3), i.e. levels of cache to drop.

Dropping or clearing them might have unexpected effects depending on the level.
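
The levels map roughly like this (sync first, since only clean caches are dropped):

```bash
sync
echo 1 | sudo tee /proc/sys/vm/drop_caches   # 1: page cache only
echo 2 | sudo tee /proc/sys/vm/drop_caches   # 2: reclaimable slab objects (dentries, inodes)
echo 3 | sudo tee /proc/sys/vm/drop_caches   # 3: both of the above
```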

On linux you might want to try this:
https://unburden-home-dir.readthedocs.io/en/latest/

The magic comes from the simple idea of not storing and keeping live everything inside your home directory.

App cache is also taken into consideration here:
https://readme.phys.ethz.ch/linux/application_cache_files/

Just " Look through /etc/unburden-home-dir.list and either uncomment what you need globally and/or copy it to either ~/.unburden-home-dir.list or ~/.config/unburden-home-dir/list and then edit it there for per-user settings."
