Hi,
I am hoping to get some help understanding how the OOM killer works in general, along with any macOS specifics.
I have a Docker container to which I am allocating 2 GiB of memory. Within the container I run a Java process which is also given a max memory of 2 GiB (too much, I know, but I don’t think that is relevant at the moment).
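For concreteness, the setup is roughly the following (the image name, container name, and exact Java flags here are placeholders/assumptions, not my exact command line):

# Limit the container to 2 GiB and give the JVM a 2 GiB maximum heap
docker run -m 2g --name my-app my-app-image \
    java -Xmx2g -jar app.jar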
When I run the container and monitor it with docker stats, the memory usage gets to about 45% before it exits. The Java process exits with code 137, i.e. 128 + 9, indicating it received a SIGKILL.
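(For monitoring I was just watching the MEM USAGE / LIMIT and MEM % columns; the container name is a placeholder as above.)

docker stats my-app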
If I run docker inspect on the container it shows:
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 137,
"Error": "",
"StartedAt": "2020-05-24T21:39:39.5365223Z",
"FinishedAt": "2020-05-24T21:41:33.6247207Z"
},
...
"HostConfig": {
...
"Memory": 2147483648,
...
}
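(For reference, the same fields can be pulled out directly with a Go template instead of reading the full JSON; the container name below is a placeholder.)

docker inspect --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' my-app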
I commit the container and spin it back up to run dmesg (the exact commands are sketched just after the log), and it shows:
[336736.836392] Out of memory: Kill process 90118 (java) score 350 or sacrifice child
[336736.838454] Killed process 90118 (java) total-vm:7548324kB, anon-rss:1060364kB, file-rss:0kB, shmem-rss:0kB
[336737.071919] oom_reaper: reaped process 90118 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
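(The commit-and-rerun steps were along these lines; the container and image names are placeholders, and this assumes the image lets you override the command with dmesg.)

# Snapshot the exited container, then read the Linux VM's kernel ring buffer
# from a fresh container based on that snapshot
docker commit my-app my-app-debug
docker run --rm my-app-debug dmesg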
So the Java process was using just over 1 GiB of resident memory (anon-rss: 1060364 kB ≈ 1.01 GiB) before it was killed.
I tried setting --oom-kill-disable, but this makes no difference.
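(I was passing it at docker run time alongside the memory limit, roughly like this; names are placeholders as above.)

docker run -m 2g --oom-kill-disable --name my-app my-app-image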
I should also mention I have 32GiB of RAM on my host and there is no memory pressure there.
Q1. Why is the Java process being killed?
Q2. Why does docker inspect report "OOMKilled": false when the process was in fact killed by the OOM killer?
Q3. Why does --oom-kill-disable have no effect?
Q4. Where does the total-vm:7548324kB value come from and what does it represent?