Container minimum memory allocation

Hello, I am new to docker. Is there a way to guarantee that a container always has enough memory to run correctly? I found an option `--memory` that limits the amount of memory used by a running container, but it is a maximum limit… not a guarantee…
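To illustrate what I mean, this is roughly how the flag is used (image and size are just examples):

```shell
# Hard cap: the container may never use more than 512 MiB of RAM.
# If it tries, the kernel steps in (OOM-kill by default).
# This caps usage; it does not reserve anything up front.
docker run --memory=512m ubuntu:22.04 sleep infinity
```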

Interesting question. I am not 100% sure, but I would expect that this is not quite possible with Linux containers in general. But I am happy to be corrected here. :slight_smile:

My line of reasoning: containers are ‘just’ a process group spawned into independent namespaces by the kernel. The resource consumption can be throttled using cgroups, but the kernel is in charge of scheduling the resources on the hardware itself.
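You can actually see the cgroup limit Docker writes for a container. A sketch, assuming a cgroup-v2 host (the path differs on cgroup v1):

```shell
# Start a container with a 256 MiB cap, then read the limit the
# kernel enforces from the container's own cgroup (cgroup v2 path).
docker run --rm --memory=256m busybox cat /sys/fs/cgroup/memory.max
# On cgroup v1 the file is /sys/fs/cgroup/memory/memory.limit_in_bytes
```

Either way, this is still only an upper bound enforced by the cgroup controller, not a reservation.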

For guaranteed physical memory for a given process, you can rephrase the question as whether this is possible in Linux in general. If so, it should be possible with containers as well (even though it might not be implemented).
Off the top of my head, I would reckon you have to bypass the virtual memory subsystem, so that each process’s memory is mapped directly to a physical region and won’t be swapped out.

But as said, that would be my initial answer; maybe I overlooked something. :slight_smile:

EDIT: If your system does not have a swap region and your process allocates its heap at startup (the JVM can do this), you might come close. But what you are asking (if I understand you correctly) is that the docker-engine checks whether or not the system has X amount of memory available before starting a container — or even allocates the memory exclusively for the processes within the to-be-started container before it is started. As of today that is not possible.
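To sketch the JVM case: you can pin the heap size and force every heap page to be touched at startup, so the physical memory is committed immediately rather than lazily (these are standard HotSpot flags; `app.jar` is a placeholder):

```shell
# Fixed heap: min (-Xms) equals max (-Xmx), and AlwaysPreTouch makes
# the JVM touch every heap page at startup, so the pages are faulted
# in and backed by physical memory before the application runs.
java -Xms512m -Xmx512m -XX:+AlwaysPreTouch -jar app.jar
```

That gets the process its memory at startup, but the kernel can still swap those pages later if there is a swap region and pressure builds up.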
It is an interesting question, though… Can the kernel pre-allocate memory for a process group before the processes are started?

As you say, Linux just doesn’t work this way. There are various practical problems with physical memory overcommit that would also need to be addressed.

The higher-level schedulers around Docker definitely have this capability, though. Containers in Kubernetes pods declare memory requests and limits, and the Kubernetes scheduler won’t place more containers on a single node than the node has memory for. Nomad does something similar. But these are much harder to set up than “just” Docker.
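A minimal sketch of what that looks like in a pod spec (names are made up; the request drives scheduling, the limit becomes the cgroup cap):

```shell
# Hypothetical pod: the scheduler only places it on a node with at
# least 256 MiB of unreserved memory (the request); the container
# runtime enforces the 512 MiB limit as a cgroup memory cap.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "256Mi"
      limits:
        memory: "512Mi"
EOF
```

Note that even here the request is only bookkeeping in the scheduler — the kernel still doesn’t physically reserve those 256 MiB for the pod.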

True, you can reserve the memory, but this is just a weak guarantee, right?
First, the docker engine does not know about other processes on the host, and it is also not actively monitoring the memory consumption of containers running on the docker-engine without any memory constraints.
So if a non-container process or an unconstrained container goes crazy, that is outside of the memory picture.

Furthermore, the reservation part is only a soft limit, which means the container won’t get killed if this limit is exceeded unless the kernel detects memory contention. But I haven’t had a use-case in which I used this extensively, so I am not completely sure about the reservation/soft-limit/hard-limit details…
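For completeness, the two flags side by side (sizes arbitrary):

```shell
# --memory-reservation is the soft limit: under memory pressure the
# kernel tries to push the container back down towards 256 MiB, but
# it may use more while memory is plentiful.
# --memory is the hard cap the container can never exceed.
docker run --memory-reservation=256m --memory=512m ubuntu:22.04 sleep infinity
```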