Docker containers memory consumption

I’m working on a project where I’ve split the application into multiple Docker images, and I’m running around 5 containers, each from its own image, following the “one process per container” rule.

For this I’m using a BeagleBone Black, which has only 480 MB of memory. Sometimes, after the application has been running for a while, it crashes with an out-of-memory exception.

So I was wondering: if I make the images smaller, would the containers consume less memory? How is memory allocated for each container?

What if I group some images/containers into a single running container with more than one process? Would it use less memory?

I would start by looking into multi-stage builds: keep searching!
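To be clear, a multi-stage build shrinks the image on disk rather than the runtime memory of the process, but on a small device it is still worth doing. A minimal sketch, assuming a hypothetical Go application (the `myapp` name and the base images are just placeholders, not taken from your project):

```dockerfile
# Build stage: full toolchain, discarded after the build
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /myapp .

# Runtime stage: only the compiled binary ships in the final image
FROM alpine:3.19
COPY --from=build /myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Only the last stage ends up in the image you deploy; the build toolchain stays behind.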

I don’t think you would gain much by running multiple processes in a single container. By default, memory is not “reserved” per container: the processes running in a container get memory as required. So if one of your processes is eating all the available memory, it will do so even if you combine it with other processes in one container.
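To see where the memory is actually going, you can take a snapshot of per-container usage with `docker stats` (part of the standard Docker CLI):

```shell
# One-shot snapshot of memory usage per running container
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
```

Running this periodically while the application is under load should show which container’s usage keeps growing.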

I suggest you try to discover which of your containers is running out of memory, and examine its application. You can run the containers with memory limits to help find the problem faster.
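For example, something along these lines (the 64 MB limit and image/container names are placeholders; pick values that fit your application, and note that on some ARM boards the kernel’s cgroup memory controller may need to be enabled for limits to take effect):

```shell
# Cap the container at 64 MB; the kernel OOM-kills it if the limit is exceeded
docker run -d --name suspect --memory=64m my-suspect-image

# Later, check whether it was killed for exceeding its memory limit
docker inspect --format '{{.State.OOMKilled}}' suspect
```

The container that gets OOM-killed first under a tight limit is usually the one worth profiling.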