I have a Windows 10 Enterprise laptop with 16 GB of RAM and an i7 processor. I run a total of 9 containers that together provide the functionality of a business application. Before starting the containers, Task Manager shows about 3 GB of memory consumed. When I bring the containers up with my compose file, Docker assigns roughly 700 MB to each container by default. One of the containers is MongoDB, and I had to explicitly set `mem_limit` to 3G to make it work. However, all the containers together now consume about 8 GB of RAM, leaving me only around 2 GB free. That is too little to run anything else on the machine.

All of the containers are functionally very lightweight; they mostly just pass data through. I would really like to cap each container at 200 MB, but `mem_limit` is not respected by the Docker engine: Docker still assigns about 700 MB to most of my containers.
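Here is a trimmed-down sketch of what my compose file looks like (the service and image names other than MongoDB are placeholders for my actual services):

```yaml
version: "2.4"            # mem_limit is a compose v2.x option
services:
  mongodb:
    image: mongo
    mem_limit: 3g         # had to raise this to get MongoDB to start
  gateway:                # placeholder; my other 8 services look similar
    image: myapp/gateway
    mem_limit: 200m       # what I actually want, but it is not honored
```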
- Has anyone faced high memory consumption issues with Docker for Windows?
- Has anyone gotten `mem_limit` to work reliably?
I have read through a lot of articles. I cannot use Docker's swarm-style memory/swap limit switches because I am not running in Swarm mode; my application only needs to run on end-user laptops or desktop machines. For context, the sketch below shows the v3-style limits that, as I understand it, are honored only in Swarm mode.
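```yaml
version: "3.7"
services:
  gateway:                  # placeholder service name
    image: myapp/gateway    # placeholder image
    deploy:
      resources:
        limits:
          memory: 200M      # honored by `docker stack deploy` in Swarm mode,
                            # but ignored by plain docker-compose as I understand it
```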
This high memory consumption is really frustrating. I have spent two days researching it without finding a definitive answer on how to optimize my compose file or containers. I even tried applying the `--memory` switch to individual `docker run` commands, with the same results; see the example below.
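This is roughly the `docker run` invocation I tried (container and image names are placeholders for my actual services):

```sh
# start one of the lightweight services with an explicit 200 MB cap
docker run -d --name svc1 --memory 200m myapp/service1
```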
Docker for Windows: 18.06.0, free Community Edition (CE).