Hello,
I’m interested in the following question:
how to find the best balance for the number of containers on a single host, and how to determine when it’s worth bringing up one more host machine.
This can only be answered by monitoring the host system with something like Munin or other monitoring software and seeing how the host’s resources are being used. Based on the monitoring results, you can plan whether you need another host node.
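If you just want a quick look without setting up a full monitoring stack, something like the sketch below can sample host usage. This is a minimal stand-in for a real tool like Munin, using Python’s psutil; the 80% thresholds are placeholder values, not a recommendation from this thread.

```python
# Minimal host-usage sampler; psutil is a simple stand-in here for a
# real monitoring tool like Munin. Thresholds are placeholder values.
import psutil

CPU_THRESHOLD = 80.0   # percent, assumed limit before adding a node
RAM_THRESHOLD = 80.0   # percent, assumed limit before adding a node

cpu = psutil.cpu_percent(interval=1)     # average CPU usage over 1 second
ram = psutil.virtual_memory().percent    # RAM currently in use

print(f"CPU: {cpu:.1f}%  RAM: {ram:.1f}%")
if cpu > CPU_THRESHOLD or ram > RAM_THRESHOLD:
    print("Host is getting busy; consider bringing up another node.")
```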
Thanks, I understand that. It seems I didn’t describe my question clearly. I meant the useful number of containers on one host node. For example, I have one container with nginx and a few containers with a web app. My question is the following:
How do I determine a useful number of web app containers on one node?
In other words: “a few” is how many, exactly?
I know it depends on the node’s resources, but I would like to find an approach or algorithm.
It is very hard to tell; it depends on the containers’ needs and so on…
What does make sense is to have one container per CPU core if you are hosting something single-threaded. But if you really want an algorithm, I’d use something like this:
containers = server_ram * 0.8 / average_web_app_ram_usage
containers = 80 / average_cpu_usage
Use whichever is lower.
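As a rough sketch of that calculation: the 0.8 RAM headroom and the 80% CPU target come from the formulas above, and the per-container figures are placeholder numbers you would measure for your own app.

```python
# Rough container-count estimate based on the two formulas above.
# All inputs are example values; measure your own app's real usage.

server_ram_mb = 16384    # total RAM on the node, in MB
avg_app_ram_mb = 512     # average RAM used by one web app container, in MB
avg_cpu_percent = 15     # average CPU used by one container, in percent

by_ram = server_ram_mb * 0.8 / avg_app_ram_mb   # keep ~20% RAM headroom
by_cpu = 80 / avg_cpu_percent                   # target ~80% total CPU

containers = int(min(by_ram, by_cpu))           # use whichever is lower
print(f"RAM allows ~{by_ram:.0f}, CPU allows ~{by_cpu:.0f} -> run {containers}")
```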
OK, I hear you, and thank you for understanding my question. Does it make sense to spread different apps’ containers across one node? Will that use the node’s resources more efficiently?
I don’t think I really understand how containers share resources with each other on a single node.
Multiple containers on the same node might be beneficial, but it could also be harmful. It all depends on the application itself… Typical nodes today are small enough that not many resources go to waste (that’s the basic idea of cloud infrastructure: cut the big servers into small ones to optimise usage). So if you put more containers on the same node, it could hurt your overall performance, as they might fight each other for resources.
As I stated earlier, one neat idea is to use multiple containers to host an app written in a language that cannot make use of more than one CPU core, if your node has more than one. Another thing that would make sense is hosting a container that has big RAM needs but nothing else alongside a container that only needs CPU.
In Docker, you can limit the resources a container can use; I’ll let you read the docs about that. Containers are similar to ordinary programs on any host… They share the available resources; the one that needs more gets more, and so on.
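For example, using the Docker SDK for Python (the CLI equivalents are the --memory and --cpus flags), a RAM-hungry container and a CPU-hungry one could each be capped so they don’t fight each other. The image names and limit values below are just placeholders.

```python
# Sketch of pairing complementary containers with explicit limits,
# using the Docker SDK for Python. Images and limits are placeholders.
import docker

client = docker.from_env()

# A memory-heavy service: generous RAM cap, modest CPU share.
client.containers.run(
    "my-cache-image",          # hypothetical image name
    detach=True,
    mem_limit="4g",            # hard RAM ceiling
    nano_cpus=500_000_000,     # ~0.5 CPU (same as --cpus=0.5)
)

# A CPU-heavy service: small RAM cap, more CPU.
client.containers.run(
    "my-worker-image",         # hypothetical image name
    detach=True,
    mem_limit="256m",
    nano_cpus=2_000_000_000,   # ~2 CPUs (same as --cpus=2)
)
```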
It’s become clearer to me! In particular: [quote=“salketer, post:6, topic:28600”]
Another thing that would make sense is hosting a container that has big RAM needs but nothing else alongside a container that only needs CPU.
[/quote]
Thanks a lot!