There doesn’t seem to be clear documentation on how Docker Swarm takes computational resources (CPU, memory, …) into account when distributing workloads across nodes.
I am experimenting with two Docker Machine nodes: one created with 512 MB of memory and one CPU, and a more powerful one with 4 GB and 4 CPUs. When scaling a service (nginx containers) to hundreds of replicas, the workload appears to be divided evenly between the two nodes.
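For reference, my setup looks roughly like this (node names are mine, VirtualBox driver assumed, and the worker token placeholder is not filled in):

```shell
# Create two nodes of different sizes with docker-machine (VirtualBox driver)
docker-machine create --driver virtualbox \
  --virtualbox-memory 512 --virtualbox-cpu-count 1 small-node
docker-machine create --driver virtualbox \
  --virtualbox-memory 4096 --virtualbox-cpu-count 4 big-node

# Initialise the swarm on the big node and join the small one as a worker
docker-machine ssh big-node \
  "docker swarm init --advertise-addr $(docker-machine ip big-node)"
docker-machine ssh small-node \
  "docker swarm join --token <worker-token> $(docker-machine ip big-node):2377"

# Create the nginx service and scale it to hundreds of replicas
docker-machine ssh big-node "docker service create --name web nginx"
docker-machine ssh big-node "docker service scale web=200"
```

With this setup, `docker service ps web` shows the tasks split roughly evenly between the two nodes despite the difference in resources.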
So, does Docker Swarm consider a node’s computational resources when scheduling tasks?
Another thing: when scaling the service to a large number of replicas (I tried 1000) on a swarm with only one node, the node is flooded and eventually crashes.
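The single-node test was essentially this (same service name as above, which is my own choice):

```shell
# Single-node swarm: initialise, create the service, and scale it way up
docker swarm init
docker service create --name web nginx
docker service scale web=1000
# The node is overwhelmed well before all 1000 tasks are running
```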
Shouldn’t Docker Swarm avoid flooding a node that cannot handle such a workload?