Hello all,
I’m looking for a way to automatically load balance my Docker Swarm cluster. For example, say I have a Swarm with 4 nodes (3 of them workers) and there are already 50 containers on the first worker, 30 on the second, and 10 on the third. How can I get Swarm to redistribute the containers evenly across the cluster (30 containers on each node)?
Thank you
docker service update --force <service-name>
will restart and redistribute all containers of a service.
But it might not distribute them evenly across all nodes if some nodes are already extremely busy.
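If you want to check how the tasks of a service are spread across the nodes before and after forcing an update, something like this should work (my-service is just a placeholder for your own service name):

# count the running tasks of a service per node
docker service ps my-service --filter desired-state=running --format '{{.Node}}' | sort | uniq -c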
Hi, thanks for your response. The command docker service update --force <service-name>
updates a service. What I’m actually looking for is something that can:
- Balance workloads for optimal performance.
- Scale and manage computing resources without service disruption.
It’s like the Distributed Resource Scheduler (DRS) on VMware. I don’t know whether something like that exists for Swarm, or maybe as an external tool.
--force
will force the re-creation of the containers, which will lead to (some) re-balancing (doc).
With Docker Swarm, individual containers may experience service disruption while they are re-scheduled.
To avoid this, use the routing mesh or place a reverse proxy in front of the service, and configure rolling updates on the target service to replace one container after another, so there is always a container running to receive and process requests.
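As a rough sketch (the flag values are just examples, not recommendations), a forced rebalance can be combined with a rolling update policy so that only one task is replaced at a time and the new task is started before the old one is stopped:

# replace one task at a time, starting the new task before stopping the old one
docker service update \
  --update-parallelism 1 \
  --update-order start-first \
  --update-delay 10s \
  --force \
  <service-name>

Note that start-first needs enough spare capacity on the nodes to briefly run the old and the new task in parallel.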
There used to be a project called orbiter… but it has been orphaned for 6 years. I have no idea whether there is a successor.
Though, you could build something yourself based on the Prometheus ecosystem: use a metrics exporter like cAdvisor to gather container metrics and node-exporter for node metrics, store them in Prometheus, and use the Alertmanager to trigger actions based on self-defined thresholds.
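A very rough sketch of that idea, skipping Alertmanager and simply polling Prometheus from cron instead: the Prometheus URL, the cAdvisor metric used for counting, the threshold, and the service name below are all assumptions you would need to adapt to your own setup, and it needs curl and jq on a manager node.

#!/bin/sh
# Sketch: force a rebalance when the container count per node drifts too far apart.
# Assumptions: Prometheus reachable at $PROM, cAdvisor scraped with one "instance" per node,
# and a single service to rebalance; adjust everything to your environment.
PROM="http://prometheus:9090"
SERVICE="my-service"
MAX_SPREAD=10   # allowed difference between the busiest and the emptiest node

# Count running containers per node as seen by cAdvisor.
QUERY='count by (instance) (container_last_seen{image!=""})'
COUNTS=$(curl -s --data-urlencode "query=${QUERY}" "${PROM}/api/v1/query" \
  | jq -r '.data.result[].value[1]')

MAX=$(echo "$COUNTS" | sort -n | tail -1)
MIN=$(echo "$COUNTS" | sort -n | head -1)

if [ $((MAX - MIN)) -gt "$MAX_SPREAD" ]; then
  echo "Spread of $((MAX - MIN)) containers between nodes, forcing a rebalance of ${SERVICE}"
  docker service update --force "${SERVICE}"
fi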
Yes, using the Alertmanager seems like a good idea, but I think it’s a limited solution: I would have to redeploy all the services to autoscale the Swarm, and then there would be disruption. I will see what I can find around orbiter; it’s actually what I was looking for, so I will search for an alternative. Thanks a lot.