Docker swarm cluster server addition

Our current architecture is: 3 servers in the Docker Swarm cluster, all managers.
Our new architecture will be: 2 servers as manager nodes and 3 servers as worker nodes (these 3 servers are the existing manager nodes, which will be demoted only after the 2 new servers have been spun up as manager nodes).

We want to know the easiest process to follow, and whether containers will run on the manager nodes or only on the worker nodes. What happens if a worker node is restarted, and what happens if a manager node is restarted?
How do we ensure that the cluster is 100% available here?

Swarm services deployed without a placement constraint will be scheduled on any node, regardless of whether it's a manager or a worker node.
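For example, a sketch of pinning a service to workers only (the service name `web` and the image are placeholders; this needs a running swarm):

```shell
# Hypothetical service "web": restrict its tasks to worker nodes only.
docker service create \
  --name web \
  --replicas 3 \
  --constraint node.role==worker \
  nginx:alpine

# Without the --constraint flag, tasks may be scheduled on managers as well.
```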

The service tasks will be rescheduled on other nodes that satisfy the placement and resource constraints.

If you mean "if one of the two managers you want to use is restarted": the cluster becomes headless, and won't be able to schedule/control swarm-scoped resources, until the manager node is running again.

2 manager nodes is a terrible idea: with 2 manager nodes, both are required for quorum. If either one becomes unhealthy, the cluster is headless.
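The quorum arithmetic is easy to check yourself: Raft needs a strict majority of managers, so the tolerated number of failures is floor((N-1)/2). A quick sketch:

```shell
# Raft quorum: a strict majority of managers must be healthy.
# quorum = floor(N/2) + 1, tolerated failures = floor((N-1)/2)
for managers in 1 2 3 5 7; do
  quorum=$(( managers / 2 + 1 ))
  tolerated=$(( (managers - 1) / 2 ))
  echo "$managers managers: quorum=$quorum, tolerated failures=$tolerated"
done
```

Note that 2 managers tolerate 0 failures, exactly like 1 manager, so the second manager only adds a way to lose quorum.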

You should stick to 3 manager nodes if you want to be able to compensate for 1 unhealthy manager node.
See: https://docs.docker.com/engine/swarm/admin_guide/


What does 100% availability mean for you? That the current workloads should not be restarted or moved?

If you want to keep the current services where they are, I would add labels to the existing nodes and add constraints for those labels to the existing services.
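A sketch of the label-and-constraint approach (node names, the label `tier=legacy`, and the service name are placeholders):

```shell
# Label the existing nodes so the scheduler can identify them.
docker node update --label-add tier=legacy node1
docker node update --label-add tier=legacy node2
docker node update --label-add tier=legacy node3

# Pin an existing service to the labelled nodes.
docker service update --constraint-add node.labels.tier==legacy myservice
```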

Then add the two new managers, but only demote 2 of the old managers, so you still have 3 active managers.
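The join-then-demote sequence could look like this (node names are placeholders; run the join command on the new servers):

```shell
# On an existing manager: print the join command for new managers.
docker swarm join-token manager

# Run the printed "docker swarm join --token ..." command on the 2 new servers.

# Once "docker node ls" shows all 5 managers as Ready/Reachable,
# demote 2 of the old managers to workers:
docker node demote old-node2 old-node3
```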

In general you can run your workloads on managers and workers, no issues. You could apply CPU and memory limits to the workload services so they never over-burden the managers.
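For instance, capping a service so it cannot starve the manager it runs on (service name and limit values are placeholders):

```shell
# Limit a workload service to half a CPU and 512 MiB of memory.
docker service update \
  --limit-cpu 0.5 \
  --limit-memory 512M \
  myservice
```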


@bluepuma77 Many thanks for your suggestion. I have a few more queries.

Suppose we go ahead with a 3-manager and 2-worker setup, and we have 15 containers: how can we distribute these 15 containers across these 5 nodes, i.e. 3 managers and 2 workers?
My second query is: what happens if a worker or manager node goes down?

Without any constraints on the Swarm services, each one will be distributed evenly across all nodes.
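A sketch of spreading 15 tasks across the cluster (service name and image are placeholders; `--replicas-max-per-node` requires Docker 19.03 or newer):

```shell
# 15 replicas spread across the cluster; cap tasks per node so
# a single node never hosts more than 3 of them.
docker service create \
  --name app \
  --replicas 15 \
  --replicas-max-per-node 3 \
  nginx:alpine
```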

If a manager or a worker node fails, the lost service tasks will be re-created on the remaining managers/workers.

From my point of view the challenge is the external reachability of the cluster. For most databases, you can use multiple targets in a connection string.

But for regular DNS for web services, the target IP needs to be reachable. So you need a highly available load balancer in front of the cluster, or a virtual IP with something like keepalived.
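A minimal keepalived sketch for the virtual-IP option (the interface `eth0`, router id `51`, and VIP `192.0.2.100` are placeholders; the peer host gets `state BACKUP` and a lower priority):

```shell
# Write /etc/keepalived/keepalived.conf on the primary host.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER            # peer host: state BACKUP, priority 90
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.100
    }
}
EOF
```

If the MASTER host fails, the BACKUP host takes over the VIP, so the DNS record can keep pointing at one stable address.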
