Docker Community Forums

Share and learn in the Docker community.

Swarm cluster architecture and idea

Hello,
I am about to set up a Swarm cluster and I have a lot of beginner questions.
I have 3 servers (each 32 GB RAM, 2 × 900 GB SSD, 4 CPUs) from the same provider, but in 3 different datacenters (each has an external IP, but I can link all the servers via an internal IP), and I will build a Swarm cluster on them under Debian or Ubuntu. I will use the 3 servers to host an NFS or other cluster volume.

  • Which partitioning scheme should I use on each server?
  • Which volume type (NFS, Ceph, GlusterFS, …) should I use for this volume?
  • I will use Docker for services like PHP, Apache, … but I have read that Docker is sometimes not the best choice for databases?
  • For Swarm, I will have 1 manager and 2 workers. If the manager has an issue, will one of the 2 workers promote itself to become the new manager?
  • If not, do I need 3 managers, so my total would be 5 servers? Or can I keep only 3 servers where each of them is a manager and not a worker?
  • If I have more than 1 manager, do I need a load balancer in front of all these managers? Can I use that as a load balancer (each manager gets 1/3 of the requests), or should I use only 1 manager and switch to another one when the first has an issue?

Excuse me if I ask a lot of questions, maybe stupid ones, but I want to make the right choices before I start the installation.

Thank you

  1. I don't know.
  2. If it's for distributed data sharing, I think Ceph or GlusterFS are both fine.
  3. Yes and also no. There are different opinions on that, but I wouldn't run the DB in Docker, since Docker can crash at some point and take the DB down with it.
  4. No, workers don't promote themselves. With a single manager, your swarm can no longer be managed if that manager fails.
  5. Yes and no. You can make all of your nodes managers, but only one of them is the leader at any given time. If the leader dies, another manager takes its place. So you still only need 3 servers.
  6. No, the managers don't receive the requests; the Docker services do. You could use a load balancer to spread your requests between your 3 servers (the load-balancing part has nothing to do with Docker). Round-robin DNS could be a good choice here.
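To make answers 4–6 concrete, here is a minimal sketch of setting up a 3-manager swarm. The internal IP `10.0.0.1` and the `<MANAGER-TOKEN>` placeholder are illustrative; use your own internal addresses and the token that `docker swarm join-token manager` prints.

```shell
# On the first server: initialize the swarm, advertising its internal IP
docker swarm init --advertise-addr 10.0.0.1

# Print the join command (including the token) for additional managers
docker swarm join-token manager

# On each of the other two servers: join as a manager using that token
docker swarm join --token <MANAGER-TOKEN> 10.0.0.1:2377

# Verify: all three nodes are managers; one shows "Leader" under
# MANAGER STATUS, the others "Reachable". With 3 managers the swarm
# tolerates the loss of any single node (Raft quorum = 2 of 3).
docker node ls
```

By default manager nodes also run workloads, so 3 servers that are all managers is a common small-cluster layout.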

The cluster management and orchestration features embedded in the Docker Engine are built using swarmkit. Swarmkit is a separate project which implements Docker’s orchestration layer and is used directly within Docker.

A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services). A given Docker host can be a manager, a worker, or perform both roles. When you create a service, you define its optimal state (number of replicas, network and storage resources available to it, ports the service exposes to the outside world, and more). Docker works to maintain that desired state. For instance, if a worker node becomes unavailable, Docker schedules that node’s tasks on other nodes. A task is a running container which is part of a swarm service and managed by a swarm manager, as opposed to a standalone container.
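As a hedged example of declaring a service's desired state (the service name `web` and the `nginx:alpine` image are illustrative choices, not anything from the thread above):

```shell
# Declare desired state: 3 replicas of a web service, publishing
# port 80 on every swarm node via the routing mesh
docker service create --name web --replicas 3 \
  --publish published=80,target=80 nginx:alpine

# Inspect how the scheduler placed the tasks; if a node becomes
# unavailable, its tasks are rescheduled onto the remaining nodes
docker service ps web
```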

One of the key advantages of swarm services over standalone containers is that you can modify a service’s configuration, including the networks and volumes it is connected to, without the need to manually restart the service. Docker will update the configuration, stop the service tasks with the out of date configuration, and create new ones matching the desired configuration.
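For instance, assuming a service named `web` already exists, a configuration change can be rolled out like this (names and values are illustrative):

```shell
# Change the desired replica count; swarm stops out-of-date tasks
# and creates new ones to converge on the new state
docker service update --replicas 5 web

# Roll out a new image one task at a time, waiting 10s between tasks
docker service update --image nginx:1.27-alpine \
  --update-parallelism 1 --update-delay 10s web
```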

When Docker is running in swarm mode, you can still run standalone containers on any of the Docker hosts participating in the swarm, as well as swarm services. A key difference between standalone containers and swarm services is that only swarm managers can manage a swarm, while standalone containers can be started on any daemon. Docker daemons can participate in a swarm as managers, workers, or both.

In the same way that you can use Docker Compose to define and run containers, you can define and run Swarm service stacks.
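A sketch of such a stack file, using an illustrative `web` service (any image and port mapping of your own would work the same way):

```yaml
# docker-compose.yml (illustrative stack definition)
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
```

Deployed from a manager node with `docker stack deploy -c docker-compose.yml mystack`; the `deploy:` keys are only honored in swarm mode, not by plain `docker compose up`.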

Keep reading for details about concepts relating to Docker swarm services, including nodes, services, tasks, and load balancing.