Docker Swarm + Vagrant: orchestration problem between nodes

Hello everyone,
Hope you are doing well when you read this post.

I set up an architecture with Vagrant and Docker to orchestrate my different apps. Let me share my process with you:

  • Create virtual machines with Vagrant (the result is the same if you use docker-machine) and install Docker inside each machine, three machines in my case.
  • Initialize the swarm with docker swarm init and add the nodes, one manager and two workers in my case.
  • Create a service with docker service create.
  • Run the visualizer.
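The steps above can be sketched like this (the manager IP, service name, and image are assumptions for illustration, not my exact setup):

```shell
# On the manager VM (IP assumed to be 192.168.56.10)
docker swarm init --advertise-addr 192.168.56.10

# The init output prints a join command; run it on each worker VM
docker swarm join --token <worker-token> 192.168.56.10:2377

# Back on the manager: create a service and run the visualizer
docker service create --name my_app --publish 8080:80 nginx
docker service create --name visualizer --publish 9090:8080 \
  --constraint 'node.role==manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer
```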

The first thing that disturbed me is that the created service is assigned to the manager, so when I shut down the manager machine the workers become orphans and the service can't be assigned to another node.

I tried some different things: I created a service with a constraint on a specific worker and shut down that worker's virtual machine, but the task was not picked up by another node.

Replicated or global mode is not the case I want to implement for the moment. I want my service assigned to one node when it is created, and picked up by another node in the case where the machine running the service goes down.
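For what it's worth, a single task that gets rescheduled when its node goes down is exactly what a replicated service with one replica provides; a minimal sketch (the service name and image are assumptions):

```shell
# One task total; if the node running it goes down,
# the swarm reschedules the task on another healthy node
docker service create --name my_app --replicas 1 nginx
```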

I hope I have explained it well. Please, I need your experience.

Thank you!

By default, the service tasks the scheduler creates (which actually create the containers) are spread amongst all nodes, unless you specify placement constraints like this:

version: "3.9"
services:
  x:
    image: x
    deploy:
      placement:
        constraints:
          - "node.role==worker"
  ....
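A compose file with a deploy section like the above is applied to the swarm with docker stack deploy (the stack and file names are assumptions):

```shell
docker stack deploy --compose-file docker-compose.yml my_stack
```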

See docker service create | Docker Documentation for the list of available placement constraints. Furthermore, you can put labels on your nodes and use them in the placement constraints of your services as well.
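Node labels work roughly like this (the label key/value, node name, service name, and image are assumptions):

```shell
# On a manager: attach a label to a node
docker node update --label-add storage=ssd worker1

# Constrain a service to nodes carrying that label
docker service create --name db \
  --constraint 'node.labels.storage==ssd' \
  postgres
```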

With a single manager node your cluster becomes headless when the manager becomes unhealthy/unavailable. A headless cluster is not able to detect or apply any state changes, and is therefore not able to create/remove any sort of resources until the manager node is healthy again.

With 3 nodes I strongly suggest running a manager-only cluster. In order to maintain quorum in the cluster, you need floor(n/2)+1 healthy manager nodes; otherwise the cluster is headless.
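The quorum formula works out like this (plain shell arithmetic, just to illustrate the numbers):

```shell
# floor(n/2) + 1 managers are needed for quorum;
# the cluster tolerates n - quorum manager failures
for n in 1 3 5; do
  echo "$n managers -> quorum $(( n / 2 + 1 )), tolerates $(( n - (n / 2 + 1) )) failure(s)"
done
```

So with 3 managers you can lose 1 and keep a working cluster; a single manager tolerates no failures at all.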

Hello @meyay, hope you are doing well.

Thank you for your response; I tried it and it works as I expected.

To create my service I ran the command:

sudo docker service create --constraint 'node.role==worker' --with-registry-auth --name service_name --publish exp_port:in_port path_of_registry

And when I drain the worker that has the task, another worker picks up the task. :slight_smile:
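For anyone finding this later, draining a node looks roughly like this (the node name is an assumption):

```shell
# Drain the node: its running tasks are rescheduled onto other nodes
docker node update --availability drain worker1

# Bring it back once it should accept tasks again
docker node update --availability active worker1
```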