[ Placement preferences ]

Hi Docker community,

I have 3 identical servers in 3 different sites.
All three have the manager role, but one is set to pause availability.
Each server has a different label, applied with these commands:

docker node update --label-add type=prodgra SRV-GRA-01
docker node update --label-add type=prodrbx SRV-RBX-01
docker node update --label-add type=monistr SRV-STR-01

In my docker-compose file I try to use placement preferences to specify which site a service should deploy to, so that if that host goes down, Swarm brings the container back up on the second site.

But it seems that the preferences are completely ignored by docker stack deploy.

Here is part of my docker-compose file:

version: "3.6"
services:

  # Web server
  php:
    image: bitnami/php-fpm:7.4.30
    volumes:
      - ./apps/public_html:/srv_ademe:rw,cached
      - ./apps/php/php.ini-development:/opt/bitnami/php/etc/php.ini:ro
      - ./apps/php/common.conf:/opt/bitnami/php/etc/common.conf:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 9001:9000
    networks:
      - caddy
    depends_on:
      - db
    deploy:
      placement:
        #constraints: [node.hostname == SRV-GRA-01]
        # preferences: I want this php container to deploy on prodgra first (if prodgra goes down, then deploy on prodrbx)
        preferences:
          - spread: node.labels.prodgra
          - spread: node.labels.prodrbx

Thanks for your help
Micka

I am afraid this is not how spread works.

You could use - spread: node.labels.type to spread replicas of a service across nodes that have a node label “type” with different values, though nodes without the label are treated as having the label with a value of “null” as well. The order is random; you cannot influence the order in which the deployments are placed.
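For illustration, here is a minimal sketch of that variant; the service name and image are placeholders, not taken from the original stack:

version: "3.6"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      placement:
        preferences:
          # spreads tasks evenly across the distinct values of
          # node.labels.type (prodgra, prodrbx, monistr); nodes
          # without the label form one more group valued "null"
          - spread: node.labels.type

With three replicas and three labeled nodes, each site ends up with one task, but which node any given replica lands on is not deterministic.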

If this affinity is a requirement for you, then swarm is not the right orchestrator for you. Kubernetes allows specifying an affinity.

Note: I am surprised your swarm cluster is even working properly, as swarm (more precisely, the raft consensus it uses) requires low-latency network connections amongst the nodes.

I know it’s a year later, but I was wondering if you ever found a solution, neosaiyan? I’m trying to do a similar thing, although my swarm is all in one data center (or at least the same ‘zone’). I want a service to prefer certain nodes, and fail over to less-preferred standby nodes.

Meyay, you seem to have a better handle on placement “preferences” than I do… So what is the use case for placement preferences that can’t be solved using constraints + number-of-replicas + max-replicas-per-node?

Apologies for my confusion. I’ve read the docs and experimented with placement preferences, but I seem to be missing something.

It seems having a container prefer a node (while available) over another is not possible with Swarm.

From StackOverflow:

placement preferences try to place tasks on appropriate nodes in an algorithmic way (currently, only spread evenly)

My expectation is that this placement:

    deploy:
      placement:
        preferences:
          - spread: node.labels.preferencex

will be scheduled on a node with the node label preferencex=anyvalue. If no such healthy node exists, it will be scheduled on any other host.

But if you have two or more nodes with different values for preferencex, there is no way to set the order in which the preferred nodes should be used. Of course, one could add another node label that exists on only one of the nodes and add it to the placement preferences to prefer that particular node.
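A sketch of that workaround, assuming a hypothetical extra label primary=true has been added to the preferred node with docker node update --label-add:

    deploy:
      placement:
        preferences:
          # hypothetical label that exists on only one node; biases
          # (but does not guarantee) placement toward that node
          - spread: node.labels.primary
          # then spread across the site labels
          - spread: node.labels.type

Preferences are applied in the order they are listed, so the first entry forms the top of the placement hierarchy.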

A constraint, in contrast, pins deployments to nodes with specific labels; if no node with the label is available, the deployment does not get scheduled at all.
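For comparison, a minimal constraint sketch using the label from the first post:

    deploy:
      placement:
        constraints:
          # hard requirement: tasks run only on nodes labeled
          # type=prodgra; if no such node is available, the task
          # stays pending rather than failing over to another site
          - node.labels.type == prodgra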

In my case, I want services within a specific stack (or highly interconnected services) to run on the same node to enhance performance. While they can still function if spread across multiple nodes, they perform much better when colocated.

My goal is to ensure automatic failover in the event of node failures, while still optimizing overall performance.

To achieve this, after deployment or reboots, I make sure those services run on the same node by temporarily applying placement constraints, then removing them immediately after:

Move a service to a specific node:

docker service update --constraint-add 'node.hostname == <HOSTNAME>' <SERVICENAME>

Remove the placement constraint afterward to restore failover tolerance:

docker service update --constraint-rm 'node.hostname == <HOSTNAME>' <SERVICENAME>

I hope this helps someone.

I’d also appreciate any feedback on this approach, particularly if there are potential issues I might not have considered.