Swarm service doesn't automatically replicate when worker node becomes available

I have a development setup with one mixed manager/worker node and one pure worker node. When I deploy the stack while both nodes are running, the services replicate just fine. However, if the worker node is offline at deploy time, the services from the previous deploy don't automatically replicate to it when I boot it up.

This is my common deploy block for each service:

deploy:
  # mode: global
  replicas: 2
  restart_policy:
    condition: any
  placement:
    max_replicas_per_node: 1
    constraints:
      #- node.role == worker
      - node.labels.os == windows
      #- node.hostname == ubuntu-a

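As a side note, the node.labels.os constraint is only satisfiable if that label has actually been set on the worker, since node labels are empty by default. Assuming the worker is named node-2 (a placeholder), the label can be applied and checked from a manager:

```shell
# Add the label that the placement constraint expects
# (node-2 is a hypothetical node name)
docker node update --label-add os=windows node-2

# Confirm the label is present on the node
docker node inspect node-2 --format '{{ .Spec.Labels }}'
```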
You might want to open a bug in the moby github project for this.

The swarm scheduler should periodically check whether the desired state is reached, which can't be the case if only one of two replicas is running. This looks like a bug!
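In the meantime, a common workaround is to force the service to reschedule once the worker is back online. This is a sketch assuming the stack is named mystack and the service myservice (both hypothetical):

```shell
# Trigger a no-op update; --force reschedules tasks even when the
# service spec is unchanged, so the scheduler can place the missing
# replica on the rejoined worker
docker service update --force mystack_myservice

# Check where the replicas ended up
docker service ps mystack_myservice
```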

This seems to have fixed itself, but the behavior is a bit flaky.

The services now replicate to a restarted node once it comes online, but now I have an issue with the global logger service that uses endpoint_mode: dnsrr.

When the service gets replicated to a newly started node, the dnsrr endpoint mode seems to stop working for it entirely: no logs appear on node-2, only on node-1.
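One way to narrow this down is to check what the service name resolves to from a container attached to the same overlay network; with endpoint_mode: dnsrr the service name should resolve directly to one task IP per running replica. This sketch assumes the service is named logger and the overlay network appnet (both hypothetical):

```shell
# Resolve the service name from a throwaway container on the overlay
# network; with dnsrr this should list the task IPs of all replicas,
# so a missing node-2 entry points at a service-discovery problem
docker run --rm --network appnet alpine nslookup logger
```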

Same answer as last time. Though my earlier recommendation to raise the issue in the moby GitHub project was wrong: it should go to the swarmkit project, as this is directly related to swarmkit itself.