Docker Community Forums

Share and learn in the Docker community.

Docker swarm constraints being ignored

I'm not sure whether the node.hostname constraint works or not. You could try adding a label to the node instead; see "swarm add or remove label" in the docs. For example, run docker node update --label-add zknode1=mynode1 node-1 on the first node. Then:

    image: ourrepo/me/zookeeper:3.4.10
    ports:
     - "22281:2181"
     - "22282:2888"
     - "22283:3888"
    environment:
     - "ZOOKEEPER_ID=1"
     - "ZOOKEEPER_SERVERS=mynode1,mynode2,mynode3"
     - "leaderPort=22282"
     - "electionPort=22283"
    volumes:
     - /app/zookeeper-dev2/lib:/var/lib/zookeeper
     - /app/zookeeper-dev2/log:/var/log/zookeeper
    deploy:
      mode: global
      placement:
        constraints:
         - node.labels.zknode1 == mynode1

Follow the same steps for node2 and node3 as well.

If you are using ‘docker-compose up -d’

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use docker stack deploy.
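For illustration (a minimal sketch; the service name and image are placeholders, not from this thread), the `deploy:` section below, including placement constraints, only takes effect under `docker stack deploy -c docker-compose.yml mystack` and is ignored by plain `docker-compose up`:

```yaml
# Hypothetical minimal stack file. The `deploy:` section (including
# placement constraints) is honored by `docker stack deploy` but
# silently ignored by `docker-compose up`.
version: "3.6"
services:
  app:                        # placeholder service name
    image: myapp:latest       # placeholder image
    deploy:
      placement:
        constraints:
          - node.labels.zknode1 == mynode1
```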

I am able to bring up a zookeeper instance on each of the unique nodes participating in the docker swarm.
Refer to my comment on the following discussion thread.

So I have a similar issue with compose version 3.6. What I’m trying to achieve is to deploy only service A or service B depending on the label value applied to the node. What’s happening is that both services are getting deployed (or one is, and the second one fails due to the port being in use).

    image: serviceA:v1
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.labels.faces == cpu
    networks:
      - mynet
    ports:
      - "8888:8888"

    image: serviceB:v1
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.labels.faces == gpu
    networks:
      - mynet
    ports:
      - "8888:8888"

On my single node I have defined a label as follows

# docker node inspect swarm-manager --pretty
ID:			0cpco8658ap5xxvxxblpqggpq
Labels:
 - faces=gpu
Hostname:              	swarm-manager

Is this configuration even possible? I want swarm to only deploy the GPU service when the node has a GPU video card deployed, else run CPU service.

I initially wanted to use global instead of replicated, but I read in another forum that the two are not compatible so I have to use replicated instead.

UPDATE: If I create the services manually, it works as expected

docker node update --label-add faces=cpu swarm-manager
docker service create -d --name serviceA --constraint node.labels.faces==cpu -p 8888 --mode global  serviceA:v1
docker service create -d --name serviceB --constraint node.labels.faces==gpu -p 8888 --mode global serviceB:v1

# docker service ls | grep service
c30y50ez605p        serviceA            global              1/1                 service:v1           *:30009->8888/tcp
uxjw41v42vzh        serviceB            global              0/0                 serviceB:v1          *:30010->8888/tcp

You can see that the service created with CPU constraint worked and the service with GPU was not instantiated (in pending state).

Your compose and cli configurations are different:
#1 the cli additionally has: --mode global
#2 compose additionally has: --restart on-failure

While the first makes sure a container instance is started on each node that matches your constraint, the second one is responsible for restarting a failed container.

The idea of having two services bound to the same ports, but only allowing one of them to start, seems strange to me, as they are ultimately not the same service, are they? I am quite sure that if they are separated into their own compose files, the behavior will match the behavior when started from the cli.

Yeah I missed the restart policy on the CLI, but it should not matter too much based on my understanding.
The underlying service on port 8888 is the same, one supports CPU and the other supports GPU. They are two separate docker images which are present on the Docker host.
I’m trying to understand why the compose deploy behavior is different to service create.

The problem is port-mapping.

In docker swarm mode, a service-level port mapping applies to all nodes. This means that regardless of what constraints are applied, if a service maps, say, port 443, then port 443 is mapped on all nodes in the swarm. This is as it should be, such that a service can be reached anywhere in the cluster.

In scenarios such as the ones described above, the actual services should not be port-mapped. Instead, some sort of gateway service should receive the requests and route them to backend services based on some sort of rules (like labels). One such gateway is traefik.
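For illustration only (the traefik image tag, router name, and path rule are my assumptions, not from this thread): with a gateway like traefik, only the gateway publishes a host port, and the backends are discovered via labels on the swarm services:

```yaml
# Hypothetical sketch of a traefik gateway in swarm mode.
# Only the gateway publishes a host port; serviceA stays unpublished
# and is matched by labels under deploy (swarm services put traefik
# labels under deploy.labels, not container labels).
version: "3.6"
services:
  gateway:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.swarmMode=true
      - --entryPoints.web.address=:8888
    ports:
      - "8888:8888"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  serviceA:
    image: serviceA:v1            # no ports: section
    deploy:
      labels:
        - traefik.http.routers.a.rule=PathPrefix(`/a`)   # assumed rule
        - traefik.http.services.a.loadbalancer.server.port=8888
```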

If a port is published in host mode, the ingress routing mesh is bypassed and the port is bound only on the nodes that actually run the container:

  - target: 8888
    published: 8888
    protocol: tcp
    mode: host

This is correct. The combination of host mode and constraints can achieve what was originally asked for. The clients would have to know that specific service instances are to be reached via the IP address of specific nodes.
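A sketch of that combination (same service and label names as earlier in the thread): with host-mode publishing, both services can declare 8888 as long as their constraints never match the same node, because the port is bound only where a task actually runs:

```yaml
# Host-mode publishing bypasses the ingress routing mesh, so the
# placement constraint alone decides where port 8888 gets bound.
services:
  serviceA:
    image: serviceA:v1
    deploy:
      placement:
        constraints:
          - node.labels.faces == cpu
    ports:
      - target: 8888
        published: 8888
        protocol: tcp
        mode: host
```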

The concept is that services running on port 8888 will be load balanced by a service running on the master node; all worker nodes will be secondary nodes running a subset of services (using constraints works for this part). I just want to use constraints to determine which service should run on port 8888 on each added node.

So the clients will actually connect to the “service running on the master node” (let’s call it service C), which will redirect to either service A or service B?

If this is correct, then the solution is simple: put all three services on the same docker network, and do not include the “ports” part for service A and service B in your compose file. Service C can reach service A and/or service B without mapping a host or swarm port. Everything else remains the same: your constraints will work fine.
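A sketch of that layout (service C’s image name is a placeholder): only the gateway publishes 8888, while A and B have no `ports:` section and are reachable from C by service name over the shared overlay network:

```yaml
version: "3.6"
services:
  serviceC:
    image: serviceC:v1            # placeholder gateway image
    networks: [mynet]
    ports:
      - "8888:8888"               # only the gateway is published
  serviceA:
    image: serviceA:v1
    networks: [mynet]             # no ports: section
    deploy:
      placement:
        constraints: [node.labels.faces == cpu]
  serviceB:
    image: serviceB:v1
    networks: [mynet]
    deploy:
      placement:
        constraints: [node.labels.faces == gpu]
networks:
  mynet:
    driver: overlay
```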

I have all services running on the same network, sorry I stupidly omitted that from the compose code I shared.

As such I have the following defined for all my services:

    networks:
      - safrnet