Docker swarm constraints being ignored

I’m a relative newbie when it comes to docker swarm mode, and I’d appreciate it if anyone can point out my mistake.
I’m trying to get open source Docker v17.06 to launch my 3 containers on 3 specific nodes in the swarm using docker stack deploy and placement constraints, but my constraints are always ignored: Docker tries to bring all 3 containers up on the same node, the manager node.
Here is my docker-compose.yml:

version: '3'

services:
  dev2-1:
    image: ourrepo/me/zookeeper:3.4.10
    ports:
     - "22281:2181"
     - "22282:2888"
     - "22283:3888"
    environment:
     - "ZOOKEEPER_ID=1"
     - "ZOOKEEPER_SERVERS=mynode1,mynode2,mynode3"
     - "leaderPort=22282"
     - "electionPort=22283"
    volumes:
     - /app/zookeeper-dev2/lib:/var/lib/zookeeper
     - /app/zookeeper-dev2/log:/var/log/zookeeper
    deploy:
      mode: global
      placement:
        constraints:
         - node.hostname == mynode1

  dev2-2:
    image: ourrepo/me/zookeeper:3.4.10
    ports:
     - "22281:2181"
     - "22282:2888"
     - "22283:3888"
    environment:
     - "ZOOKEEPER_ID=2"
     - "ZOOKEEPER_SERVERS=mynode1,mynode2,mynode3"
     - "leaderPort=22282"
     - "electionPort=22283"
    volumes:
     - /app/zookeeper-dev2/lib:/var/lib/zookeeper
     - /app/zookeeper-dev2/log:/var/log/zookeeper
    deploy:
      mode: global
      placement:
        constraints:
         - node.hostname == mynode2

  dev2-3:
    image: ourrepo/me/zookeeper:3.4.10
    ports:
     - "22281:2181"
     - "22282:2888"
     - "22283:3888"
    environment:
     - "ZOOKEEPER_ID=3"
     - "ZOOKEEPER_SERVERS=mynode1,mynode2,mynode3"
     - "leaderPort=22282"
     - "electionPort=22283"
    volumes:
     - /app/zookeeper-dev2/lib:/var/lib/zookeeper
     - /app/zookeeper-dev2/log:/var/log/zookeeper
    deploy:
      mode: global
      placement:
        constraints:
         - node.hostname == mynode3

I’ve tried node.hostname, node.id, and engine labels; nothing gets the 2nd and 3rd containers to start on their assigned nodes. I can’t find any examples on the net that actually work for me.
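One way to double-check what node.hostname and the labels actually resolve to on the managers (hostnames here are the ones from my setup):

```shell
# List all swarm nodes; the HOSTNAME column is what
# "node.hostname == ..." is matched against.
docker node ls

# Show the exact hostname and engine labels recorded for one node.
docker node inspect mynode2 \
  --format '{{ .Description.Hostname }} {{ .Description.Engine.Labels }}'
```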
Here is my docker info:

Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 2
Server Version: 17.06.0-dev
Storage Driver: btrfs
 Build Version: Btrfs v3.17
 Library Version: 101
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: v8ux2juufbj7mftmgy08seyfb
 Is Manager: true
 ClusterID: 77nm3z0epoec0xe7rs2b7bwog
 Managers: 1
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: xx.xx.xx.xx
 Manager Addresses:
  xx.xx.xx.xx:2377
Runtimes: oci runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 3.12.69-60.64.32-default
Operating System: SUSE Linux Enterprise Server 12 SP1
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.813GiB
Name: mynode1
ID: OPPP:67NK:E6TW:DDD4:DTTT:ET3P:Z2VF:IKHB:7GEI:PGXW:JMQB:CHF3
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Http Proxy: http://mynode0:3128/
Https Proxy: http://mynode0:3128/
No Proxy: .xxx.com,localhost,192.168.0.0/16,127.0.0.1
Registry: https://index.docker.io/v1/
Labels:
 adp.host=mynode1
Experimental: false
Insecure Registries:
 mynode11:5000
 127.0.0.0/8
Live Restore Enabled: false

Hi,

I’m not sure about the requirements for running ZooKeeper, but as far as the compose file is concerned, mode: global and placement: do not go together under a service config. Either you use mode: global, or mode: replicated plus an optional placement:.

When you specify 3 services (dev2-1, dev2-2, dev2-3) in mode: global, you’re telling Docker to run every dev2-? on ALL existing nodes. That means a total of 9 tasks (3 services x 3 nodes).

Try replacing mode: global with mode: replicated and replicas: 1. You may also choose not to specify either; the default is replicated with replicas = 1.

Also, you may want to specify hostname: in the service config to match the node.hostname == mynode1 constraint.

Example:

dev2-1:
  image: ourrepo/me/zookeeper:3.4.10
  hostname: mynode1
  deploy:
    mode: replicated
    replicas: 1
    placement:
      constraints:
      - node.hostname == mynode1
  ports:
  - "22281:2181"
  - "22282:2888"
  - "22283:3888"
  environment:
  - "ZOOKEEPER_ID=1"
  - "ZOOKEEPER_SERVERS=mynode1,mynode2,mynode3"
  - "leaderPort=22282"
  - "electionPort=22283"
  volumes:
  - /app/zookeeper-dev2/lib:/var/lib/zookeeper
  - /app/zookeeper-dev2/log:/var/log/zookeeper

Thank you for your reply.

I tried the changes you suggested: added hostname, changed mode to replicated, added replicas. It had no effect. The dev2-2 container still tries to come up on the management node, mynode1, and fails because the dev2-1 container is already using port 22281 there.

Can you post the edited docker-compose.yml?

Sure.

version: '3'

services:
  dev2-1:
    image: ourrepo/me/zookeeper:3.4.10
    hostname: mynode1
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         - node.hostname == mynode1
    ports:
     - "22281:2181"
     - "22282:2888"
     - "22283:3888"
    environment:
     - "ZOOKEEPER_ID=1"
     - "ZOOKEEPER_SERVERS=mynode1,mynode2,mynode3"
     - "leaderPort=22282"
     - "electionPort=22283"
    volumes:
     - /app/zookeeper-dev2/lib:/var/lib/zookeeper
     - /app/zookeeper-dev2/log:/var/log/zookeeper

  dev2-2:
    image: ourrepo/me/zookeeper:3.4.10
    hostname: mynode2
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         - node.hostname == mynode2
    ports:
     - "22281:2181"
     - "22282:2888"
     - "22283:3888"
    environment:
     - "ZOOKEEPER_ID=2"
     - "ZOOKEEPER_SERVERS=mynode1,mynode2,mynode3"
     - "leaderPort=22282"
     - "electionPort=22283"
    volumes:
     - /app/zookeeper-dev2/lib:/var/lib/zookeeper
     - /app/zookeeper-dev2/log:/var/log/zookeeper

  dev2-3:
    image: ourrepo/me/zookeeper:3.4.10
    hostname: mynode3
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         - node.hostname == mynode3
    ports:
     - "22281:2181"
     - "22282:2888"
     - "22283:3888"
    environment:
     - "ZOOKEEPER_ID=3"
     - "ZOOKEEPER_SERVERS=mynode1,mynode2,mynode3"
     - "leaderPort=22282"
     - "electionPort=22283"
    volumes:
     - /app/zookeeper-dev2/lib:/var/lib/zookeeper
     - /app/zookeeper-dev2/log:/var/log/zookeeper

This is what I see.

$ docker stack deploy -c docker-compose-dev2.yml zookeeper
Creating network zookeeper_default
Creating service zookeeper_dev2-1
Creating service zookeeper_dev2-2
Error response from daemon: rpc error: code = 3 desc = port '22281' is already in use by service 'zookeeper_dev2-1' (fafortsdnuew826q6eijctqqm)

You cannot have conflicting host ports. The config asks all 3 instances to bind the same ports on the host. Either specify different published port numbers for each instance, or, if you don’t care which host port is bound, drop the host part and let Docker pick arbitrary port numbers, like this for all the instances:

ports:
- "2181"
- "2888"
- "3888"

The issue isn’t zookeeper. The issue is that my constraints are being ignored, and dev2-2 should be launched on mynode2, where it won’t have a port conflict with dev2-1 or dev2-3.

The cluster works fine when I launch via docker run on each host, I just haven’t figured out what magic is required to get docker stack deploy to launch my containers on those 3 hosts.

When you place the hosts in swarm mode, it’s as if you ask them to collectively form a single “big” host.

So, I guess this just won’t work in swarm mode.

I saw a few zookeeper examples on github with a docker-compose.yml file, but docker-compose only runs containers on one host; it tells me to use docker stack deploy to launch on multiple machines. I wondered why I hadn’t found any examples for the latter. Now I know why.

Thanks for your assistance!

Perhaps this link helps, Cannot get zookeeper to work running in docker using swarm mode

Thanks, but that person had a different issue, one where the local host wasn’t using 0.0.0.0 to refer to itself in the zookeeper config file. I have that issue already solved in my entrypoint script.

I am trying now to use his examples of 3 docker service create commands, but I am still running into the same issue, the second service refuses to start, complaining a port is already in use.

This one-port-per-service-per-swarm rule seems like an odd restriction. So if I have a cluster of 1000 nodes, I can only run one httpd service listening on port 80 in the entire swarm? I can’t have multiple services on different IP addresses?

I guess it’s time I stopped using google and started searching for a good book on docker best practices, if there is one.

Actually, he isn’t exposing ports at all, which lets the containers come up and talk to each other, but that is not helpful, because no program outside Docker (other than another docker service, perhaps) can communicate with the zookeeper application. If everything in the world already ran inside Docker, that would be great, but it’s a chicken-and-egg situation.

Thanks for the tips.

Just a thought: maybe change your ports definition to use the long syntax available in compose file version 3.2:

ports:
 - target: 2181
   published: 22281
   protocol: tcp
   mode: host
 - target: 2888
   published: 22282
   protocol: tcp
   mode: host
 - target: 3888
   published: 22283
   protocol: tcp
   mode: host

I think in the short syntax the default mode is “ingress”. Possibly that is where the port conflicts arise.
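For comparison, the same host-mode publishing can be expressed with docker service create (the service name here is just an example):

```shell
docker service create \
  --name dev2-1 \
  --constraint 'node.hostname == mynode1' \
  --publish mode=host,target=2181,published=22281 \
  --publish mode=host,target=2888,published=22282 \
  --publish mode=host,target=3888,published=22283 \
  ourrepo/me/zookeeper:3.4.10
```

With mode=host the port is only bound on the node where the task actually runs, so three services constrained to three different nodes should no longer collide.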


In case it wasn’t clear in my last reply, switch the “version” string in the compose file from:

version: "3"

to:

version: "3.2"

to enable using the long-syntax port definitions.

@whistl034

It seems a bit unlikely that you need to expose all of these ports on each ZooKeeper instance.

    ports:
     - "22281:2181"
     - "22282:2888"
     - "22283:3888"

They’re conflicting because each one of these service definitions tells Swarm “open this port on every host, to be routed to the specified port in this service’s containers”. L4 load balancing (using IPVS) then happens automatically across replicas when traffic arrives on the published port.

Most likely you want to put them all on the same docker network and do ZooKeeper-to-ZooKeeper communication over that network, using swarm mode’s built-in service discovery. Then the services you don’t want to expose to the outside world will have no port conflicts; they will communicate with each other over Docker’s private overlay networking.

If you have other services that need to access the ZooKeeper instances they will go on this docker network as well.
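A minimal sketch of that layout (the network name zk-net is my invention; service names replace host names in ZOOKEEPER_SERVERS, since swarm DNS resolves them on the overlay network):

```yaml
version: '3.2'

services:
  dev2-1:
    image: ourrepo/me/zookeeper:3.4.10
    networks:
     - zk-net
    environment:
     - "ZOOKEEPER_ID=1"
     - "ZOOKEEPER_SERVERS=dev2-1,dev2-2,dev2-3"  # service names, resolved by swarm DNS
    deploy:
      replicas: 1
      placement:
        constraints:
         - node.hostname == mynode1
  # dev2-2 and dev2-3 look the same, with their own ZOOKEEPER_ID and constraint

networks:
  zk-net:
    driver: overlay
```

Nothing is published, so there are no port conflicts; any client service that needs ZooKeeper joins zk-net as well.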

It’d be preferable as well if you could roll the non-leader ZooKeeper instances into just one service using replicas: 2 (as well as having a leader service on the same network). But, I am aware of some funny issues related to hostname / bind address that do arise in such a model when using some tech, so I can see how that might not work.

I am hitting exactly the same problem. In my case I want to deploy 3 web services which form a distributed app. Each needs to listen on port 443 and although they are being constrained to different hosts, the docker stack deploy command fails to start with:

Error response from daemon: rpc error: code = 3 desc = port '443' is already in use by service 'votingapp_spdz-proxy2' (zl2nqx7ky93kxqsa8athtihnz)

Use a reverse proxy and don’t expose public ports from your containers.
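A rough sketch of that pattern, assuming nginx as the proxy (the image, service names, and network name are illustrative, not from this thread): only the proxy publishes 443, and it forwards to the app services by service name over a shared overlay network:

```yaml
version: '3.2'

services:
  proxy:
    image: nginx:stable        # illustrative choice of reverse proxy
    ports:
     - "443:443"               # the only published port in the whole stack
    networks:
     - app-net
  spdz-proxy1:
    image: myorg/spdz-proxy    # illustrative image name
    networks:
     - app-net                 # reachable from nginx as spdz-proxy1
    deploy:
      placement:
        constraints:
         - node.hostname == mynode1

networks:
  app-net:
    driver: overlay
```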


I’m facing the same problem and found that the deploy: options are ignored by docker-compose; the docs say the deploy configuration only takes effect when using docker stack deploy and is ignored by docker-compose.
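To make the distinction concrete (the stack name is arbitrary):

```shell
# Honors deploy: (placement constraints, replicas, ...)
docker stack deploy -c docker-compose.yml mystack

# Silently ignores the whole deploy: section
docker-compose -f docker-compose.yml up -d
```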