Docker swarm - deploying services to different (separated) nodes

Hello,

For the last two days I have been working on separating service deployments in my application. I am trying to separate the frontend (low CPU and RAM usage) from the backend (high CPU usage). After reading the documentation I used constraints and labels to achieve this.

System info:

Debian 9
Docker version: 18.09.2
Docker Compose version: 1.23.2
Compose file version: 3.4

For the PoC I'm working on, I run Docker Swarm on 3 nodes, all of them managers. Below is an example of the docker-compose file I use; with this method the separation of services works, but only when I use 2 nodes.

Service A

deploy:
  mode: replicated
  replicas: 1
  placement:
    constraints:
    - node.hostname == server-1

Service B

deploy:
  mode: replicated
  replicas: 1
  placement:
    constraints:
    - node.hostname == server-2

After checking, service A is running on server-1 and service B on server-2. At this point everything looks fine. Maybe somebody will find this useful: you can use this command to see which services are running on each of your nodes (if you have a better command, I'll gladly take it :slight_smile: ):

docker service ps --filter 'node=server-1' --filter 'desired-state=running' `docker service ls | awk '!/NAME/{print $2}'` |  awk '!/NAME/{print $2}'
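A shorter alternative that seems to give the same per-node overview is docker node ps, which lists the tasks scheduled on a given node, for example:

docker node ps server-1 --filter 'desired-state=running'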

Anyway … the issue starts when I want to run the same service on two nodes. I tested a few options, first using !=:

deploy:
  mode: global  # I tried without this as well, same results
  placement:
    constraints:
    - node.hostname != server-2

From the documentation, and as I understand it, this means that service A will be deployed on nodes 1 and 3, and not on node 2. But with this option the service doesn't even start. When I inspected the logs I could see:

 1/1: no suitable node (scheduling constraints not satisfied on 3 nodes)

I searched the internet, asked people around, checked the #docker IRC channel, and I got the answer to use node labels to make this work. I was like, OK, even though I don't like it (this PoC will be deployed over 40 nodes, and that is a lot of labels to add or remove every time), and I tested all the tagging options I could think of (or find). Then I used these labels in my docker-compose file for the docker stack deployment.

docker node update --label-add 'node.hostname=server-1' server-1
and
docker node update --label-add 'node.label=server-1' server-1
and
docker node update --label-add 'node.name=server-1' server-1
and
docker node update --label-add 'server-1' server-1

I got the same error every time, the one I posted above. As I was stuck, I ran a few other tests, changing my docker-compose variables like:

deploy:
  mode: global  # I tried without this as well, same results
  placement:
    constraints:
    - node.name != server-2

and

    constraints:
    - node.labels != server-2
    - node.name != server-2

and then these two together:

    constraints:
    - node.hostname == server-1
    - node.hostname == server-3

I tested node.name because I can see the node name in the output of docker info, so I was thinking that swarm would pick this info up from the API, but it doesn't work.
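For what it's worth, to double-check which hostname the swarm itself has stored for a node (in case it differs from what docker info shows), something like this should work:

docker node inspect server-2 --format '{{ .Description.Hostname }}'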

Now I can only say that I'm stuck, and I need the community to give me some fresh ideas about what I did wrong, or what the solution is (from experienced users) to run my frontend services on nodes 1 and 3, and the backend on node 2.

Any help is appreciated …thanks

Add a label (I always use the node id, so I am not sure whether using the server name actually works):
docker node update --label-add server-1=true server-1

Use placement constraint for the label:

deploy:
  placement:
    constraints:
    - node.labels.server-1 == true

Though, I am not sure if the dash character is allowed in label names. So if it is not working, try server1 instead of server-1.
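For your concrete case (frontend on server-1 and server-3, backend on server-2), a sketch could look roughly like this; the label name tier and its values are just examples, not something Docker predefines:

docker node update --label-add tier=frontend server-1
docker node update --label-add tier=frontend server-3
docker node update --label-add tier=backend server-2

and in the compose file:

services:
  frontend:
    deploy:
      mode: replicated
      replicas: 2
      placement:
        constraints:
        - node.labels.tier == frontend
  backend:
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
        - node.labels.tier == backend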

Hi meyay,

Unfortunately I found that out the hard way. Using labels without "-" was the solution. Last night I got my deployment working, but I didn't have time to update my post. I don't know if we can call this a "use case bug" or whether the core devs simply didn't include support for label names containing '-'.
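In case it helps anyone with a bigger cluster: since my real deployment spans 40+ nodes, labeling nodes one by one isn't practical, but a small loop over docker node ls should make it manageable. A rough sketch, where the grep pattern and the label name are just examples:

for n in $(docker node ls --format '{{.Hostname}}' | grep front); do
  docker node update --label-add tier=frontend "$n"
done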

I hope this post will be seen by the admins and the Docker team, and that other people will find it useful.

Thank you for your answer and for taking the time to read all this :slight_smile:

You are welcome :slight_smile: