Query: Why are my services automatically deployed on the manager node?

When I use the command “docker deploy -c docker-compose.yaml cluster” to deploy services to all swarm nodes, all Docker services are automatically deployed on the manager node. Specifically, my docker-compose.yaml file is as follows:

version: "3.9"
services:
    node0:
        image: bserv128:0.1
        ports:
            - 8070:6550
        expose:
            - "6550"
        hostname: node0
        entrypoint: ./bin/serv -test.run TestServer -test.v
        networks:
            - eth0
        volumes:
            - ./node.json:/bin/node.json
    node1:
        image: bserv128:0.1
        ports: 
        expose:
            - "6550"
        hostname: node1
        entrypoint: ./bin/serv -test.run TestServer -test.v
        networks:
            - eth0
        volumes:
            - ./node.json:/bin/node.json
    node2:
        image: bserv128:0.1
        ports: 
        expose:
            - "6550"
        hostname: node2
        entrypoint: ./bin/serv -test.run TestServer -test.v
        networks:
            - eth0
        volumes:
            - ./node.json:/bin/node.json
    node3:
        image: ttqs123/bserv128:0.1
        ports: 
        expose:
            - "6550"
        hostname: node3
        entrypoint: ./bin/serv -test.run TestServer -test.v
        networks:
            - eth0
        volumes:
            - ./node.json:/bin/node.json
    node4:
        image: bserv128:0.1
        ports: 
        expose:
            - "6550"
        hostname: node4
        entrypoint: ./bin/serv -test.run TestServer -test.v
        networks:
            - eth0
        volumes:
            - ./node.json:/bin/node.json
    node5:
        image: bserv128:0.1
        ports: 
        expose:
            - "6550"
        hostname: node5
        entrypoint: ./bin/serv -test.run TestServer -test.v
        networks:
            - eth0
        volumes:
            - ./node.json:/bin/node.json
    node6:
        image: bserv128:0.1
        ports: 
        expose:
            - "6550"
        hostname: node6
        entrypoint: ./bin/serv -test.run TestServer -test.v
        networks:
            - eth0
        volumes:
            - ./node.json:/bin/node.json
    node7:
        image: bserv128:0.1
        ports: 
        expose:
            - "6550"
        hostname: node7
        entrypoint: ./bin/serv -test.run TestServer -test.v
        networks:
            - eth0
        volumes:
            - ./node.json:/bin/node.json
    node8:
        image: bserv128:0.1
        ports: 
        expose:
            - "6550"
        hostname: node8
        entrypoint: ./bin/serv -test.run TestServer -test.v
        networks:
            - eth0
        volumes:
            - ./node.json:/bin/node.json
    node9:
        image: bserv128:0.1
        ports: 
        expose:
            - "6550"
        hostname: node9
        entrypoint: ./bin/serv -test.run TestServer -test.v
        networks:
            - eth0
        volumes:
            - ./node.json:/bin/node.json
networks:
    eth0:
        name: net1
        driver: overlay
        attachable: true

I would like to know which part of the docker-compose.yaml file is causing the issue above.

You mean docker stack deploy?

You got a Docker Swarm set up?

What does docker node ls tell you?

Yes, “docker stack deploy” is the command I use. I have run “docker node ls” to check the communication status among all nodes, and the result shows that all nodes can communicate with each other.

I am surprised this compose file can even be deployed with docker stack deploy, as it uses relative host paths, which to my knowledge shouldn’t work with swarm (unless it resolves the relative path to an absolute path itself). The file would need to exist on all nodes in the same location; otherwise the container will fail to start on the other nodes, which might explain why the services are running only on the node where the file exists. Since there are no deployment constraints configured, the services should be spread amongst the nodes.
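For reference, explicitly pinning services to certain nodes would look roughly like this (illustrative only; your file has no such section, so Swarm is free to schedule the tasks on any node):

services:
    node0:
        image: bserv128:0.1
        deploy:
            placement:
                constraints:
                    # example constraint: only schedule this task on worker nodes
                    - node.role == worker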

If node.json is a configuration file, it should be declared as a config instead of a local file, see: https://docs.docker.com/compose/compose-file/compose-file-v3/#configs-configuration-reference. This removes the requirement that the file exist on all nodes in the same location.
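A minimal sketch of what that could look like, assuming node.json sits next to the compose file and the service expects it at /bin/node.json (only one service shown; the config name node_json is illustrative):

services:
    node0:
        image: bserv128:0.1
        configs:
            # mount the swarm config into the container at the expected path
            - source: node_json
              target: /bin/node.json
configs:
    node_json:
        # read the config content from the local file on the machine
        # where "docker stack deploy" is run; swarm distributes it to the nodes
        file: ./node.json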

node.json is a file that is read by the serv executable, and I want it to be placed in the same directory as serv (i.e., /bin/) when the service cluster starts.

It is not copied - it is bound from the host file system into the container file system. As such, it must exist on all nodes on the same path.

You should consider using an NFSv4 remote share: copy the file to the share, then use a Docker named volume so that all nodes can access the shared files from the remote share.
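A sketch of an NFS-backed named volume, assuming an NFSv4 server reachable at 192.168.1.10 that exports /export/shared (server address, export path, and container mount point are placeholders):

services:
    node0:
        image: bserv128:0.1
        volumes:
            # mount the shared NFS-backed volume into the container
            - shared:/data
volumes:
    shared:
        driver: local
        driver_opts:
            # each node mounts the NFS export itself, so the file only
            # has to exist once on the NFS server
            type: nfs
            o: addr=192.168.1.10,nfsvers=4,rw
            device: ":/export/shared"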

Did you copy the bind mount file to all nodes? Because Swarm won’t do it.

And if it’s not there, the container will fail to start. Then all tasks will eventually end up on the manager.

Yes, I want to mount the file “node.json” on all nodes. Maybe the local volume setting is what causes all services to be deployed on the manager.

I am a beginner. Can you tell me how to use NFSv4 in my docker-compose.yaml file? Thanks very much.

Please try the forum search; it should provide plenty of examples.

Did you copy your bind mount file (node.json) to all nodes? You could just do it manually. Alternatively, you can create a config to share the file across nodes in Docker Swarm (Doc). I am not a big fan of NFS, because you create a single point of failure for all your services when it’s not set up redundantly.

You can check whether your services have failed on the worker nodes:

docker service ls
docker service ps <service>

Which will show something like:

ID             NAME                                               IMAGE               NODE      DESIRED STATE   CURRENT STATE           ERROR                       PORTS
4hqfcwgfmkel   traefik_dockersock.h4f0hprl28k97n9mq60b1c0ih       nginx:alpine-slim   c1        Ready           Ready 1 second ago
h1fgoc754s3g    \_ traefik_dockersock.h4f0hprl28k97n9mq60b1c0ih   nginx:alpine-slim   c1        Shutdown        Failed 1 second ago     "task: non-zero exit (1)"
xliahfjhybo6    \_ traefik_dockersock.h4f0hprl28k97n9mq60b1c0ih   nginx:alpine-slim   c1        Shutdown        Failed 7 seconds ago    "task: non-zero exit (1)"

Sorry, I’m not clear on what you mean. Specifically, how do I use “docker config” to place node.json into the /bin/ directory of the services on all nodes? Could you give me an example?

I already linked the documentation above: