Docker Swarm start service on secondary node

Hey folks, does anyone know how to create (and start) a service that runs only on the secondary node?
I am trying, but I get an error; it seems the image is never transferred to the second node, so the service cannot start there.
Can anybody help me?

Thanks in advance!!

Regards
JJ

My swarm is composed of 2 servers, and I configured both of them as managers, but even so I can't start the container on the second node by running commands from the first node.

Details about my 2 swarm nodes (jacob1 and jacob2):

[root@jacob1 ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE        PORTS
p9dmst18vwmc   registry   replicated   1/1        registry:2   *:5000->5000/tcp
[root@jacob1 ~]#

[root@jacob2 ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE        PORTS
p9dmst18vwmc   registry   replicated   1/1        registry:2   *:5000->5000/tcp
[root@jacob2 ~]#

Here are my images on node jacob1 that I want to share with node jacob2:

[root@jacob1 ~]# docker image ls
REPOSITORY                 TAG      IMAGE ID       CREATED         SIZE
python-pubgfpp-personal    latest   1ed0ecae76fc   3 days ago      897MB
python-pubgtpp-personal    latest   d2781d610ebd   3 days ago      897MB
python                     3.7      11c6e5fd966a   3 weeks ago     876MB
registry                   2        2d4f4b5309b1   3 months ago    26.2MB
dockersamples/visualizer            f6411ebd974c   21 months ago   166MB
[root@jacob1 ~]#

Here is the error when I try to start the service on the 2nd node:

[root@jacob1 ~]# docker service create --name pubgfpp-personal1 --constraint node.id==ut11nwa5nxdkyydmafafuuf8f 1ed0ecae76fc
image 1ed0ecae76fc:latest could not be accessed on a registry to record
its digest. Each node will access 1ed0ecae76fc:latest independently,
possibly leading to different nodes running different
versions of the image.
lcwbecdmlzkn2is6vy7so5zyp
overall progress: 0 out of 1 tasks
1/1: No such image: 1ed0ecae76fc:latest

Swarm is not responsible for transferring images between nodes. Swarm relies on images being available from a container registry, which can be either a hosted repo like docker.io or quay.io OR a private container registry (which you need to set up yourself). Swarm will always pull the latest image for a tag and deploy it.
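For illustration: since your swarm already runs a registry service published on port 5000, one way to make a local image pullable by both nodes would be roughly this (a sketch, assuming the registry is reachable as localhost:5000 from each node; image name and node ID taken from your output above):

# tag the local image so its name points at the swarm-hosted registry
$ docker tag python-pubgfpp-personal:latest localhost:5000/python-pubgfpp-personal:latest

# push it, so any node in the swarm can pull it
$ docker push localhost:5000/python-pubgfpp-personal:latest

# create the service from the registry image name instead of a bare image ID
$ docker service create --name pubgfpp-personal1 \
    --constraint node.id==ut11nwa5nxdkyydmafafuuf8f \
    localhost:5000/python-pubgfpp-personal:latest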

Two manager nodes is a terrible idea! If either one becomes unhealthy, the cluster will be headless and, as such, will not accept any swarm-related commands anymore. Always use an odd number of manager nodes. A 3-node manager-only cluster is fine if you want HA but have a limited number of servers.
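In your 2-node case, a safer layout would be one manager plus one worker. A minimal sketch, assuming jacob2 is the one to be demoted (run on a manager):

# demote the second manager to a worker; jacob1 stays the single manager
$ docker node demote jacob2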

General notes:

Without placement constraints, the scheduler will schedule a task on any node, which will then create the container. If you want to pin a container to a specific node, you will need to add one or more node labels and use those as placement constraints.
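A minimal sketch of that, assuming a made-up label role=python and the registry image from above (run on a manager):

# attach a custom label to the target node
$ docker node update --label-add role=python jacob2

# schedule the service only on nodes carrying that label
$ docker service create --name pubgfpp-personal1 \
    --constraint node.labels.role==python \
    localhost:5000/python-pubgfpp-personal:latest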

Instead of creating services from the command line, you should consider writing docker-compose.yml files that create a stack of services. It will make your life easier in the long run.
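A minimal compose sketch for the service above (the image name and node label are the assumptions from the previous examples, and "pubg" is just an example stack name):

# write a stack file and deploy it as a stack
$ cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  pubgfpp-personal1:
    image: localhost:5000/python-pubgfpp-personal:latest
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.role == python
EOF
$ docker stack deploy -c docker-compose.yml pubg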

Thanks!!! And is there a way to start the service as active-standby???

Thanks, it is working now… I found a way!! :smiley:

Do you mind sharing your solution? A forum post marked as resolved, but with the solution not shared, is kind of meaningless… don't you agree?

Perfect. I agree with you, my apologies!

I was running the wrong command. Below is the right procedure, pulling the image from my Docker Hub repository onto the second node:

1 - pull the image from my Docker Hub repository onto the swarm's second node (run this command on the second node directly)

$ docker pull jjacob/python-personal:fpp1.0

2 - start the service from the manager (node1) with a placement constraint to avoid running this service on node1; in this case, you force the service to run on the second node (node2)

$ docker service create --name fpp-personal1 --constraint node.hostname!=node1 jjacob/python-personal:fpp1.0

Done, now I have the service running on the second node instead of the first one.
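To double-check where the task landed, you can list it from the manager (service name as created above):

# the NODE column shows which node is running the task
$ docker service ps fpp-personal1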

Thanks Metin! Regards JJ

Welcome!

One more hint:

If you pull the image from private repos that require a login, it is sufficient to perform docker login on one of the nodes and execute docker service create --with-registry-auth ... on that node. It will share the registry credentials with the other nodes.
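A sketch of that flow, reusing the repo and constraint from the commands above:

# log in once on the node where you create the service
$ docker login

# --with-registry-auth forwards the credentials to the agents on the other nodes
$ docker service create --with-registry-auth \
    --name fpp-personal1 \
    --constraint node.hostname!=node1 \
    jjacob/python-personal:fpp1.0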