docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
ifot9ymy1u8mcc677ovzyijba * ip-10-200-1-5 Ready Active Leader 24.0.7
rb5bd8c3jqqyymi6beavbalp7 ip-10-200-1-6 Ready Active 24.0.7
8od31dsx3c34wndwq4jqy4rn0 ip-10-200-1-37 Ready Active 24.0.7
docker service ls (from the manager node)
ID NAME MODE REPLICAS IMAGE PORTS
tvywlv5u7057 registry replicated 1/1 registry:2 *:5000->5000/tcp
I can see port 5000 listening with netstat -an | grep 5000 (on all nodes):
tcp6 1 0 :::5000 :::* LISTEN
telnet localhost 5000 doesn't work on all nodes; it works on only one node, and that node is not necessarily the docker swarm leader.
On my manager, if I try to push images to the local registry, it fails (output below).
I can't telnet localhost 5000 on the manager node.
Not sure whether IPv6 is causing the issue.
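One way to rule IPv6 in or out is to hit the registry explicitly over IPv4 and over IPv6 (a quick diagnostic sketch, assuming curl is available on the nodes):

curl -4 http://127.0.0.1:5000/v2/          # force IPv4
curl -g -6 http://[::1]:5000/v2/           # force IPv6
# a tcp6 :::5000 listener normally accepts IPv4 connections too (dual-stack),
# so if the -4 call fails on a node, the swarm routing mesh is the more likely suspect.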
docker compose push
[+] Pushing 1/1
redis Skipped 0.0s
Get “http://127.0.0.1:5000/v2/”: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I was using the following syntax to create the local registry. I am following the doc step by step, as described here: https://docs.docker.com/engine/swarm/stack-deploy/
docker service create --name registry --with-registry-auth --publish published=5000,target=5000 registry:2
So this is not working as expected, since port 5000 is only reachable on one node.
This one worked: I can see port 5000 and run curl http://localhost:5000/v2/ on all nodes, and it returns {}. Note that I added --mode global and mode=host:
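With that first syntax the port is published in the default ingress mode, so the routing mesh should answer on :5000 on every node. When it only answers on one node, a common culprit is that the swarm's own ports are blocked between the nodes (a hedged check; adjust for your firewall/security groups, the peer IP below is just an example):

# the routing mesh needs these open between all swarm nodes:
#   2377/tcp (cluster management, managers only), 7946/tcp+udp (node gossip), 4789/udp (VXLAN overlay)
sudo ss -lntu | grep -E ':2377|:7946|:4789'    # what is listening locally on this node
nc -zv 10.200.1.6 7946                          # reachability check from another node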
docker service create --name registry --mode global --publish mode=host,published=5000,target=5000 --with-registry-auth registry:2
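For reference, --mode global plus --publish mode=host bypasses the ingress routing mesh: each node binds port 5000 directly, and each task is an independent registry with its own (empty) storage. A quick way to see that, assuming the service name registry as above:

docker service ps registry      # one task per node in global mode
ss -lnt | grep 5000             # the port is bound per node, not via the ingress mesh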
But the other nodes are still not able to pull from the registry.
All compose YML files are exactly the same as in the linked doc.
docker image pull 127.0.0.1:5000/stackdemo
Using default tag: latest
Error response from daemon: manifest for 127.0.0.1:5000/stackdemo:latest not found: manifest unknown: manifest unknown
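Since each node now runs its own registry task with its own storage, the image only exists in the registry on the node where the push happened. The registry's standard catalog endpoint makes this visible (run it on each node; a diagnostic sketch):

curl http://localhost:5000/v2/_catalog
# expected: the node that received the push lists "stackdemo",
# while the other nodes return an empty repository list.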
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
127.0.0.1:5000/stackdemo latest 0a3bbfac7e99 10 hours ago 84.6MB
docker stack ps stackdemo
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
lvjx5m9j9qsq stackdemo_redis.1 redis:alpine ip-10-208-1-37 Running Running 24 minutes ago
u3yse2wv5zb6 \_ stackdemo_redis.1 redis:alpine ip-10-208-1-37 Shutdown Complete 24 minutes ago
na5y6yw2ikxm stackdemo_web.1 127.0.0.1:5000/stackdemo:latest ip-10-208-1-5 Running Running 26 minutes ago
zhsta8h2oish \_ stackdemo_web.1 127.0.0.1:5000/stackdemo:latest ip-10-208-1-5 Shutdown Complete 26 minutes ago
ooc15gnixrku \_ stackdemo_web.1 127.0.0.1:5000/stackdemo:latest ip-10-208-1-6 Shutdown Rejected 2 hours ago "No such image: 127.0.0.1:5000…"
localhost/127.0.0.1 is something different on every node and inside every container.
To access your registry from inside the Docker network, you should use the service name you assigned to it, which Docker's internal DNS automatically resolves to the IP of the container.
Your approach is not really the way to do it: with mode global you create multiple registries, but each one is a separate instance with its own storage. Also, you did not use any bind mount or volume, so your registry data is gone when you re-create your service/container.
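A minimal sketch of what a single, persistent registry service could look like instead (the volume name and the hostname constraint are illustrative, not taken from your setup):

docker service create --name registry \
  --publish published=5000,target=5000 \
  --constraint node.hostname==ip-10-200-1-5 \
  --mount type=volume,source=registry-data,target=/var/lib/registry \
  registry:2
# /var/lib/registry is the default storage path of the registry:2 image;
# the constraint pins the single replica (and its volume) to one node.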
Thanks @bluepuma77
I was following the official doc, which walks through each step of creating a registry and deploying an app on swarm, so it looks like the official doc (Deploy a stack to a swarm) isn't a great way to start learning swarm.
Anyway, I was able to solve the issue by not using swarm and instead running the docker registry directly on one of the nodes.
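For anyone following along, a standalone registry outside swarm usually boils down to something like this (volume name assumed; not necessarily the exact command used here):

docker run -d --name registry --restart=always \
  -p 5000:5000 \
  -v registry-data:/var/lib/registry \
  registry:2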
I agree the example with the local registry is bad; their example swarm seems to be running on a single node, which is not best practice.
With the registry port open, other nodes can access it via the node IP (not localhost/127.0.0.1). If it runs on the Internet, you can even assign a domain to it, but then you should protect it with user/pass; those can be set via env vars.
Be aware that support for Docker Swarm is limited, as the majority of users have moved on to k8s. It's usually just small dev teams that keep running Swarm because they can't afford the resources to run and maintain k8s. We still use Docker Swarm in production.
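One practical note when pushing/pulling over a node IP instead of 127.0.0.1: Docker treats a plain-HTTP registry on a non-localhost address as insecure, so every node's daemon needs it whitelisted (the IP below is just an example matching this setup):

# /etc/docker/daemon.json on every node, then restart the docker daemon
{
  "insecure-registries": ["10.200.1.5:5000"]
}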