Docker swarm mode and local images

I’ve recently run into a problem using Docker 1.12 swarm mode with locally built images.
More precisely, I have a swarm cluster consisting of 2 worker nodes and 1 manager.
I built an image locally, but when I try to create a service based on that image, I get "image not found" errors in the logs of the other swarm workers:

time="2016-10-03T13:34:31.382503791+02:00" level=error msg="Not continuing with pull after error: Error: image library/test-image:latest not found"
time="2016-10-03T13:34:31.382765420+02:00" level=error msg="pulling image failed" error="Error: image library/test-image:latest not found" module=taskmanager
time="2016-10-03T14:50:03.070983105+02:00" level=error msg="Attempting next endpoint for pull after error: unauthorized: authentication required"

My guess is that this happens because the image was built only on the manager machine and is not present on the worker machines.
Is there a way to automatically build images on all workers? I do not want to use a remote image repository, since I make a lot of builds and would end up with a lot of junk data in that repo.


You are right: the manager node does not automatically share local images with the other nodes. To achieve this, you must use a registry accessible from all the nodes of your cluster. But you do not have to use an external remote repo: you can run the registry image as a service on the swarm, accessible to all the nodes, like this:

docker service create --name registry --publish 5000:5000 registry:2

By doing this, all the nodes will be able to connect to the registry on localhost:5000 and pull the images they need to run the containers of your service.

I have a swarm cluster created in AWS. I created the local registry on the manager node using the docker service create command and pushed the images to the registry. I am not able to deploy the services to the nodes using the local registry images; it says No such image: localhost:5000/ . Any idea how to fix this?

You must create the "registry" service publishing its port, to make sure the registry can be reached from each node via localhost:5000:
docker service create --name registry --publish 5000:5000 registry:2
Then you can test it on any node of your cluster by doing:
curl localhost:5000/v2/_catalog

You can now tag and push your image :
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage

And create services from that image :
docker service create --name myservice localhost:5000/myimage
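Since the original poster mentioned making a lot of builds, here is a sketch of the rebuild-and-redeploy cycle against that swarm-local registry (the names myimage and myservice are just the examples from the commands above):

```shell
# Rebuild and push the new image to the swarm-local registry
docker build -t localhost:5000/myimage .
docker push localhost:5000/myimage

# --image triggers a rolling update of the service's running tasks
docker service update --image localhost:5000/myimage myservice
```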

If you want a sandbox to test it, you can use this awesome lab:

The problem with this approach is that you expose port 5000 to the world, which means you can’t safely run the registry inside the swarm using the --insecure-registry option.

In swarm mode you can’t use the IP:port form of the ports syntax to restrict its access to localhost: the IP is silently ignored, because it assumes you’re publishing to the ingress network. In compose there is a long form for ports (using mode: host), but that does not accept a string for the published field.

The above makes it really difficult to see a way of running a local registry inside a swarm without having to add redundant security.
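One partial workaround, assuming Docker 17.06 or later: the long --publish syntax supports mode=host, which binds the port only on the nodes actually running a registry task instead of on the whole ingress mesh (the node.role constraint here is just an example):

```shell
# Publish in host mode so the port is bound only where a task runs,
# and pin the registry task to the manager node
docker service create --name registry \
  --publish published=5000,target=5000,mode=host \
  --constraint node.role==manager \
  registry:2
```

The trade-off is that localhost:5000 then only resolves on the node running the task, so the other nodes would have to reach the registry by that node’s address instead.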


I see the point of this approach; it’s definitely a great thing to set up. But when I run the command to start the registry as a service in my swarm cluster, it exits right away. Am I missing something?

Does this command end in an error?
docker service create --name registry --publish 5000:5000 registry:2
What are the logs? (docker service logs <service_id>, or docker logs <container_id> on old Docker versions)
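For a crash-looping service, a quick way to see the restart history and the task error messages (service name registry assumed, as above):

```shell
# Show the task history, including failed attempts and their full error messages
docker service ps --no-trunc registry

# Aggregate logs from all of the service's tasks (Docker 17.05+)
docker service logs registry
```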

Thanks for responding. Your question made me dig a bit and realize a service is not the same as a container. When I run docker service ls, the service is listed. But I found the service’s container keeps going down and coming back up again repeatedly: one moment docker ps shows the container running, the next it’s not there, then it’s back again. If I issue the same command every second, 5 out of 6 times it shows the container not running. Apparently it starts and stops right away.

A docker logs command gives the following:
docker logs 5889447b376d
time="2017-12-06T20:49:06Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.7.6 version=v2.6.2
time="2017-12-06T20:49:06Z" level=info msg="redis not configured" go.version=go1.7.6 version=v2.6.2
time="2017-12-06T20:49:06Z" level=info msg="Starting upload purge in 36m0s" go.version=go1.7.6 version=v2.6.2
time="2017-12-06T20:49:06Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.7.6 version=v2.6.2
time="2017-12-06T20:49:06Z" level=fatal msg="open /certs/domain.crt: no such file or directory"

I don’t understand why it needs redis. Would you mind sharing what I should do next?

thanks much,
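For the record, the "redis not configured" line is only informational; the crash is the fatal "open /certs/domain.crt" error, which usually means the registry was started with TLS environment variables set (e.g. REGISTRY_HTTP_TLS_CERTIFICATE) but without the certificate files mounted. A way to check, assuming the service is named registry:

```shell
# Inspect the env vars and mounts the service was created with
docker service inspect registry \
  --format '{{json .Spec.TaskTemplate.ContainerSpec.Env}}'
docker service inspect registry \
  --format '{{json .Spec.TaskTemplate.ContainerSpec.Mounts}}'

# If REGISTRY_HTTP_TLS_CERTIFICATE points at /certs/domain.crt, either mount
# the certificates into the service or remove the TLS variables to run plain HTTP.
```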

I have this exact problem…

How are people in 2021 handling this (common) scenario?

This is where I’m at: is it possible to publish the same port in both "ingress" and "host" PublishModes?

I too am trying to run a registry for my swarm so I can use locally built images with compose. This thread is exactly my issue; is there still no solution in 2022? The best I can think of is to run with "redundant security" by using UFW to block port 5000 on all swarm nodes. Kind of annoying. Isn’t this something that should be built into swarm? Is everyone out there using the public registry or something? I need this to work without internet access. I’m using overlay networks specifically to avoid signing my own certificates. How come Docker can make an overlay network automatically, but it can’t figure out that I want an image I built on the swarm manager deployed to the workers? This seems like it should be a basic feature, without even needing to run a registry myself. pulls out hair

What’s wrong with running a registry of your choice? If someone uses private Docker Hub repos, the container registry of their cloud provider, or a centralized container registry in their organization, why would they want to run (and waste resources on) a registry per swarm cluster?

There are plenty of good free container registries, like Harbor, JFrog Container Registry, or the built-in registries in Nexus3 or GitLab. All of those have authentication and authorization built in.