Swarm nodes are not able to run freshly built images

docker-1.12.0-rc4
Created a docker swarm with 2 managers (one leader) and one worker
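Roughly, a three-node cluster like the one described above can be created as follows (a minimal sketch; 10.0.0.1 is a placeholder address and the join-token flow may differ slightly between the 1.12.0 release candidates):

    # on the first manager (becomes the leader)
    docker swarm init --advertise-addr 10.0.0.1

    # print the join commands for the remaining nodes
    docker swarm join-token manager
    docker swarm join-token worker

    # on the second manager and on the worker, paste the matching join command, e.g.
    docker swarm join --token <token-from-join-token> 10.0.0.1:2377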

  1. git clone https://github.com/docker/example-voting-app on leader
  2. cd vote
  3. docker build -t vote -f Dockerfile . builds the vote image, which is then available only on the leader
  4. docker service create --name redis -p 6379:6379/tcp redis:3.2.1-alpine
  5. docker service create --name postgres -p 5432:5432/tcp postgres:9.4
  6. docker service create --replicas 4 --name vote -p 5000:80/tcp vote:latest python app.py

ACTUAL:
docker service tasks vote - keep running this and you will notice that the tasks on the other swarm nodes (every node except the leader, where the vote image is present) are NOT able to reach the RUNNING status.
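If it helps, the error for an individual failing task can usually be read from the task object itself (a sketch; the task ID comes from the docker service tasks output, and using the .Status.Err field is my assumption about where the message lands):

    # pick the ID of a non-running task from `docker service tasks vote`, then:
    docker inspect --format '{{.Status.Err}}' <task-id>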

EXPECTED:
The vote image should automatically be copied to the other swarm nodes so that the containers instantiated on those nodes reach the desired state.

OBSERVATION:
If we explicitly build the vote image on each of the swarm nodes (so that the image is available on all nodes), then the containers reach the RUNNING state on the swarm without failures.
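For completeness, an alternative to building the image on every node is to push the locally built image to a registry that every node can reach and create the service from that name (a sketch; registry.example.com:5000 is a placeholder for whatever registry you use):

    # tag the locally built image with the registry name and push it
    docker tag vote:latest registry.example.com:5000/vote:latest
    docker push registry.example.com:5000/vote:latest

    # create the service from the registry image so each worker can pull it itself
    docker service create --replicas 4 --name vote -p 5000:80/tcp \
      registry.example.com:5000/vote:latest python app.py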

I have faced a similar issue with a private registry. After researching, I figured out that we need to pass the --with-registry-auth flag to docker service create so that the swarm manager forwards the registry authentication details to the swarm agents for pulling the images.
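A minimal sketch of that flow against a private registry (same placeholder registry as above; this assumes --with-registry-auth is available in your build, since the flag was still settling during the 1.12 release candidates):

    # authenticate on the manager where the service is created
    docker login registry.example.com:5000

    # --with-registry-auth forwards the stored credentials to the agents doing the pull
    docker service create --with-registry-auth --replicas 4 --name vote \
      -p 5000:80/tcp registry.example.com:5000/vote:latest python app.py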

Can you check the logs on the swarm workers (the Docker engine logs, not the container logs)? They should tell you the reason for the failure.
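For example, on a systemd-based host the engine logs can be followed with journalctl (assuming the daemon runs as the docker.service unit; non-systemd hosts typically log to a file such as /var/log/docker.log instead):

    # on each worker, follow the engine logs while the vote tasks are being scheduled
    journalctl -u docker.service -f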