Attempting to start a service with 2 tasks (replicas) on a swarm made up of 1 manager node and 1 worker node, using a private repo on Docker Hub as the source image. We expect to see one container running on the manager and one on the worker.
When we run this, the manager node starts its container successfully, but the worker node does not. We used the command:
docker service tasks --all
and the task on the worker node shows a state of Accepted but never transitions to Running.
On the manager node we performed a docker login with the credentials needed to access the private repo.
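For context, the login was done only on the manager, roughly as follows (the username is a placeholder, not the exact account we used); the resulting credentials should end up in the manager's local ~/.docker/config.json and were never copied to the worker:

# Run on the manager node only; the worker was never logged in
docker login -u <dockerhub-user>

# The stored credentials exist only in the manager's client config
cat ~/.docker/config.json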
Here is the output from docker:
~ $ docker service tasks access
ID NAME SERVICE IMAGE LAST STATE DESIRED STATE NODE
960us0bkppw32zcdmrth7l4wa access.1 access hkmconsultingllc/privaterepo:xxxx Running about a minute ago Running ip-192-168-33-175.us-west-2.compute.internal
7usb4649x4nnakn5wrdc5rwuv access.2 access hkmconsultingllc/privaterepo:xxxx Accepted 2 seconds ago Accepted ip-192-168-33-18.us-west-2.compute.internal
Separately, we also constrained the service to run only on the worker node, and that task likewise failed to start. Finally, we pushed the same image to a public repo and redeployed, and it worked fine on both the manager and worker nodes.
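The constrained variant looked roughly like this (the service name is illustrative and the placement expression is the standard node.role filter, which may differ slightly from the exact command we ran):

# Pin the service to the worker node only; this task also never left Accepted
docker service create --name access-worker --replicas 1 \
  --constraint 'node.role == worker' \
  hkmconsultingllc/privaterepo:xxxx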
Steps to reproduce the behavior:
- Create a private repo on Docker Hub and push an image to it
- Create a service with 2 replicas referring to the private repo image (see the sketch after this list)
- Use docker service tasks --all to confirm the task on the worker node never starts running
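A minimal sketch of the reproduction, assuming a service named access and the private image shown in the output above (name and tag are placeholders):

# docker login has already been performed on the manager, as described above

# Create the service with 2 replicas from the private image
docker service create --name access --replicas 2 hkmconsultingllc/privaterepo:xxxx

# List all tasks; the one scheduled on the worker stays in Accepted
docker service tasks --all access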