How to ssh into a swarm service container?

Greetings & Happy New Year!

I just set up a Docker swarm on my Ubuntu laptop, with one manager and one worker node, and created a service.

A ‘docker service ls’ shows the service is running.

Since the service container must be running inside one of the two nodes, which are VirtualBox docker-machines, I did a ‘docker-machine ssh’ into both and tried to display the service container with ‘docker ps’. But the output was an empty list.

This must be a newbie question: how can I ssh into a service container?

To see all containers running and stopped try
docker ps -a

thanks much for responding!

the output of ‘docker ps -a’ is empty on both docker-machine nodes, manager and worker.

Could it be that the service is running but the service container is not? Is creating a service the same as deploying one?

I have to admit, despite having gone through the swarm documentation several times, I’m still confused about running swarm nodes as docker-machines, and about creating a swarm service versus deploying one. One section talks about deploying a swarm service with a yml file. Another seems to say creating and deploying a swarm service are the same thing.

would an expert mind clarifying this confusion with a simple sentence or two?

A stack deployment (based on a docker-compose.yml) will generate a set of services. Regardless of whether you use docker stack deploy or docker service create, the created services are the same kind. Usually all services that make up the solution are bundled in a docker-compose.yml as a stack.

If you did not use docker-compose.yml based deployments so far: start using them, they will make your life much easier and your actions repeatable. On top you can version your configurations in an SCM of your choice.
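As a minimal sketch (the service name, image and ports are just placeholders, not from this thread), a docker-compose.yml for a stack could look like this:

```yaml
version: "3.7"
services:
  web:
    image: nginx:alpine      # placeholder image
    ports:
      - "8080:80"
    deploy:
      replicas: 2            # swarm scheduler creates two tasks
```

You would then deploy it from a manager node with something like ‘docker stack deploy -c docker-compose.yml mystack’, which creates a service named mystack_web.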

If you execute the command docker service ps &lt;servicename&gt;, you should see on which node the tasks (~= containers) for the service are running. Then open an ssh shell to that node and use docker ps to figure out the container id.
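As an illustration (the service and node names here are made up), the lookup could go something like this:

```shell
# On the manager: find which node runs the task
docker service ps myservice
# The NODE column shows something like "worker1"

# SSH into that docker-machine, then find the container id
docker-machine ssh worker1
docker ps --filter name=myservice
```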

Hi Metin,

thanks for sharing your insight. I finally realized I misinterpreted the option --hostname and assigned a name that is neither the manager nor the worker node, thinking it was like the name of a container. I also learned that, because my nodes are docker-machine VMs, the --mount option must not use an absolute path on the host, but rather a shared folder name set at the VM level. After that, I could do as you suggested and see the service container running. So a created service, though it shows up in ‘docker service ls’, is not necessarily running, as the deployment might not succeed if there are still errors.
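To illustrate that mount point (the paths and names here are hypothetical): with VirtualBox docker-machines, the source of a bind mount has to be a path that exists inside the VM, such as a VirtualBox shared folder, not an arbitrary path on the laptop:

```shell
# source= must be a path visible inside the docker-machine VM
# (e.g. a shared folder configured at the VM level), not the physical host
docker service create \
  --name myservice \
  --mount type=bind,source=/hosthome/data,target=/data \
  nginx:alpine
```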

I’ll read about stack deployment.

thanks a lot!
Ben

With a service, the scheduler will search for a suitable node that meets all defined deployment constraints, then create a deployment task that actually creates and runs the container. Usually people add node labels and use them as deployment constraints to stick a container to a specific node or node group. --hostname really is just the container's hostname.
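A sketch of that label/constraint pattern (the label key, value and names are arbitrary examples):

```shell
# On a manager: attach a label to the node
docker node update --label-add storage=ssd worker1

# Constrain the service to nodes carrying that label
docker service create \
  --name myservice \
  --constraint 'node.labels.storage == ssd' \
  nginx:alpine
```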

Use docker service ps &lt;servicename&gt; to list the service's tasks. Add --no-trunc if error messages are truncated.