Docker swarm: service or container?

Hello, I’m starting to play with Docker Swarm and have created a cluster with 3 nodes. Now I would like to create a container/service running CentOS 7.9 with systemd, but I’m not sure how to proceed.
With standalone Docker everything works for me, but here I don’t quite understand.
If I create a container following the directions given for CentOS 7, would that be enough to make CentOS 7 replicate on all 3 nodes?
Or, to get replication, do I have to create a service, i.e. with the command docker service create?
I’m not very clear on the difference between a service and a container in swarm.
Thank you.

A greeting
Richard

If you want a swarm deployment, you either need to use docker service create, or create a compose file and use docker stack deploy to deploy a stack consisting of one or more services. Swarm services are scheduled on nodes that match the deployment constraints (if none are specified, then on any of the nodes); the scheduler creates a service task on each selected node, which in turn creates the container. Only swarm containers can be connected to the ingress network and participate in the routing mesh.
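For illustration, both approaches side by side (a sketch; the service name, replica count, and the stack name mystack are just examples):

docker service create --replicas 3 --name centos_test centos:7

# or, with a compose file deployed as a stack:
docker stack deploy -c docker-compose.yml mystack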

When working with swarm, you need to keep the following things in mind about volumes:

  • volume declarations are immutable (= the configuration can not be changed; the volume needs to be deleted and recreated, see the sketch after this list)
  • volumes are local scoped (as in managed locally on a node; if you want to delete one, it needs to be deleted on each node individually!)
  • the volume declaration will be created on a node the first time a container using it is deployed there
  • volumes should point to a remote share/storage, as a container could be spawned on any node that satisfies the deployment constraints
  • binds (= where a host path is mapped into a container path) are usually unsuited for multi-node deployments
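A minimal sketch of the first two points (the volume name mydata and its options are hypothetical):

# configuration changes require delete + recreate; there is no in-place update:
docker volume rm mydata
docker volume create --driver local --opt <changed options> mydata

# and since volumes are node-local, the rm has to be repeated on every node
# that ever ran a task using the volume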

docker run does not create swarm services; it only creates containers on the node you execute the command on or against (e.g. if a docker context is used).
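For example, with a context pointing at another node (node1 is a hypothetical context name):

docker --context node1 run -d centos:7 sleep infinity

This starts a plain, unmanaged container on that single host; swarm will neither replicate nor reschedule it.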

Hi,
thank you very much for helping.
So the nodes should have the volumes shared between them, right?
So if I have 3 nodes, must there be one volume that is seen by all three (such as a disk on a NAS), or just one disk on each node, identical to the others? Example: each node has a 1.8TB disk /dev/sdb mounted on /data, the same on all three nodes.
And must the volume always be specified?
Having clarified this, I can’t figure out why, when I create a service with CentOS 7 using the following command:
docker service create --replicas 3 --name centos_test centos:7
docker keeps repeating the cycle ready → starting → pending → assigned.
Is this because I didn’t create the service from a compose file, or because I didn’t specify a volume?

Thanks again.

A greeting
Richard

You need docker volumes backed by a remote share, like NFS or CIFS, or a volume plugin that provides such functionality. I used portworx-dev for this, but it appears they stopped updating the image required to run it on swarm, while the images required to run it on kubernetes are up to date. If you have no idea what to use, I would suggest trying NFS v4 (it causes fewer problems than NFS v3 or CIFS).

Each node must be able to mount the remote share.
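If you want to try an NFS-backed volume from the CLI first, a sketch (the address and export path are placeholders):

docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.x.y,nfsvers=4 \
  --opt device=:/export/on/share \
  example

Keep in mind this creates the volume only on the node you run the command on.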

This is the opposite of what you need. You don’t need identical local node storage. The problem you need to solve is: how does a container access the existing data when it’s started on a different node?

Check docker service logs {service name} to see why the service task containers die and are respawned. Sometimes docker service ps {service name} --no-trunc provides useful hints.
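For what it’s worth: a common cause with plain OS images like centos:7 is that the image’s default command exits immediately, so swarm endlessly reschedules the task. A sketch that keeps the tasks alive:

docker service create --replicas 3 --name centos_test centos:7 sleep infinity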

You don’t have to use a compose file to run swarm services. However, it is highly recommended, as the deployment configuration is then stored in the compose file and can be versioned in git.
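A minimal stack file sketch for the service above (the file name and the stack name mystack are arbitrary):

version: "3.8"
services:
  centos_test:
    image: centos:7
    command: sleep infinity
    deploy:
      replicas: 3

Deployed with docker stack deploy -c docker-compose.yml mystack.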

Thank you.
What is the recommended way to get docker swarm to work with NFS?
I thought about doing it this way:
In the fstab of each server I mount the NFS volume on the /app folder; I do this on all nodes in the swarm.
In the compose file I then set the source and destination folders, like this:

    volumes:
      - myapp:/home/node/app

volumes:
  myapp:

What do you think?
Thanks again for the help

Your compose file declares a local named volume without any configuration.

It really depends on whether you want to create a volume that is backed by a bind, or want it to be backed by NFS directly. I prefer the second, as it allows new cluster nodes to use the volumes right away, without having to tinker with fstab.
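For contrast, the bind-backed variant of your fstab idea would look roughly like this (assuming /app is the path your fstab entry mounts the NFS share on):

volumes:
  myapp:
    driver_opts:
      type: none
      o: bind
      device: /app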

Here is an example of an NFS-backed volume:

volumes:
  example:
    driver_opts:
      type: nfs 
      o: addr=192.168.x.y,nfsvers=4
      device: :/export/on/share
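A service then references the volume by its top-level name, for example (the service name and container path are just placeholders):

services:
  app:
    image: centos:7
    volumes:
      - example:/data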

Thanks again.
In device: :/export/on/share, do I have to write the NFS server/NAS directory?
Running docker compose up gives me:

yaml: line 19: did not find expected key.
My file is as follows:

version: '3'
services:
  centos:
    build:
      context: /appoggio
      dockerfile: Dockerfile
    image: 127.0.0.1:5000/appoggio
    deploy:
      replicas: 3
    volumes:
      type: volume
      source: testnfs
      target: /pippo
volumes:
  example:
    driver_opts:
      type: nfs
      o: addr=172.16.180.34,nfsvers=4
      device: :/volume1/dockertest
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    tmpfs:
      - /tmp
      - /run
    restart: always
    privileged: true
    networks:
      driver: overlay
      attachable: true
networks:
    frontend:

Line 19 is device: :/volume1/dockertest.

Thank you so much.
Best regards
Riccardo

Your volume declaration looks messy:

  1. You declare example as the volume name, but use testnfs in your service.
  2. The line underneath device: doesn’t belong there.
  3. None of the sibling elements of “driver_opts” belong there.

It should look like this:

volumes:
  testnfs:
    driver_opts:
      type: nfs
      o: addr=172.16.180.34,nfsvers=4
      device: :/volume1/dockertest
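Note that the service-level mount in your file is also missing its list dash; the long syntax would look like this (sketch):

    volumes:
      - type: volume
        source: testnfs
        target: /pippo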

This is how it works on my Syno-NAS as well.
It should work for you if dockertest is a share with NFS permissions enabled (allowing clients from the CIDR range your docker nodes are in) and the share is located on volume1.

Be sure to keep the volume characteristics listed earlier in this thread in mind and act accordingly.