How does Swarm mode decide to update a service?

I have several projects running inside several one-node swarms (one node, one project), and after about a year of maintaining them I’m totally confused.

When a service’s image tag is set from an environment variable, the service is updated on every docker stack deploy execution, even if the actual tag value has not changed. Except sometimes it isn’t.

If I only update mounted files, e.g. the Prometheus rules YAML, the prometheus service won’t be updated during stack deployment. Except sometimes it will.

In other words, I really don’t understand how exactly Swarm decides whether a service must be updated, so I can’t predict what docker stack deploy will actually do, and I can’t control it. The documentation is not clear on this topic.

Is there any source of truth around?

The source code is the source of truth. I’m not sure the docs go into the level of detail you are looking for.

Though, you can raise an issue in the SwarmKit GitHub repository, so that the maintainers can actually respond to it.

I can still share what I remember:

  • Swarm stack deployments always check whether a new image exists for the used image tag, and whenever the tag points to a new digest, the task is replaced with a task based on the image with the new digest. Swarm services always reference images as repo:tag@sha256:{digest}, regardless of whether the compose file declares repo:tag or repo:tag@sha256:{digest}.

  • Updating the content of configs and secrets won’t result in a new deployment. That’s why people usually append something like .v{n}, as in .v1, .v2, to the resource name; this requires updating the resource name in the service as well, which in turn results in an update of the deployed task.
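The image-resolution behavior described above can be observed, and even disabled, from the CLI. A sketch, assuming a deployed stack; the stack and service names below are placeholders:

```shell
# Show the fully resolved image reference (repo:tag@sha256:{digest})
# that Swarm stored for a service; "mystack_myservice" is illustrative.
docker service inspect \
  --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' \
  mystack_myservice

# Skip the registry digest lookup entirely on deploy, so an unchanged
# tag value never triggers a task replacement by itself:
docker stack deploy --resolve-image never -c docker-compose.yml mystack
```

The --resolve-image flag also accepts "changed" and "always" (the default), which is one lever for making deployments more predictable.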

My experience is that you can’t change a config’s or secret’s file content in place: the next docker stack deploy will throw an error saying it can’t update them. I always had to remove the whole stack first, which is not ideal.

Maybe you have pinned the image with a major.minor tag, but the patch level has changed. Then there is a new image version, which can result in a container update even though the “version” you reference stays the same.
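One way to check whether a pinned major.minor tag has silently moved is to ask the registry which digest it currently resolves to, without pulling the image; the image name here is just an example:

```shell
# Query the registry for the manifest behind a tag and compare the
# digest(s) against what your service currently runs.
docker manifest inspect --verbose nginx:1.25 | grep -i digest
```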

I declare my configs and secrets like this:

---
version: '3.7'

services:

  myservice:
    ...
    configs:
      - source: myconfig.v1
        target: /path/in/task-container/config.yaml


configs:
  myconfig.v1:
    file: /path/on/host/config.yaml

When I change the config.yaml on the host, I change the .v1 to .v2.

---
version: '3.7'

services:

  myservice:
    ...
    configs:
      - source: myconfig.v2
        target: /path/in/task-container/config.yaml


configs:
  myconfig.v2:
    file: /path/on/host/config.yaml

As a result, the tasks get updated on docker stack deploy. Once a config or secret is propagated in the Raft log, it will not be updated there; changing the handle makes it unique again. And if you update the handle multiple times, all the versions will still be deleted together with the stack when the stack is removed.
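A common workaround for the manual .v1/.v2 renaming (a sketch, not the poster’s setup) is to derive the suffix from the file’s content hash, so the handle changes exactly when the content does. This assumes your deploy pipeline performs variable substitution in the compose file, as the image-tag-from-environment setup in the original question implies; all paths and names below are illustrative:

```shell
# Write an example config file (stand-in for /path/on/host/config.yaml).
printf 'scrape_interval: 30s\n' > /tmp/config.yaml

# Derive a short suffix from the file content; any content change
# yields a new suffix, and therefore a new config name on deploy.
CONFIG_VERSION=$(sha256sum /tmp/config.yaml | cut -c1-8)
echo "myconfig.${CONFIG_VERSION}"

# The compose file would then reference the name via interpolation,
#   source: myconfig.${CONFIG_VERSION}
# and the deploy step becomes something like:
#   docker stack deploy -c docker-compose.yml mystack
```

This keeps the “unique handle” property the Raft log requires, without anyone having to remember to bump a version number by hand.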