Compose, CLI, or API for adapting resources to individual stack instances

Let's say I am running a stack defined in a compose file using Swarm over a large number of worker nodes. Let's also say that I am using global mode for each service, so each node runs the full stack.

What would be the best approach to handle the resources that need to be unique per running instance (notably, but not limited to, secrets, labels, or storage)? You can change the compose file, or create individual ones, on the fly. Or use the CLI/API. I suspect that the better approach is the latter, but is there a consensus on this? Am I wrong?

You want individual instances of the services in your stack to access different resources? That sort of goes against the idea of a stack definition.

I would probably create multiple stacks, each using two compose files. The first compose file would be your current one that defines the services, and the second would contain project-specific overrides.
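To make the two-file idea concrete, here is a minimal sketch. The file names (`docker-compose.yml`, `site-a.override.yml`) and the stack name are hypothetical; the point is that `docker stack deploy` accepts multiple compose files, with later files overriding earlier ones, so the override file only needs the values that differ for this particular stack:

```shell
# Shared service definitions plus a per-project override file.
# Values in site-a.override.yml (secrets, labels, etc.) take
# precedence over those in docker-compose.yml.
docker stack deploy \
  -c docker-compose.yml \
  -c site-a.override.yml \
  mystack-site-a
```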

Interesting feedback, but that raises more questions…

Yes, I understand that this goes against the idea of a stack definition.

But let's take a very concrete example: let's say that for each worker node, a given service that is part of the stack needs a TLS certificate. Normally, you want each individual instance to have its very own certificate, i.e. no certificate reuse, since reuse is against basic security policy. So you want to create per-instance secrets that contain the certificates.

As an extra item to consider: let's say that additional worker nodes are added regularly, and that some may be deleted or stopped.
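One way to handle nodes coming and going is a small reconciliation script run from a manager node. This is a sketch under the assumption that each node gets a secret named after its hostname; the `certs/` directory and the `cert-<hostname>` naming scheme are hypothetical and would come from your own PKI tooling:

```shell
# Create a per-node secret for every worker that does not have one yet.
# Safe to re-run whenever nodes are added to the swarm.
for node in $(docker node ls --format '{{.Hostname}}'); do
  if ! docker secret inspect "cert-${node}" >/dev/null 2>&1; then
    docker secret create "cert-${node}" "certs/${node}.pem"
  fi
done
```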

Would you still recommend two compose files? I guess the second compose file would be per instance? Is it possible to reference a specific container on a specific worker node? I don't think so, since compose files refer to services, not running containers. But there may be something I missed.

Conceptually, each instance or “replica” is the same object as far as the orchestrator is concerned. If, for instance, you expose a service via port mapping, the mapping is global to the cluster. A request made to the mapped port may be routed to any container of that service on any node of the cluster - so configuring instances differently would be counterproductive.

Still, if you really want to, there are some ways. The actual work would have to be done by your application. What Docker can give you is a way of passing some metadata into your services, via templates. You can use templates to provide hostnames, volume mount information, or environment variables to your services. Available template values include the service name and the node hostname. So, you could use something like this:

docker service create --name myservice --env CERTNAME={{.Service.Name}}-{{.Node.Hostname}} myimage

and then use the CERTNAME variable to locate the correct certificate.
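Inside the container, the application (or a small entrypoint wrapper) would then resolve the secret path from that variable. A minimal sketch, assuming secrets are mounted under the default `/run/secrets` and named to match `CERTNAME` (both the naming scheme and the fallback value here are illustrative, not part of any Docker API):

```shell
#!/bin/sh
# CERTNAME is injected by the service template,
# e.g. "myservice-node1"; the fallback is purely illustrative.
CERTNAME="${CERTNAME:-myservice-node1}"
# Swarm mounts secrets under /run/secrets by default.
CERT_PATH="/run/secrets/${CERTNAME}"
echo "using certificate at ${CERT_PATH}"
```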

You can find a list of available template values here.

Yes, I understand that I am trying to do something that is not along the lines of what Swarm is made for. It is not a SaaS or cloud cluster, or something that needs multiple instances of services to scale for a given job. What I am trying to do is something that gets deployed at customer sites to perform the same task, but where there is some binding between the task and the site (whatever it is). On top of that, there are things of an administrative nature, such as the certificate I mentioned earlier.

Now, will Docker/Swarm want to venture into anything other than the "simpler" cloud-hosted service? Actually, even for that type of software deployment, someone who wants to leverage the ability to use different cloud providers (for whatever reason, but cost is certainly one of them) will, I am sure, be faced with the need to tailor things somewhat on a per-instance basis (or per type of instance). This is theoretical and I do not have an example in mind, but anything that is not 100% uniform may require some adaptation.

The template stuff appears too limited.

So, I guess my original question of compose vs. CLI/API adaptation is answered: it is up to the application to use the CLI/API to adapt the instances.
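For completeness, the same per-node reconciliation can be driven through the Engine API directly rather than the CLI, which is convenient when the adaptation logic lives inside an application. A sketch assuming access to the local Docker socket and API version 1.41 (adjust the version to your daemon):

```shell
# List the hostnames of all swarm nodes via the Engine API,
# as a starting point for creating per-node resources.
curl --silent --unix-socket /var/run/docker.sock \
  http://localhost/v1.41/nodes | jq -r '.[].Description.Hostname'
```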