Unique ENV var value per swarm replica instance

I would like to create a docker service and scale it to 3 instances.

These 3 instances form a cluster, and each one of the containers needs one of its arguments to have a unique value.

For example:

docker service create --replicas 3 --name myapp -e "UNIQUE_ID=[uniqueVal]" myimage -myarg1 [uniqueVal]

Where each managed instance gets some unique value for [uniqueVal].


Sadly, this goes against what a service is designed to be. Replicas in a service are meant to be identical things that are load balanced to do the same thing as the other replicas. The number of replicas is designed to be increased or decreased at any time, not just at creation.

There are two possibilities here:

One: You really want three separate services to run with the same image, but with some unique property, and you simply don’t want to manually create three separate services…

In that case you should write a shell script with a loop that runs three times and substitutes the unique values in each iteration.
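A minimal sketch of that loop, reusing the myapp/myimage names from the question:

# Create three one-replica services, each with its own unique value
for i in 1 2 3; do
  docker service create --replicas 1 --name "myapp-$i" \
    -e "UNIQUE_ID=$i" myimage -myarg1 "$i"
done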

Two: You really want a unified service (load balanced), but somehow still want each replica to do something different.

The main choice here would be to write your startup code, inside the container, to inspect its ID or something and apply the unique configuration at run time (when it is deployed).
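A hypothetical entrypoint sketch of that idea (myapp and its argument are reused from the question); it leans on the fact that a container’s hostname defaults to its container ID:

#!/bin/sh
# Derive a per-replica value at run time from the container ID.
# The value is unique, but, as noted below, it is not stable
# across rescheduling.
UNIQUE_ID="$(hostname)"
export UNIQUE_ID
exec myapp -myarg1 "$UNIQUE_ID"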

Of course, this gets more involved around the question of how each running container decides whether it is 1, 2 or 3, so that it can select the unique thing it is supposed to configure. When only the first three containers have been started, it might be easy.

Containers can come and go due to later commands to scale up or down, and failed containers are automatically removed and replaced. Their unique IDs will change, which makes deciding whether a container is now the new 1, 2 or 3 challenging. Whether you strictly care about being 1, 2 or 3, or just about being different, makes a difference in the complication level.

It is solvable; just possibly a little more work than it might seem at first glance.

Thanks for the reply

I guess this just seems like a fairly common situation. Too bad it’s not exposed in the tooling; others are asking for this as well: https://github.com/docker/docker/issues/24110

Each application needs a unique identifier because a 3rd party container (registrator) is responding to the docker host events and registers it with that unique id in consul. These services then form a cluster of available nodes. (I’m not talking about load balancing, but something like a shared set of grid memory they establish.)

Seems like whatever “loop” is already running in tools like compose and swarm services could provide some sort of hook when it iterates to scale up and down. Otherwise, as you mentioned, the only way to do this is to bypass all the tooling and write your own scripts with your own loop.

I tried to do something similar last year for database clusters, but eventually abandoned using swarm services for containers that access local persistent data.

The way I was thinking was to limit scheduling to a max of 1 replica per swarm node. On each node, I would place a unique env file at a fixed location. The service would mount the directory containing the env file, and the database container would source that file to do its custom initialization. In my case, the mounted directory also contained the database/persistent data managed by the task.
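A rough sketch of that setup (the paths and names here are hypothetical):

# On each node, place a unique env file at /opt/mydb/node.env,
# then mount that directory into the service's tasks and source
# the file before starting the database process.
docker service create --name mydb \
  --mount type=bind,src=/opt/mydb,dst=/config \
  mydbimage sh -c '. /config/node.env && exec mydbd'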

There is no option yet to specify a max of 1 replica per node, but there are several workarounds, like using global mode with a constraint that limits the tasks to run only on 3 nodes labeled with the same value. Docker 1.13 also added the ability to tie up a host port for each task in the service, effectively preventing the swarm scheduler from placing more than 1 task on a node.
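The global-mode workaround might look like this (the node names and the label are placeholders):

# Label exactly the 3 nodes that should run a task
docker node update --label-add myapp=true node1
docker node update --label-add myapp=true node2
docker node update --label-add myapp=true node3

# Global mode runs at most one task per node; the constraint
# restricts scheduling to the labeled nodes only
docker service create --mode global \
  --constraint 'node.labels.myapp == true' \
  --name myapp myimage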

All of this was too ugly for me to deploy. I eventually just abandoned using swarm to schedule any container that stores its persistent data on the host it runs on. Docker swarm just isn’t mature enough for such services in my use case (a MySQL cluster service).


BTW, perhaps, in your use case, you could have the replica, on startup, poll consul for “unclaimed” unique ids and then update consul with the IP of the replica to “claim” the unique id for the task. Then the replica could do whatever it needs to based on the claimed unique id. Not sure if this would work for you, but I mention it as a possible direction.
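As a hypothetical claim loop using the check-and-set support of Consul’s KV store (the key names are made up):

# Each replica races to claim one of the ids 1..3 at startup.
# "-cas -modify-index=0" makes the put succeed only if the key
# does not exist yet, so exactly one replica wins each id.
for id in 1 2 3; do
  if consul kv put -cas -modify-index=0 "myapp/claims/$id" "$(hostname -i)"; then
    export UNIQUE_ID="$id"
    break
  fi
done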

Update: Nevermind, I see that you want the task to provide the unique id to consul via your 3rd party registrator. I was thinking it was the other way around… The registrator was providing the unique id to the task so it could do work based on that id.

Yeah, the use case is different.

The unique ID is consumed by 2 things:

  • the registrator container, via an ENV var inspected from the new container
  • the container itself

The container then consults consul to look up this information by that shared unique id.

Anyone else? Thoughts?

You might make use of “-e TASK_SLOT={{.Task.Slot}}”.

See https://github.com/Logimethods/smart-meter/blob/master/start-services_exec.sh#L249
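Applied to the names from the original question, that would be something like:

# Each replica's task slot (1, 2 or 3) becomes its UNIQUE_ID;
# a rescheduled replica reuses its slot, so the value is stable.
docker service create --replicas 3 --name myapp \
  -e "UNIQUE_ID={{.Task.Slot}}" myimage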

Regards.


I was looking for something similar… in my replicas, my entrypoint is gdbserver, and I wanted to use an environment variable to set the listening port, so that I could attach a debugger to each replica.
Any ideas how to do this sort of thing?
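One possible direction, as an untested sketch (the service name, binary path and port base are made up, and the image needs sh and gdbserver): pass the slot in via a template and compute the port inside the container:

# Each replica listens on a different port derived from its slot
# (2001, 2002, 2003). Reaching those ports from outside still needs
# host networking or per-node placement, since swarm cannot publish
# a different port per replica.
docker service create --replicas 3 --name dbg \
  -e "TASK_SLOT={{.Task.Slot}}" myimage \
  sh -c 'exec gdbserver ":$((2000 + TASK_SLOT))" /app/mybinary'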

This also works in docker-compose files. Here’s an example of how I did it with Visual Studio Team Services agents:

# sudo docker stack deploy -c vsts-agent.yml --prune vsts-agent
version: '3.4'
services:
  agent:
    image: microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard
    environment:
    - VSTS_ACCOUNT=trajano
    - VSTS_AGENT={{.Task.Name}}
    - VSTS_POOL=Default
    - TFS_HOST=trajano.visualstudio.com
    - VSTS_TOKEN=secret
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      replicas: 2

A list of template values is here.
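For reference, the documented placeholders include {{.Service.ID}}, {{.Service.Name}}, {{.Service.Labels}}, {{.Node.ID}}, {{.Node.Hostname}}, {{.Task.ID}}, {{.Task.Name}} and {{.Task.Slot}}; they can be used in environment variables, the hostname and mount values.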
