How to identify the sequence number of a scaled container


I wonder if it is somehow possible, when scaling up a container with Compose, to distinguish the containers (from inside each container) by e.g. the sequence number that docker-compose adds to the end of the name.
This would be very useful because, in my case, I have to register the service inside the container with an external component, and the registration needs a "unique" number.

thanks in advance

The containers do have distinct container IDs and names.

But does docker-compose pass this into the container (e.g. as an environment variable)? Or how can I get this information from within the container?

Does the following command output the container info?
cat /proc/self/cgroup | grep -o -e "docker-.*\.scope" | head -n 1 | sed "s/docker-\(.*\)\.scope/\1/"
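For reference, here is a minimal sketch of what that pipeline is meant to extract, run against a sample cgroup line (the container ID is made up; on a real system the input would come from /proc/self/cgroup):

```shell
# Sample cgroup line with a hypothetical container ID; inside a real
# container this line would come from /proc/self/cgroup.
line="12:devices:/system.slice/docker-3f2a9c1b7d6e.scope"

# Extract the "docker-<id>.scope" token, then strip the wrapper,
# leaving just the container ID.
echo "$line" | grep -o -e "docker-.*\.scope" | sed "s/docker-\(.*\)\.scope/\1/"
```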

The hostname is set to the container id.

echo ${HOSTNAME}

The output of the command is empty.
The output of ${HOSTNAME} gives me a more or less unique ID, but my use case is different. I'm really more interested in the sequence number of the container name.

The reason for that is the following: Outside the container I have configuration and work folders

When the container is scaled up, each container must use a different config and work folder. In that example, I can scale up until I have three containers of that kind.

Or do you have another idea how the container can access the corresponding folders?

How are those sequence numbers generated in the first place?

If you mean how the sequence numbers of the folders are created: manually.

So when you spool up a new container just map the correct work and config folders to the container volumes.
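As a sketch of that manual approach (the folder names and the teamcity_agent image name are taken from later in this thread; the loop only prints the commands rather than running them):

```shell
#!/bin/sh
# Dry run: print one "docker run" per instance, each mounting its own
# config_<n>/work_<n> folder. Remove the leading "echo" to actually
# start the containers.
for n in 1 2 3; do
  echo docker run -d --name "agent_${n}" \
    -v "$PWD/config_${n}:/teamcity/config" \
    -v "$PWD/work_${n}:/teamcity/work" \
    teamcity_agent:1.4
done
```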

Maybe it would be helpful if you posted your yml file.

Here is the YAML

version: '2'
services:
    agent:
        build: agent_image
        image: teamcity_agent:1.4
        environment:
            AGENT_NUMBER: "${AGENT_NUMBER}"
        restart: always

I have a shell script, which just exports that number for me:
#!/bin/bash
export AGENT_NUMBER=$2
docker-compose "$@"

You don't have any volume statements in your compose file. Also, your environment statement is wrong. Maybe something like this:

version: '2'
services:
    agent:
        build: agent_image
        image: teamcity_agent:1.4
        environment:
            - AGENT_NUMBER
        volumes:
            - ./config_${AGENT_NUMBER}:/teamcity/config
            - ./work_${AGENT_NUMBER}:/teamcity/work
        restart: always

Where you have the config_1, config_2, work_1, work_2, etc. folders in the same place as the yml file. Adjust the paths on the left side of the volume statements if this is not the case, and adjust the right side to put them in the right place inside the container.

One extra thing though: does the work folder really need to be outside the container? Does it need to be persisted across container runs?

The environment variable works that way; I use it, so it can't be completely wrong.
Yes, you are right, I didn't add the volumes to this sample.
No, the work folder doesn't need to be a volume; I already changed that one. Just the config folder depends on the variable.

But the problem still remains: I can't use docker-compose scale 2 because the environment variable always has the same value, but I need distinct values for each scaled container (agent_1 == 1, agent_2 == 2, agent_3 == 3).
In my view, this has to be provided by docker-compose, otherwise there is no way to use the scale feature.
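One hedged workaround sketch: on a compose v2 network, containers can usually resolve each other by the generated name &lt;project&gt;_&lt;service&gt;_&lt;n&gt;, so a container can probe those names and compare each address with its own. The project name "myproject" and service name "agent" below are assumptions, not from the original setup:

```shell
#!/bin/sh
# find_index prints the scale index whose generated name resolves to the
# given IP, probing myproject_agent_1, myproject_agent_2, ... until a
# name no longer resolves. Inside the container, call it as:
#   find_index "$(hostname -i | cut -d' ' -f1)"
find_index() {
  my_ip=$1
  n=1
  while :; do
    ip=$(getent hosts "myproject_agent_${n}" 2>/dev/null | cut -d' ' -f1)
    [ -n "$ip" ] || return 1          # name does not resolve: no such instance
    if [ "$ip" = "$my_ip" ]; then
      echo "$n"                       # this is our sequence number
      return 0
    fi
    n=$((n + 1))
  done
}
```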

Ah ok, I wasn’t aware of the scale parameter. Interesting.

The way I see it, you shouldn't be using different configs with scale. Scale seems intended to let you handle larger loads; if your containers need different configs, they should be run separately.

In your case, I would suggest breaking out the interaction with the external service into its own containerised app that acts as a proxy. Your scaled app containers then talk to your proxy, which talks to the external service using a single registration.



Thanks for the suggestion. That seems reasonable.


I'm new to the docker world and am migrating a Spring Boot application to run in a docker container. This application has a singleton task that performs these steps:

  1. Updates task status as active in db (to prevent other instances from running the same task)
  2. Executes long running business logic (could take up 20-30 min)
  3. At the end of business processing, updates the task status to complete.

It is possible for this task to crash (e.g. the process is killed, or a power failure occurs) while executing the business logic, leaving a stale task status entry behind. During restart, the application checks for stale task status entries associated with this server instance and applies the required recovery logic so subsequent task triggers can run.

In the non-container world, we assigned a unique identifier via a JVM arg to each server instance, so a task status entry could be associated with a server instance to control singleton task execution and restart/recovery.

In the docker world, scale is used to create multiple instances. Is there a way for an instance to query the instance ID assigned to it? If not, is there a way to pass a unique identifier during startup?
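If you start the instances yourself instead of relying on scale, one option (a sketch with made-up names, by analogy with the JVM arg used before) is to pass the identifier in explicitly; the loop below only prints the commands:

```shell
#!/bin/sh
# Dry run: print one "docker run" per instance, passing a unique index
# via an environment variable the app can read. INSTANCE_INDEX and
# myapp:latest are illustrative names, not from the original setup.
for n in 1 2; do
  echo docker run -d --name "myapp_${n}" -e "INSTANCE_INDEX=${n}" myapp:latest
done
```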

Pivotal Cloud Foundry assigns a unique instance identifier to the environment variable CF_INSTANCE_INDEX that applications can use for this purpose, and I'm wondering if there is anything similar that docker provides.

Any help/insight would be much appreciated.

Best Regards,

Unfortunately, this is only available for swarm deployments:

I use swarm + multi-host networking to scale out services (Druid), which are discovered via Zookeeper (the swarm discovery backend in this example is actually Consul). Now, if I do not explicitly define a hostname, the service advertises a hostname that is unique but cannot be resolved from other containers. If the container knew which instance it was at startup (i.e. the network name servicename_1), it could advertise a name that can be resolved. (Perhaps this is not an issue in single-host deployments, or it is a regression in a swarm environment.) Alternatively, I could have it advertise the Docker server IP, which would work for my current use case and is probably my workaround for now (this would be the output of hostname -i | cut -d' ' -f1). Though that depends on the service having a port published on a docker host, which is not ideal; there are really no consumers outside the docker network that need the service, and I'd rather shield it from the outside world.