Docker Swarm Service DNS Names

Hi,
I have a Docker Swarm stack that consists of a frontend service, a MongoDB replica set with 3 replicas (for HA reasons), and a service that runs once on deploy and initializes the replica set.
For the MongoDB replica set I need to specify the DNS names of the other MongoDB nodes so that they can communicate with each other. Simplified, my docker-compose file looks like this:

version: '3.8'
services:
  rocketchat:
    image: Frontend
    environment:
      - MONGO_URL=mongodb://mongo1:27017,mongo2:27017,mongo3:27017/dbname?replicaSet=rs0&readPreference=primaryPreferred&w=majority
      - MONGO_OPLOG_URL=mongodb://mongo1:27017,mongo2:27017,mongo3:27017/local?replicaSet=rs0&readPreference=primaryPreferred

  mongo1:
    image: mongo
    command: mongod --smallfiles --oplogSize 128 --replSet rs0

  mongo2:
    image: mongo
    command: mongod --smallfiles --oplogSize 128 --replSet rs0

  mongo3:
    image: mongo
    command: mongod --smallfiles --oplogSize 128 --replSet rs0

  init-replica1:
    image: mongo
    deploy:
      restart_policy:
        condition: on-failure
    command: 'bash -c "for i in `seq 1 30`; do mongo mongo1/dbname --eval \"rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''mongo1:27017'' },  { _id: 1, host: ''mongo2:27017'' }, { _id: 2, host: ''mongo3:27017'' }  ]})\" && s=$$? && break || s=$$?; echo \"Tried $$i times. Waiting 5 secs...\"; sleep 5; done; (exit $$s)"'

This works fine, but when I need to update my MongoDB containers (version update, etc.), all of them restart at once when I run:

docker stack deploy my-stack

The other option, to my knowledge, is to have only one mongo service with three replicas so that I can specify update_config, but then I don't have a DNS name for each replica (here mongo1, mongo2, mongo3) and my replica set fails:

version: '3.8'
services:
  rocketchat:
    image: Frontend
    environment:
      - MONGO_URL=mongodb://mongo1:27017,mongo2:27017,mongo3:27017/dbname?replicaSet=rs0&readPreference=primaryPreferred&w=majority
      - MONGO_OPLOG_URL=mongodb://mongo1:27017,mongo2:27017,mongo3:27017/local?replicaSet=rs0&readPreference=primaryPreferred

  mongo1:
    image: mongo
    command: mongod --smallfiles --oplogSize 128 --replSet rs0
    deploy:
      replicas: 3
      update_config:
        delay: 1m
        failure_action: rollback

  init-replica1:
    image: mongo
    deploy:
      restart_policy:
        condition: on-failure
    command: 'bash -c "for i in `seq 1 30`; do mongo mongo1/dbname --eval \"rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''mongo1:27017'' },  { _id: 1, host: ''mongo2:27017'' }, { _id: 2, host: ''mongo3:27017'' }  ]})\" && s=$$? && break || s=$$?; echo \"Tried $$i times. Waiting 5 secs...\"; sleep 5; done; (exit $$s)"'
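For context, as far as I understand swarm's built-in DNS, the single replicated service would only give me these names on the overlay network (rough sketch; the check below is just an example, run from any container attached to the same network):

# mongo1        resolves to the service VIP (load-balanced across the 3 tasks)
# tasks.mongo1  resolves to one A record per running task (DNS round-robin)
$ getent hosts tasks.mongo1

There is no stable per-replica name like mongo2 or mongo3 that I could put into the replica set config.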

Does anyone have an idea how to configure this so that only one DB container at a time is down when I have an update for my MongoDBs?

Use the --publish flag to publish a port when you create a service. target is used to specify the port inside the container, and published is used to specify the port to bind on the routing mesh. If you leave off the published port, a random high-numbered port is bound for each service task. You need to inspect the task to determine the port.

$ docker service create \
  --name <SERVICE-NAME> \
  --publish published=<PUBLISHED-PORT>,target=<CONTAINER-PORT> \
  <IMAGE>

Note: The older form of this syntax is a colon-separated string, where the published port is first and the target port is second, such as -p 8080:80. The new syntax is preferred because it is easier to read and allows more flexibility.

The <PUBLISHED-PORT> is the port where the swarm makes the service available. If you omit it, a random high-numbered port is bound. The <CONTAINER-PORT> is the port where the container listens. This parameter is required.

For example, the following command publishes port 80 in the nginx container to port 8080 for any node in the swarm:

$ docker service create \
  --name my-web \
  --publish published=8080,target=80 \
  --replicas 2 \
  nginx
When you access port 8080 on any node, Docker routes your request to an active container. On the swarm nodes themselves, port 8080 may not actually be bound, but the routing mesh knows how to route the traffic and prevents any port conflicts from happening.

The routing mesh listens on the published port for any IP address assigned to the node. For externally routable IP addresses, the port is available from outside the host. For all other IP addresses the access is only available from within the host.
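For completeness, the same publish settings can also be written in the long form of the ports section of a compose file (a minimal sketch only; the service name, image and ports here are just examples, compose file format 3.2 or later):

version: '3.8'
services:
  my-web:
    image: nginx
    ports:
      - target: 80        # port inside the container
        published: 8080   # port on the routing mesh
        protocol: tcp
        mode: ingress     # routing mesh (default); "host" binds the port directly on the node
    deploy:
      replicas: 2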

Thanks for your fast reply 🙂
This is not a problem with exposed ports; the first example I wrote works. My problem is that when updating the service I am unable to do a rolling update, because you can only specify update_config at the service level, not at the stack level.
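For illustration, the only per-service handle I currently see is updating each mongo service by hand, roughly like this (stack name and image tag are just placeholders):

# updates one service at a time, so only one replica set member is down at once
$ docker service update --image mongo:4.4 my-stack_mongo1
$ docker service update --image mongo:4.4 my-stack_mongo2
$ docker service update --image mongo:4.4 my-stack_mongo3

But I would prefer a setup where docker stack deploy handles this for me.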
The routing mesh isn't the answer I need, because the MongoDB nodes need to know their exact replicas, not one randomly selected by the routing mesh. The MongoDBs replicate their status among each other and elect a primary, so they need the exact DNS name/IP in their config. What needs to be in the MongoDB replica set config looks like this:

"members" : [
  {
    "_id" : 0,
    "name" : "mongo1:27017",
    "stateStr" : "SECONDARY",
  },
  {
    "_id" : 1,
    "name" : "mongo2:27017",
    "stateStr" : "PRIMARY",
  },
  {
    "_id" : 2,
    "name" : "mongo3:27017",
    "stateStr" : "SECONDARY",
  }
]