Can the Docker Swarm load balancer/network switch requests/traffic internally between similar service tasks?

I have a 3-node Elasticsearch cluster running as 3 Docker services (1 task/container for each service).

The Docker Swarm is a 2-node cluster: 1 manager and 1 worker.

My java-app connects internally to the es1 node (a Docker container / service task) on transport port 9300, using that service's (es1) virtual IP (which would load-balance if this particular service had more replicas, say 2 or 3), and thereby gets connected to the 3-node ES cluster.

Likewise, all three services' tasks can be reached on transport port 9300.

This ES cluster and my java-app are deployed in the same overlay network, so all containers/tasks are reachable from one another.

So if the es1 service dies or I remove it, and a request from my java-app comes in for es1, would my java-app be able to connect to either of the two remaining services' tasks of the ES cluster on port 9300, given that all services (tasks/containers) can reach/ping each other in the same overlay network?

I have tried everything I could think of and haven't succeeded so far. I don't think it's possible.

But anyway, does the Docker Swarm network/load balancer have the capability to switch a request to one of the similar ES services, all reachable internally on port 9300 in the same overlay network?

If yes, what exactly needs to be included in the compose file, or which network should I choose?

A quick confirmation would be appreciated.

Here’s my ES cluster docker-compose file

version: '3.7'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
   
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
   
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic

volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

networks:
  elastic:
    driver: overlay
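
For reference, if the java-app were declared in the same stack file, it would be an extra entry under services:, roughly like this (the image name and environment variables below are placeholders, not my actual config):

  java-app:
    image: my-java-app:latest        # placeholder image
    environment:
      - ES_HOST=es01                 # currently pinned to the es01 service name/VIP
      - ES_PORT=9300
    networks:
      - elastic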



Hello devopssysadmin,

Docker internally switches requests between the available replicas of a specific service in a swarm cluster.
However, in your docker-compose (stack) file, as I read it, you have declared the same service three times. I assume this is because of your constraints, but the whole idea of a swarm cluster is to let it manage the load balancing of each service.
In my opinion, it is better to declare your service once with some constraints, and to update your nodes accordingly by adding labels and so on.
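Something along these lines, as an untested sketch (the node.labels.es label is my own example; you would set it with docker node update --label-add es=true <node-name>, and the Elasticsearch discovery settings are left out here because they are a separate concern):

version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    environment:
      - cluster.name=es-docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    deploy:
      replicas: 3                    # swarm spreads and load-balances the 3 tasks
      placement:
        constraints:
          - node.labels.es == true   # only run on nodes you labelled for ES
    networks:
      - elastic

networks:
  elastic:
    driver: overlay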

Best Regards,
Fouscou

Hi fouscou,
No, those are three separate services, es1, es2 & es3, and they actually form an ES cluster in my two-node Docker Swarm (manager & worker) cluster.

My question was: if a request coming from my java-app is addressed to, say, es1, and the es1 service is removed, can Docker Swarm switch that request to either es2 or es3? (From your reply I think it's not possible to send the request to es2 or es3, as the request was originally for es1. Correct me if I'm wrong.)

Secondly, is there a way to make my java-app see the three available services es1, es2 & es3 (internally within Docker Swarm) and pick the first available one of the three, using Docker only and not an external load balancer like HAProxy?

Thanks & Regards,
devopssysadmin

Even though using replicas is a clean approach for clustered services, it is not always an applicable solution.

You can make individual services (like the ones above) act like a cluster if the instances use an identical network alias. Then use that network alias to address Elasticsearch.
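
Roughly like this, as an untested sketch (the alias name elasticsearch is just an example; the network name elastic matches the stack above) — each service declares the same alias on the shared overlay network:

services:
  es01:
    # ... rest of the es01 definition unchanged ...
    networks:
      elastic:
        aliases:
          - elasticsearch      # identical alias on all three services
  es02:
    # ... unchanged ...
    networks:
      elastic:
        aliases:
          - elasticsearch
  es03:
    # ... unchanged ...
    networks:
      elastic:
        aliases:
          - elasticsearch

Your java-app would then connect to elasticsearch:9300 instead of es01:9300; Docker's embedded DNS should resolve the alias to whichever of the three services is still attached to the network, so removing es01 does not make the name stop resolving.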