Saving data for Elasticsearch in a 3-node swarm cluster

Hi there, I was hoping someone here might be able to give me some input on a problem I’m having.

I have a Docker Swarm cluster with 3 nodes and want to run the ELK stack, but I am not sure how to store my data. This is my current stack file:

version: '3'
services:
  master01:
    image: elasticsearch:5.2.2
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - es
    volumes:
      - /es/data:/usr/share/elasticsearch/data
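    # 3 master-eligible nodes -> quorum (minimum_master_nodes) is floor(3/2) + 1 = 2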
    command: >
      elasticsearch
      -E network.host=_eth0_
      -E node.master=true
      -E discovery.zen.ping.unicast.hosts=es_master01,es_master02,es_master03
      -E discovery.zen.minimum_master_nodes=2
      -E cluster.name=ElasticCluster
      -E node.name=es_master01
      -E transport.tcp.port=9300
      -E http.port=9200
      -E node.max_local_storage_nodes=3
    deploy:
      replicas: 1

  master02:
    image: elasticsearch:5.2.2
    ports:
      - 9201:9200
      - 9301:9300
    networks:
      - es
    volumes:
      - /es/data:/usr/share/elasticsearch/data
    command: >
      elasticsearch
      -E network.host=_eth0_
      -E node.master=true
      -E discovery.zen.ping.unicast.hosts=es_master01,es_master02,es_master03
      -E discovery.zen.minimum_master_nodes=2
      -E cluster.name=ElasticCluster
      -E node.name=es_master02
      -E transport.tcp.port=9300
      -E http.port=9200
      -E node.max_local_storage_nodes=3
    deploy:
      replicas: 1
      
  master03:
    image: elasticsearch:5.2.2
    ports:
      - 9202:9200
      - 9302:9300
    networks:
      - es
    volumes:
      - /es/data:/usr/share/elasticsearch/data
    command: >
      elasticsearch
      -E network.host=_eth0_
      -E node.master=true
      -E discovery.zen.ping.unicast.hosts=es_master01,es_master02,es_master03
      -E discovery.zen.minimum_master_nodes=2
      -E cluster.name=ElasticCluster
      -E node.name=es_master03
      -E transport.tcp.port=9300
      -E http.port=9200
      -E node.max_local_storage_nodes=3
    deploy:
      replicas: 1
      
  logstash:
    image: logstash:5.2.2
    ports:
      - 5000:5000
    networks:
      - es
    command: >
      logstash -e 'input { tcp { port => 5000 } } output { elasticsearch { hosts => "master01:9200" } }'
    deploy:
      replicas: 1
      
  kibana:
    image: kibana:5.2.2
    ports:
      - 5601:5601
    environment:
      SERVER_NAME: "kibana"
      SERVER_HOST: "0"
      ELASTICSEARCH_URL: "http://elastic:changeme@master01:9200"
      ELASTICSEARCH_USERNAME: "elastic"
      ELASTICSEARCH_PASSWORD: "changeme"
      XPACK_SECURITY_ENABLED: "true"
      XPACK_MONITORING_ENABLED: "true"
    networks:
      - es
    depends_on:
      - master01
    deploy:
      replicas: 1

networks:
  es:
    driver: overlay
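(The es_master01/02/03 names in discovery.zen.ping.unicast.hosts assume the stack is deployed under the name es, i.e. something like docker stack deploy -c docker-compose.yml es.)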

It actually works, apart from the fact that master01/02/03 are scheduled on arbitrary nodes and can be moved between the 3 nodes at any time. When a service is recreated on a different node it no longer finds its old data there, so Elasticsearch replicates all the shards over to the new node again.
Over time this means my data ends up existing three times.
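
You can watch this happen with docker service ps, which lists the current and previous tasks of a service along with the node each one ran on:

docker service ps es_master01    # the NODE column shows where each task was scheduled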

I haven’t been able to get placement constraints working properly to bind the 3 Elasticsearch services to one node each, and I can’t really find anything that works when searching.
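
From what I understand, the idea would be to label each swarm node and pin one master per label. A sketch of what I mean (the es_node label and the node1–node3 hostnames are placeholders):

# run on a manager node; use the hostnames shown by `docker node ls`
docker node update --label-add es_node=1 node1
docker node update --label-add es_node=2 node2
docker node update --label-add es_node=3 node3

and then, per service in the stack file:

  master01:
    # ...
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.es_node == 1   # master02 -> 2, master03 -> 3

Each master should then always be rescheduled onto the same node, find its existing /es/data bind mount, and not trigger a full re-replication.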

If anyone could give me some input on how to proceed, that would be very helpful.

I’ve been looking around and GlusterFS might be the way to go; I think it will let me have a shared/replicated volume across my 3 nodes. Found this guide, will post results tomorrow.
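
If it works out, I expect the setup to look roughly like this (a sketch, assuming GlusterFS is installed on all 3 nodes; node1–node3 and the brick path /gluster/es are placeholders):

# on node1: peer the other nodes and create a 3-way replicated volume
gluster peer probe node2
gluster peer probe node3
gluster volume create es-data replica 3 node1:/gluster/es node2:/gluster/es node3:/gluster/es
gluster volume start es-data

# on every node: mount the volume where the stack file expects the data
mount -t glusterfs localhost:/es-data /es/data

The stack file could then stay as it is: the /es/data bind mount would be backed by the same replicated volume on every node, and node.max_local_storage_nodes=3 should let the three Elasticsearch nodes share that path via their own nodes/<N> subdirectories.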