Docker Community Forums

Share and learn in the Docker community.

Docker swarm mode service discovery in isolated environment

docker
dns
swarm

(Lvastakhov) #1

I’m trying to create a stack in Docker swarm mode. I use a compose.yml to create the stack of services:

version: '3'

services:
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
        ZOO_MY_ID: 1
        ZOO_PORT: 2181
        ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - zoo-swarm:/data
      - zoo-swarm:/datalog

  kafka1:
    image: confluentinc/cp-kafka:4.0.0
    hostname: kafka1
    ports:
      - "9092:9092"
    environment:
      # add the entry "127.0.0.1    kafka1" to your /etc/hosts file
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka1:9092"
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - kafka1-swarm:/var/lib/kafka/data
    depends_on:
      - zoo1

  kafka2:
    image: confluentinc/cp-kafka:4.0.0
    hostname: kafka2
    ports:
      - "9093:9093"
    environment:
      # add the entry "127.0.0.1    kafka2" to your /etc/hosts file
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka2:9093"
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 2
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - kafka2-swarm:/var/lib/kafka/data
    depends_on:
      - zoo1


  kafka3:
    image: confluentinc/cp-kafka:4.0.0
    hostname: kafka3
    ports:
      - "9094:9094"
    environment:
      # add the entry "127.0.0.1    kafka3" to your /etc/hosts file
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka3:9094"
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 3
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - kafka3-swarm:/var/lib/kafka/data
    depends_on:
      - zoo1


volumes:
  zoo-swarm:
  kafka1-swarm:
  kafka2-swarm:
  kafka3-swarm:

It works great if I have an internet connection. But in an isolated environment it does not: the Kafka service cannot connect and times out. It seems like the internal service discovery in swarm mode uses Docker Hub as a service registry. Am I right? Is there any option to run swarm mode in an isolated environment with a private registry and no internet connection?

docker version info:

Client:
Version: 1.13.1
API version: 1.26
Go version: go1.7.4
Git commit: 092cba3
Built: Thu Sep 7 17:09:45 2017
OS/Arch: linux/amd64

Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Go version: go1.7.4
Git commit: 092cba3
Built: Thu Sep 7 17:09:45 2017
OS/Arch: linux/amd64
Experimental: false

Thanks for your answer.


(Gand) #2

What do you mean by “Kafka service can not connect with timeout”?
“It works great if I have an internet connection”: do you pull the cable from the servers, or how exactly are you shutting down the internet connection?
By “private repository” do you mean one hosted on Docker Hub? That requires an internet connection. You should look into deploying the Docker Registry container instead.

No, the internal service discovery does not require an internet connection.

Here’s what I’d try:
Go into one of the containers. For example, get the ID of kafka3 with docker ps and then run: docker exec -it <ID> nslookup zoo1
If it responds with an IP then the service discovery/DNS is working.
If the executable nslookup isn’t available in your container, try ping.
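The steps above can be sketched as a small script. The stack name "kafka" and the resulting container-name prefix "kafka_kafka3" are assumptions here; adjust the filter to whatever name you deployed the stack under:

```shell
# Service-discovery sanity check from inside a running kafka container.
# ASSUMPTION: the stack was deployed as "kafka", so the kafka3 task's
# container name starts with "kafka_kafka3".
SERVICE=kafka_kafka3
TARGET=zoo1

# Grab the first matching container ID on this node (empty if none is running).
CID=$(docker ps --filter "name=$SERVICE" --format '{{.ID}}' 2>/dev/null | head -n 1)

if [ -n "$CID" ]; then
    # If this prints an IP, swarm service discovery / DNS is working:
    docker exec -it "$CID" nslookup "$TARGET"
    # Fallback if nslookup is not available in the image:
    docker exec -it "$CID" ping -c 3 "$TARGET"
else
    echo "no running $SERVICE container found on this node"
fi
```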

You could also try creating a new overlay network that’s local to just the Kafka environment. Note that services on it won’t be reachable from the outside unless their ports are published.
I’ve modified your compose file to create a new overlay network and connect all the services to it. See below:

version: '3'

services:
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
        ZOO_MY_ID: 1
        ZOO_PORT: 2181
        ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - zoo-swarm:/data
      - zoo-swarm:/datalog
    networks:
      - backend

  kafka1:
    image: confluentinc/cp-kafka:4.0.0
    hostname: kafka1
    ports:
      - "9092:9092"
    environment:
      # add the entry "127.0.0.1    kafka1" to your /etc/hosts file
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka1:9092"
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - kafka1-swarm:/var/lib/kafka/data
    networks:
      - backend
    depends_on:
      - zoo1

  kafka2:
    image: confluentinc/cp-kafka:4.0.0
    hostname: kafka2
    ports:
      - "9093:9093"
    environment:
      # add the entry "127.0.0.1    kafka2" to your /etc/hosts file
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka2:9093"
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 2
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - kafka2-swarm:/var/lib/kafka/data
    networks:
      - backend
    depends_on:
      - zoo1


  kafka3:
    image: confluentinc/cp-kafka:4.0.0
    hostname: kafka3
    ports:
      - "9094:9094"
    environment:
      # add the entry "127.0.0.1    kafka3" to your /etc/hosts file
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka3:9094"
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 3
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - kafka3-swarm:/var/lib/kafka/data
    networks:
      - backend
    depends_on:
      - zoo1


volumes:
  zoo-swarm:
  kafka1-swarm:
  kafka2-swarm:
  kafka3-swarm:

networks:
  backend:
    driver: overlay
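For completeness, the file above is deployed with docker stack deploy. The file name compose.yml and the stack name "kafka" are just examples here:

```shell
# Deploy (or update) the stack from the compose file above.
# ASSUMPTION: the file is saved as compose.yml; "kafka" is an example stack name.
STACK=kafka
COMPOSE_FILE=compose.yml

docker stack deploy -c "$COMPOSE_FILE" "$STACK" 2>/dev/null \
    || echo "docker stack deploy failed (is this node a swarm manager?)"

# Afterwards, verify that the overlay network was actually created:
docker network ls --filter driver=overlay 2>/dev/null || true
```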

(Lvastakhov) #3

Thanks for your answer, gand.
No internet connection means I’m inside a private network with no internet access. I have my own private Docker registry with the Docker images there. I’ve tried your version of the compose file, but with no luck. Still the same problem:

[2018-02-02 12:51:09,889] INFO Initiating client connection, connectString=zoo1:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@2f7298b (org.apache.zookeeper.ZooKeeper)
[2018-02-02 12:51:09,901] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2018-02-02 12:51:15,902] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-02-02 12:51:19,912] INFO Opening socket connection to server 10.0.2.2/10.0.2.2:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-02-02 12:51:19,954] INFO Socket connection established to 10.0.2.2/10.0.2.2:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-02-02 12:51:19,989] INFO Session establishment complete on server 10.0.2.2/10.0.2.2:2181, sessionid = 0x1615691a4800001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-02-02 12:51:20,000] INFO Session: 0x1615691a4800001 closed (org.apache.zookeeper.ZooKeeper)
[2018-02-02 12:51:20,001] INFO EventThread shut down for session: 0x1615691a4800001 (org.apache.zookeeper.ClientCnxn)
[2018-02-02 12:51:20,002] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server 'zoo1:2181' with timeout of 6000 ms
	at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1233)
	at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
	at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
	at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:115)
	at kafka.utils.ZkUtils$.withMetrics(ZkUtils.scala:92)
	at kafka.server.KafkaServer.initZk(KafkaServer.scala:346)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:194)
	at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:112)
	at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:58)
[2018-02-02 12:51:20,003] INFO shutting down (kafka.server.KafkaServer)
[2018-02-02 12:51:20,006] INFO shut down completed (kafka.server.KafkaServer)
[2018-02-02 12:51:20,007] INFO shutting down (kafka.server.KafkaServer)

Kafka can’t connect to ZooKeeper. ZooKeeper is started up and accepts connections, but it immediately closes the established sessions. I don’t know why. If I add another network interface with an internet connection, everything works fine. What could be the problem?
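One detail worth checking in the log above: Kafka reaches ZooKeeper at 10.0.2.2, which looks like the swarm service’s virtual IP (VIP) rather than the container’s own address. Swarm also publishes a tasks.&lt;service&gt; DNS name that resolves to the individual task IPs, so comparing the two from inside a Kafka container can show whether the VIP path is the part that misbehaves. This is a troubleshooting sketch, not a confirmed fix:

```shell
# Run these inside one of the kafka containers (docker exec -it <ID> sh).
SVC=zoo1

# The service name resolves to the VIP (10.0.2.2 in the log above):
nslookup "$SVC" 2>/dev/null        || echo "$SVC did not resolve here"

# tasks.<service> resolves to the actual task/container IP(s):
nslookup "tasks.$SVC" 2>/dev/null  || echo "tasks.$SVC did not resolve here"

# If tasks.zoo1 resolves but connections through the zoo1 VIP stall,
# pointing Kafka at the task name is a quick experiment:
#   KAFKA_ZOOKEEPER_CONNECT: "tasks.zoo1:2181"
```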