I have two nodes in different AWS regions.
Node1: public IP 220.127.116.11, private IP 172.31.4.2. I installed the UCP server (controller) on it, listening on port 8443.
Node2: public IP 18.104.22.168, private IP 172.31.40.10.
Node1 was automatically added as a UCP node during the install.
I then tried to join Node2 to the UCP controller (Node1) with the command below:
docker run --rm -ti --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp join --url https://22.214.171.124:8443 --san 126.96.36.199 --host-address 188.8.131.52 --interactive
But it failed: the second node never appears in the UCP dashboard. Checking the Docker container logs on Node2, I see:
time="2016-03-05T09:53:26Z" level=info msg="Registering on the discovery service every 1m0s..." addr="184.108.40.206:12376" discovery="etcd://172.31.4.2:12379"
time="2016-03-05T09:53:29Z" level=error msg="client: etcd cluster is unavailable or misconfigured"
root@ip-172-31-40-10:/etc/apt/sources.list.d# telnet 220.127.116.11 12376
Trying 18.104.22.168...
Connected to 22.214.171.124.
Escape character is '^]'.
^CConnection closed by foreign host.
root@ip-172-31-40-10:/etc/apt/sources.list.d# telnet 126.96.36.199 12376
Trying 188.8.131.52...
Connected to 184.108.40.206.
Escape character is '^]'.
^CConnection closed by foreign host.
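The same reachability checks can be scripted instead of using interactive telnet. A minimal sketch, assuming bash's `/dev/tcp` redirection and the coreutils `timeout` command are available (the `check_port` helper name is my own, not part of UCP):

```shell
#!/usr/bin/env bash
# check_port: report whether a TCP port on a host accepts connections.
check_port() {
  local host=$1 port=$2
  # ": </dev/tcp/host/port" opens a TCP connection and exits immediately;
  # timeout caps the attempt at 2 seconds for unroutable addresses.
  if timeout 2 bash -c ": </dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open: ${host}:${port}"
  else
    echo "closed: ${host}:${port}"
  fi
}

check_port 172.31.4.2 12379       # etcd discovery port, controller's private IP
check_port 220.127.116.11 12376   # swarm port, controller's public IP
```

Run from Node2, this shows the same split as the telnet session: the public IP is reachable, while the private discovery address times out.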
So the discovery address is 172.31.4.2, which is the UCP server's private IP; of course Node2, in another region, cannot connect to that private address.
Why does the UCP server advertise its private IP as the discovery address to other nodes, and how can I make it use the public IP instead?
My understanding was that I should be able to add any node that has internet access, but given this result, do nodes actually have to sit on the same network so they can reach each other directly over private IPs?
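For reference, this is what I think I would need to do: advertise the public IP at install time. This is an assumption on my part, based on `docker/ucp join` accepting `--host-address` and `--san`; I am assuming `docker/ucp install` honors the same flags in my UCP version, and I am using the public IPs from my setup above:

```shell
# On Node1 (ASSUMPTION: install accepts --host-address/--san like join does):
# advertise the PUBLIC IP so remote nodes get a routable discovery address.
docker run --rm -ti --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install \
  --host-address 220.127.116.11 \
  --san 220.127.116.11 \
  --interactive

# On Node2: join against the controller's public IP, as before.
docker run --rm -ti --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp join \
  --url https://220.127.116.11:8443 \
  --san 18.104.22.168 \
  --host-address 18.104.22.168 \
  --interactive
```

Is this the right approach, or is there a way to change the advertised discovery address without reinstalling?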