Docker on Linux (Ubuntu 18.04) Multi-Host

Hi, I'm running Docker ("Docker version 18.09.3, build 774a1f4") on Ubuntu 18.04 AMD64.

I have 3 computers running the exact same operating system (Ubuntu 18.04 AMD64), Docker, docker-machine, etc.

My objective is to create a multi-host swarm across this network cluster.

METHOD 1

If I log in to Linux, launch a terminal, and switch to the root user, I can run …

docker swarm init

This will display the IPs of the different interfaces that are available.

I can do …

docker swarm init --advertise-addr <my_public_ip_address>

I can then successfully connect to the swarm I created on my primary system.
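For reference, a rough sketch of how I connect the other computers (the address is a placeholder and the real token is whatever the join-token command prints):

# on the primary system, print the worker join command
docker swarm join-token worker

# on one of the other computers, paste what that printed, e.g.
docker swarm join --token <worker_join_token> <my_public_ip_address>:2377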

I run …

docker node ls

& I can see the manager node along with the workers, etc.

However, the containers often drop the connection before I even get a chance to run anything on them. So I leave the swarm & delete the containers to attempt an alternative configuration.
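(For completeness, the cleanup between attempts is roughly this, assuming nothing else on the machines needs the leftover containers:)

# on each worker
docker swarm leave

# on the manager
docker swarm leave --force

# remove any leftover containers
docker rm -f $(docker ps -aq)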

METHOD 2

I create this on my primary system:

su

docker-machine create manager1

docker-machine create worker1

docker-machine create worker2

docker-machine ip manager1

192.168.99.133
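(I'm letting docker-machine pick its default driver here; on my setup that appears to be VirtualBox, which is why the machines end up with 192.168.99.x host-only addresses. Being explicit about it would look like this:)

docker-machine create --driver virtualbox manager1
docker-machine create --driver virtualbox worker1
docker-machine create --driver virtualbox worker2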

I ssh into manager1 to initiate the swarm by running …

docker-machine ssh manager1

docker swarm init

The IPs for the interfaces are displayed again.

This time I only get internal IPs & can't set the swarm up to use my public IP, which I'd need in order to connect workers housed on a different system to the primary system.

eth0 & eth1 are not reachable from the other machines, so when I attempt to use "docker swarm join-token manager" or "docker swarm join-token worker", it's impossible for me to connect to the primary system where manager1 is set up.

I set up the swarm with the detected manager1 IP:

docker swarm init --advertise-addr 192.168.99.133

The token is generated & it works fine for connecting worker containers on the same Docker instance (host system), but I want to connect worker containers housed on different systems running Docker too.
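(The join command below is simply what running this on manager1 prints out:)

docker swarm join-token worker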

docker swarm join --token SWMTKN-1-54jrpdmzwn9vxiujptnv0ip8hmqp4h2b67czwsqgn1n3dlnqms-eutquwt5rt4wd3gs7tapiub2x 192.168.99.133:2377

exit

docker-machine ssh worker1

docker swarm join --token SWMTKN-1-54jrpdmzwn9vxiujptnv0ip8hmqp4h2b67czwsqgn1n3dlnqms-eutquwt5rt4wd3gs7tapiub2x 192.168.99.133:2377

#################

Questions

I'm new to Docker, so please take it easy on me. :P

  1. When I use Method 1, it allows me to use my external IP address to set up the initial manager container. Why doesn't it set the manager up as a "docker-machine"?

  2. Why does Docker set the manager container up as a node & require it to be listed under an auto-assigned name that can be seen with the "docker node ls" command?

The reason I ask is …

A. I can't provide my own label for it with the "--name" switch.

B. I can't add additional managers to it with the "docker swarm join-token manager" command, but I can add workers located across the multi-host setup.

C. I can't get the Docker containers to stay running stably for more than 2 minutes without them quitting on me or causing some sort of communication issue.

Method 1 allows me to use my external IP, which is exactly what I need to set up the Docker swarm across multiple hosts, but it's unstable. So I tried setting up a Docker swarm via Method 2, which is very stable, but there I need a public IP to be able to create the multi-host swarm.
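In other words, what I'm hoping to end up with is roughly this, where <public_ip> is an address the other computers can actually reach (and, if I understand the docs right, ports 2377/tcp, 7946/tcp+udp and 4789/udp need to be open between the hosts):

# on the primary system / manager1
docker swarm init --advertise-addr <public_ip>
docker swarm join-token worker

# on a worker that lives on a different computer
docker swarm join --token <worker_join_token> <public_ip>:2377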

Questions

After I set up manager1 with "docker-machine create manager1" & create a local swarm, how can I set up the manager1 container to allow me to link workers housed on other systems running Docker?

I can't do it with the internal IP provided by Docker, so how do I bind the manager1 container's internal IP in a way that lets me link the workers across the multi-host network?

Please provide simple instructions I can follow. I've already read the documentation, but I'm not getting it.

Thank you for your help, much appreciated!