Docker Community Forums

Share and learn in the Docker community.

Multi BYON machine network on Docker Cloud


(Paulskinnerac) #1

I have two existing instances set up on AWS, on the same network. I want to split a single Docker Cloud stack across the two existing machines.

As far as I understand, this would be configured for me if I gave Docker Cloud access to set up the machines on AWS itself. But I need to use two existing AWS instances.

Is this possible? If so, how would I do this?

(Lizter) #2

Use Bring Your Own Node (BYON) to add your two existing machines to Docker Cloud.
Then tag them both with the same tag, e.g. “production”.

Then make one stackfile where you tag every service with “production” using the tags key.
If you want to distribute several containers of the same service evenly across both nodes, add deployment_strategy: high_availability

If you need exactly one container of a service on every node, use deployment_strategy: every_node; the deployment strategies are covered in the documentation.
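A minimal stackfile along those lines might look like this (image names are placeholders, not from the thread):

```yaml
# Hypothetical stackfile sketch: both services pinned to nodes tagged "production",
# with the web containers spread evenly across those nodes.
nginx:
  image: nginx
  ports:
    - "80:80"
  tags:
    - production
web:
  image: myorg/myapp          # placeholder image name
  target_num_containers: 2    # run two containers of this service
  deployment_strategy: high_availability
  tags:
    - production
```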

Hope this helps :slight_smile:

(Paulskinnerac) #3

Thanks for your response, Lizter.

I am able to do all of that. I have an nginx container fronting two instances of my app container. The web-1 container is on the same node as nginx, and the web-2 container is on a separate node. The issue is that nginx can resolve both web-1 and web-2, but it can only ping web-1 (the instance on the same node). It can’t ping the instance on the 2nd node.

I am using the default network settings, which I think is the bridge network. I even looked at it in Weave Scope, and the web-2 instance is not shown as being on the same network as the containers on the primary node (web-1 and nginx). I can see it in Weave Scope on the 2nd node, but not on the nginx node.

But if web-1 and web-2 are on the same node, then I can ping them from nginx.

(Lizter) #4

I see. Have you tried, in the stackfile under the nginx container, adding links: web-2 and then checking /etc/hosts in the nginx container to see if an entry appears? If so, you can reach it at http://web-2. I think this is what Weave is supposed to handle, which Docker Cloud deploys on every node. Otherwise you might have to wait for Docker 1.12 with its Swarm integration, etc.
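For reference, a links entry in the stackfile would look something like this (service names follow the thread; the image is a placeholder):

```yaml
# Sketch: link the nginx service to web-2 so Docker Cloud injects a host entry.
nginx:
  image: nginx
  links:
    - web-2    # should add a "web-2" line to /etc/hosts in the nginx container
web-2:
  image: myorg/myapp   # placeholder image name
```

You can then check with `cat /etc/hosts` inside the nginx container whether the entry appeared.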

(Paulskinnerac) #5

There is no entry for web-2. I have even tried making two different web services (with separate entries in the stack file, e.g. web1 and web2), and that didn’t work.

My understanding was the same as yours, i.e. that Weave would handle this. The docs say it creates an overlay network across your nodes, but I assume that only applies when Docker Cloud sets up the servers on AWS for you. I have tried that, and in that case I can ping both containers.

I have even tried setting it up manually on my BYON machines, but Docker Cloud just installs a Weave network container, not the executable version documented on the Weave site.

(Lizter) #6

I understand. I have no further suggestions regarding your setup :confused:

As it looks like we have a somewhat similar setup, I can however share a possible solution:
What we do in Azure is have a TCP load balancer in front that spans all the (physical) nodes in our Azure Availability Set. We then have one stackfile for the N nodes of one environment. Each node has its own nginx container (deployment_strategy: every_node), which in turn load balances between typically four instances of the app container and terminates SSL/TLS. The idea is to utilize some of each node’s cores by scaling the app containers horizontally, while getting failover across nodes by putting them in the Availability Set.
So basically, we have a dynamic configuration that listens for container start/stop events and updates the nginx config on the fly on every node separately. We configured nginx for this purpose ourselves, but there is an haproxy bundle for Docker Cloud that should do the same out of the box; it is searchable under the “new service” page in Docker Cloud.
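A rough sketch of that stackfile shape, under the assumptions above (one nginx per node, several app containers behind it; names and images are placeholders, not our actual config):

```yaml
# Sketch: per-node nginx terminating TLS, app containers scaled horizontally.
nginx:
  image: myorg/custom-nginx         # placeholder: nginx with dynamic reconfiguration
  deployment_strategy: every_node   # one nginx container on each node
  ports:
    - "443:443"                     # nginx terminates SSL/TLS
  links:
    - app
app:
  image: myorg/myapp                # placeholder app image
  target_num_containers: 4          # roughly one per core, scaled horizontally
  deployment_strategy: high_availability
```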

Maybe not the most hardcore optimal solution, but it works for us :slight_smile:

(Paulskinnerac) #7

Two weeks ago I saw a button in Docker Cloud to upgrade the version of Docker on my box.

Things are working much better. I can now run two containers on the same machine and load balance them with the HAProxy container! That gives me the chance to update code (nearly) on the fly: I stop one container, update it and restart it, then do the same with the second.
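For anyone following along, a stackfile using the dockercloud/haproxy image for this kind of setup might look roughly like this (the app image is a placeholder; `roles: global` lets the proxy watch linked containers via the Docker Cloud API):

```yaml
# Sketch: haproxy load balancing two containers of one linked service.
lb:
  image: dockercloud/haproxy
  links:
    - web
  ports:
    - "80:80"
  roles:
    - global    # lets haproxy reconfigure itself when linked containers change
web:
  image: myorg/myapp         # placeholder image
  target_num_containers: 2   # stop/update/restart one at a time for near-zero downtime
```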

By constantly reloading my website, I could see the occasional error during updates. But it’s a lot better than taking my 24/7 site down entirely to update the code.