Load balancing in 1.12

What are people using as a load balancer for Swarm services?

Hi, I am trying nginx, and it seems to work fine with the default ingress network (1 instance of nginx and 2 instances of a web service); it routes requests to both instances in round-robin fashion. I'm using version 1.12-rc2.
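
For reference, the setup looks roughly like this (the network, service, and image names here are made up, not from my actual setup):

```sh
# One overlay network, two replicas of the web service,
# and a single nginx instance published on port 80.
docker network create --driver overlay appnet

docker service create --name web --network appnet --replicas 2 my-web-image

docker service create --name nginx-lb --network appnet --replicas 1 \
  --publish 80:80 my-nginx-image
```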

I was thinking of something similar, with round-robin DNS spreading the requests across all the cluster nodes.

How do you set up nginx to discover the nodes in the swarm to load-balance between?

I’m using Interlock: https://github.com/ehazlett/interlock with HAProxy (it also supports nginx). Interlock listens to events from the swarm cluster to register or remove containers from the LB as the containers start up or go down.

I’ve been experimenting with nginx as a front end for the built-in service scaling and round-robin DNS load-balancing in Docker 1.12.

Here’s a little demo, in case it’s useful to anybody:
http://statusq.org/archives/2016/07/03/7691/

Basically, in the nginx config I used the service name that was specified during `docker service create`. It looks like nginx can route requests to multiple instances (on multiple VMs) in round-robin fashion.
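
Something like this, assuming the service was created with `--name web` and listens on port 8080 (both are assumptions for illustration):

```nginx
# Sketch: proxy to the swarm service by the name given at service create.
server {
    listen 80;

    location / {
        # Swarm's internal DNS resolves "web"; the service VIP then
        # spreads the proxied requests across the replicas.
        proxy_pass http://web:8080;
    }
}
```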

Don’t swarm services in 1.12 already provide their own LB on each swarm worker/node?

I'm using my own load balancer called gobetween.
We implemented Docker / Docker Swarm API discovery and many other discovery types, including DNS, DNS SRV, JSON, and plaintext URL. There is also exec discovery, which gives you the ability to write your own script for any kind of discovery: the LB runs it, parses stdout, and checks that the script output is in the required format.
This LB reconfigures itself on the fly and also has an API and a stats API (as of 0.2). See the wiki for details.
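
For example, an exec-discovery script could look roughly like this (the exact output format gobetween expects is described in the wiki; one backend per line is only my assumption here):

```sh
#!/bin/sh
# Hypothetical exec-discovery script: gobetween runs it
# and parses stdout to build the backend list.
echo "10.0.0.11:8080 1"
echo "10.0.0.12:8080 1"
```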

Yes, you are correct, swarm mode provides a built-in load balancer. In my situation I had to add basic auth, so I used nginx for that and linked (I mean via service discovery) nginx with my application's container. It looks like behind the scenes it's swarm's load balancer routing the requests. However, with this setup I am facing some issues with rolling updates: after an update, requests go to only one instance. I have to stop and start both services to make it work as expected. Digging more into it.
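
The basic-auth part of the nginx config looks roughly like this (service name, port, and htpasswd path are placeholders):

```nginx
server {
    listen 80;

    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # Swarm's built-in LB (the service VIP) spreads the
        # proxied requests across the app's replicas.
        proxy_pass http://web:8080;
    }
}
```

One thing I'm checking as a possible factor in the rolling-update issue: nginx resolves the hostname in `proxy_pass` once, at config load, so a reload may be needed if the name stops pointing where it did.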

Yes, it does, but I was wondering how people are routing traffic to the swarm cluster itself. I am actually settling on a setup based around Consul.

I see. I was, and still am, very excited about swarm mode, as it's supposed to reduce a lot of setup. I am still trying it out patiently :slight_smile:

Swarm's internal DNS should only keep entries for healthy containers.

If a node goes down, it still keeps redirecting to the containers which were residing on that node.

To reproduce, you can set up two nodes, deploy two services, and scale each to 2 replicas (using the spread strategy). Bring one node down, log in to one of the service containers on the surviving node, and ping the other service multiple times. You will see that you can only ping one container; the other container will not respond to the ping, but it will stay in the DNS.
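
A sketch of that repro (names and IDs are made up):

```sh
# Two-node swarm; two services with two replicas each.
docker service create --name svc-a --network appnet --replicas 2 my-image
docker service create --name svc-b --network appnet --replicas 2 my-image

# Power off one node, then from a surviving svc-a container:
docker exec -it <svc-a-container-id> ping svc-b
# Only one container answers; the task that died with the node
# does not, yet it can still show up in swarm's DNS.
```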

I guess this is where we use Interlock + nginx, or consul-template + nginx + Registrator.

I am still unclear about this. Maybe somebody can help.

So, how could I implement a load balancer for my service created by Docker Swarm?
I mean… I have 3 VirtualBox VMs (created by docker-machine) and a swarm with 1 manager (on the host) and 3 workers (the VMs), with a service created by Docker Swarm running on it. This service acts as a CDN (the service's image is a modified version of nginx).
I've heard about using HAProxy or nginx as a load balancer, with an algorithm like round-robin, source-based, or least-connections, to redirect traffic to specific nodes (VMs) of the cluster. Am I right? Something like the sketch below is what I have in mind.
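For example (the IPs are just docker-machine-style placeholders for my three VMs, and 8080 stands in for the service's published port):

```nginx
upstream cdn_nodes {
    least_conn;    # or ip_hash for source-based; omit for round-robin
    server 192.168.99.101:8080;
    server 192.168.99.102:8080;
    server 192.168.99.103:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://cdn_nodes;
    }
}
```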
Can anyone help me with that?
Thanks.