Registering Docker 1.12 Swarm services in Consul

When I create a service, I want some way to register all node/nodePort combinations to consul (or some other service discovery tool) in order to use a tool like consul-template to generate HAProxy configuration to allow users to be routed to the correct service based on a hostname. Is there any way to accomplish this at the minute, or is there anything coming which will allow this to happen? It feels like I should be able to wrap the service lifecycle with a docker engine plugin in order to publish the services into consul.
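For reference, a sketch of what such a registration call could look like against Consul's HTTP agent API (the service name, address, and port below are placeholders, not values from any real setup):

```shell
# Hypothetical example: register one node/nodePort combination as a
# Consul service via the local agent's HTTP API. Name, address, and
# port are placeholders you'd fill from the swarm service's details.
curl -sf -X PUT http://localhost:8500/v1/agent/service/register \
  --data '{"Name": "web", "Address": "10.0.0.11", "Port": 30000, "Tags": ["swarm"]}'
```

A lifecycle plugin would essentially issue one of these per node/port pair on service create, and the corresponding deregister call on service removal.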

I'd had exactly this problem with PHP and nginx for ages.

But the most awesome part about 1.12 is that you don’t need to bother with service discovery anymore.

Now, when you create a service and give it a name, you can use just that name as a target hostname, and it is automatically resolved to the proper container IP of the service. Moreover, if the service has multiple replicas, requests are round-robin load-balanced.

This still works even if you didn't publish any ports when you created your services.

You still need to create an overlay network and use it for all of the services that you want to connect.

Creating a network

docker network create --driver overlay test

Creating services

docker service create --name php --replicas 20 --network test php
docker service create --name nginx --replicas 1 --network test -p 80:80 -p 443:443 nginx

And your nginx upstream would look something like:

upstream my_upstream {
  server php:9000 max_fails=3 fail_timeout=60 weight=1;
}

Thanks for the detailed response. Yeah, I can see how your solution would work for internal applications, but you still need some way for users to come in on ports 80/443 and reach the nginx container; all your upstream is doing here is proxying Docker's internal load balancing.

I want to be able to load balance across Docker nodes, so if you had three nodes A, B, and C running Swarm and a service published on nodePort 30000, I want to be able to create something which will have an upstream like this:

upstream my_upstream {
  server A:30000;
  server B:30000;
  server C:30000;
}

This way, as Swarm nodes go down or new ones are added, the pool can be updated accordingly for extra resilience.

Oh, right, just add port forwarding to the nginx service, then, so it’s reachable. And that’s it.

I updated my example accordingly.

You don’t need to define an upstream server per each node. The “php” in my example would be resolved by internal Docker DNS to the complete list of the IPs of all the running container tasks for this particular service, regardless of which node they are running on.
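If you want to see what the embedded DNS actually returns, a quick check from inside any container on the same overlay network (assuming `nslookup` is available in the image; 127.0.0.11 is Docker's embedded DNS server):

```shell
# Plain "php" resolves to the service's virtual IP by default;
# "tasks.php" returns one A record per running task, across all nodes.
nslookup php 127.0.0.11
nslookup tasks.php 127.0.0.11
```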

Is there a way to expose the swarm DNS outside the swarm, so that services could be discovered directly by external clients, without the need for a load balancer?

It's less the load balancing that's the problem than the high availability, hence my wanting to balance across all Swarm nodes. I want to be able to register a service with a target port of 80 and a nodePort of 30000, and, using a label with the DNS name, have an HAProxy entry for this service listening on port 80 added automatically.
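For the record, the HAProxy equivalent of the upstream above would be something along these lines (node names A/B/C as in the earlier example; `check` enables health checks so a downed node is dropped from rotation):

```haproxy
backend my_backend
    balance roundrobin
    server node-a A:30000 check
    server node-b B:30000 check
    server node-c C:30000 check
```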

It's less the load balancing that's the problem than the high availability, hence my wanting to balance across all Swarm nodes.

That could be a tiny bit different topic. For that you should modify your proxy service to be run similar to this:

docker service create --name nginx --mode global --network test -p 80:80 -p 443:443 nginx

Note the --mode global bit. This would ensure the service is always scheduled on all nodes of your cluster.

But there’s another way.

But there's another way: you could still get by with a regular haproxy service of, say, 3 replicas. Those would be spread around the cluster and automatically rescheduled somewhere else if a node goes down or the service stops for some reason. You would already have your high availability with 3 nodes. And the best part is that Docker now provides node-level load balancing as well: even if you have a single haproxy service listening on port 80 somewhere in your Swarm cluster, you can make a request to any node, and it will be routed to the haproxy container on whichever node it is running at the moment, as if you actually had it running globally.
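In other words, with the routing mesh you can point a request at any node's address and it lands on a running haproxy task (the node hostnames here are placeholders):

```shell
# Port 80 is answered on every swarm node, whether or not an haproxy
# task runs there; the routing mesh forwards to a node that has one.
curl -s http://node-a/
curl -s http://node-b/
```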

Plus you can play with docker service scale afterwards to fine-tune the stack, when and if needed.

Note: Be advised that if you use docker-machine, the node-level routing might not work for you.

I am seeing similar issues. I've tested Docker 1.12 and it looks pretty nice so far. The difference is that with Interlock you could start up containers with a DNS name and nginx would automatically be updated for you, so I could have a DNS name attached to the running instances. There would be no need to change the nginx config either.

Your first reply looks good. If I added a virtual host to the nginx config of the container, it would let me do something similar (since it forwards the requests to the correct PHP container). But if my Swarm cluster is big and I wanted to add another service, it would not work, since ports 80/443 would already be in use by the PHP application. I could use different ports, but you don't want web stuff running on non-standard ports for your clients.

I could change the nginx config to match the new service, but that is not automated. Also, if project "a" and project "b" are separate, it would not be handy for them to share the same nginx container/config.

So I think the first question still remains: start up a service with, say, an option (or label, or config entry) to add a virtual host, and have nginx/HAProxy/whatever automatically pick up that information and forward to the right published port of the Swarm cluster/nodes (something Interlock, consul-template, or Traefik could do).
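As a sketch of that last approach: consul-template can watch Consul and rewrite the proxy config on change (the template and output paths below are placeholders):

```shell
# Hypothetical consul-template invocation: render the nginx config from
# the Consul catalog and reload nginx whenever registered services change.
consul-template \
  -template "/etc/ctmpl/nginx.ctmpl:/etc/nginx/conf.d/services.conf:nginx -s reload"
```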

Hi there, I do it like this:

docker network create --driver overlay mynet
docker service create --network mynet --name redis redis
docker service create --network mynet --name web my-web

but when I exec into the my-web container:

   ➜  ~ docker exec -it f8c0f07ea3e3 sh
# ping redis
PING redis ( 56 data bytes
92 bytes from f8c0f07ea3e3 ( Destination Host Unreachable
92 bytes from f8c0f07ea3e3 ( Destination Host Unreachable
92 bytes from f8c0f07ea3e3 ( Destination Host Unreachable
92 bytes from f8c0f07ea3e3 ( Destination Host Unreachable

because in the redis container:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
68: eth0@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff
    inet scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe00:3/64 scope link
       valid_lft forever preferred_lft forever
70: eth1@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
    inet scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:3/64 scope link
       valid_lft forever preferred_lft forever

Now the IPs don't match, so it doesn't work.

more details

I ended up creating my own solution to this problem which seems to be working nicely so far:


Does that really resolve the original poster's problem? If I understood the original question correctly, you have a Swarm cluster with your services exposed on a port and reachable via all Swarm nodes.

But then there is also an "external" load balancer that clients would hit to really access the service. A load balancer from a cloud provider, your own one, nginx, haproxy, whatever… but most likely outside the Swarm cluster where the app/service runs.
In the old Docker way, you had the information about all containers and their ports available in a discovery service, and your "external" load balancers could retrieve that information and build/update their configuration.
Now, with Docker 1.12 and swarm mode, this is no longer possible, because you cannot retrieve that information (Docker host IPs and service ports) anymore.

With regards to the new swarm service implementation in 1.12, what actions does a 1.12 daemon take when you start the daemon with --cluster-store and --cluster-advertise set in DOCKER_OPTS?

Does this mean that a node can be managed both by a Docker Swarm and by the swarm service, using the standard Consul system, as separate swarms/clusters on top of each other?

As I understand it, there is no way to register pre-1.12 Docker hosts with the new swarm service, and if you would like to continue operating your existing Swarm controllers, you would need to hold off on using the swarm service on any new 1.12 hosts you bring up and continue to register the new nodes with e.g. Consul.

What is the expected interaction if a node is registered both with the swarm service and a legacy docker swarm? I expect consul will be aware of any containers launched by the swarm service that are running on the node.

Amazing. This is exactly what I was searching for.

I’ll try it.

Have you considered using "Registrator" running on each node, which updates Consul in real time about the status of the containers? It's quick and simple to set up.
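For anyone curious, Registrator is typically run once per node, roughly like this (the Consul address is a placeholder for your local agent):

```shell
# One Registrator container per node; it watches the local Docker socket
# and registers/deregisters containers in Consul as they start and stop.
docker run -d --name registrator --net host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest consul://localhost:8500
```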

So, how could I implement a load balancer for my service created by Docker Swarm?
I mean, I have 3 VirtualBox VMs (created by docker-machine) and a service with 1 manager (on the host) and 3 workers (the VMs) created by Docker Swarm. This service acts as a CDN (the service's image is a modified version of nginx).
I've heard about using HAProxy or nginx as a load balancer, with algorithms like round-robin, source-based, or least-connection, to redirect traffic to specific nodes (VMs) of the cluster. Am I right?
Can anyone help me with that?
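Roughly, yes. A minimal nginx front end for that kind of setup could look like the sketch below (the VM addresses are placeholders in the default docker-machine VirtualBox subnet; `least_conn` picks the least-loaded backend):

```nginx
# Hypothetical nginx load balancer in front of three swarm VMs.
upstream cdn_nodes {
    least_conn;
    server 192.168.99.101:80;
    server 192.168.99.102:80;
    server 192.168.99.103:80;
}
server {
    listen 80;
    location / {
        proxy_pass http://cdn_nodes;
    }
}
```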

Is this still needed considering the docker swarm in the engine now has a service discovery built in?

Unfortunately, this only works if you start the services before nginx; otherwise nginx will fail to start because it cannot look up the hostname "php". But mostly we deploy nginx first, and then services come and go over time. That makes it useless on its own, so we again need a service discovery tool.
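One workaround for that startup failure, for what it's worth: make nginx resolve the name at request time instead of at startup, using Docker's embedded DNS (127.0.0.11). A sketch:

```nginx
# With a variable in fastcgi_pass, nginx defers the DNS lookup until a
# request arrives, so it starts fine even if "php" doesn't exist yet.
resolver 127.0.0.11 valid=10s;
server {
    listen 80;
    location / {
        set $backend "php";
        fastcgi_pass $backend:9000;
        include fastcgi_params;
    }
}
```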

Unless you use round-robin DNS, which would lead to different problems (DNS caching), you actually set one IP for a domain name, and that IP would be a Swarm manager or Consul node. Which leads to the problem that we still need an active/passive cluster for the manager where the reverse proxy lives. You could expose Consul's DNS capability outbound, running Consul on multiple machines which are all registered as nameservers with the domain registrar. But if a Consul node fails and a client asks the nameserver for the IP bound to the failed node, the connection still fails. There's no solution for this.