I have been on and off with Docker, so my knowledge of Docker is a bit rusty…
According to the 1.12 release notes, Swarm is now baked into the engine itself (great!)
Swarm allows for intelligent re-routing of network traffic depending on the health of the docker nodes.
Question:
99% of enterprise web apps require an HTTP load balancer/reverse proxy. Is there still a need to run an HTTP reverse proxy (container) for load balancing and SSL termination in Docker/Swarm 1.12? … Or can Docker Swarm do the low-level port 80/443 routing itself? If so, is there a way to do “sticky IP load balancing” and “SSL termination”?
This would allow a cluster to be configured without resorting to NGINX or HAProxy or any of the cloud-proprietary load balancers (such as AWS’ Elastic Load Balancing or Linode’s NodeBalancers).
I would love more info on this very question. I’ve played around with a local swarm created via docker-machine. I’ve been able to resolve DNS for a specific service, but only from within another container (on the same network). What is the best way to expose these services without static host-to-container port mapping?
In 1.12, --publish of docker service will expose a “Swarm port” on all hosts for that service. This “Swarm port” is a port which gets NATed to a virtual IP address for each service “task” (container, in this case) using the Linux kernel’s built-in load-balancing functionality, IPVS. This is OSI Layer 4 load balancing, so it works directly at the TCP/UDP level, and you might notice that some stalwarts such as ping don’t work with it out of the box. However, IPVS is very fast.
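To see what that looks like for a given service (e.g., the appservers service created below), something like this should show the published Swarm port and the virtual IPs that IPVS balances across (I’m going from memory on the exact inspect fields, so treat it as a sketch):
$ docker service inspect --format '{{json .Endpoint.Ports}}' appservers
$ docker service inspect --format '{{json .Endpoint.VirtualIPs}}' appservers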
For this initial release, higher-level features such as HTTP load balancing and direct sticky-session support are unlikely to be included in the Docker daemon’s load-balancing duties (there are only so many hours in the day, and the maintainers are trying to ensure orchestrator stability first and foremost before adding new features). Even if requested, they are not necessarily going to be added, since the feature line has to be drawn somewhere. So you will still need to use your traditional L7 load balancers such as HAProxy for some higher-level configuration, at least for now – but bear in mind that the Docker maintainers are very concerned about quality of user experience and would consider proposals to expose higher-level functionality directly in Docker if desired. FWIW, in the Docker for AWS and Azure projects we are also working towards making easy, direct integration with those clouds’ LBs possible.
The nice thing about the newest stuff is that even if you have to run your own HAProxy Docker service for SSL termination, etc., configuring it can potentially be far simpler than before. If your HAProxy service instance is on an overlay network with the service(s) that it is directing requests to, you can simply point it at the DNS entry corresponding to that service, and the built-in IPVS load balancing will load-balance requests to that service’s component containers automatically. Therefore, instead of having to do something like generate the proxy configuration dynamically and reload the load-balancer daemon every single time an individual container changes or is added, Docker manages the lifecycle of the network and the container-to-container load balancing. This won’t help with every single use case, but it will help with some.
$ docker service create --name appservers --publish 30000:80 nginx
9ji18cicp7da3nenc2l9kwtb5
$ docker service update --replicas 3 appservers
appservers
$ docker service ls
ID            NAME        REPLICAS  IMAGE  COMMAND
9ji18cicp7da  appservers  3/3       nginx
$ curl -s localhost:30000 | head -n 10
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
acefd9e76c42        bridge              bridge              local
850cb0af5970        docker_gwbridge     bridge              local
7f9f7d593d27        host                host                local
4ni70d51m4c7        ingress             overlay             swarm
44b8bc013c4c        none                null                local
Note the ingress network in the output above. That is what Docker creates automatically in Swarm mode to provide this ingress load-balancing functionality.
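For the overlay-network approach I mentioned above, the HAProxy side could be as minimal as the following sketch (the service name appservers, the certificate path, and the ports are just illustrative assumptions, not a recommended production config):

frontend https-in
    mode http
    bind *:443 ssl crt /etc/ssl/private/example.pem
    default_backend app

backend app
    mode http
    # appservers resolves via Docker's embedded DNS to the service VIP;
    # IPVS then spreads connections across the service's tasks
    server app1 appservers:80

The HAProxy service just has to be attached to the same overlay network as appservers for that name to resolve.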
It wouldn’t be, today. But you could route to the service name via DNS, so the rewrite-config / reload-HAProxy loop would probably be much less frequent (per-service vs. per-container). Being explicit about which services you allow ingress to from the outside world (via an HAProxy container re-deploy) vs. automatically exposing them seems like a reasonable idea to me, although I admit it’s less “magic”.
e.g., the scope of hacks outlined by this article (which, for the record, is not an official Docker Inc. source) would be reduced, as you would no longer need a server ... check line for each container. Just one backend entry, and the connection forwarding will be handled by Docker. Additionally, in 1.12 a health check can be baked right into the image, so that if a container starts failing its health check it is not just taken out of the rotation, but re-scheduled / re-started automatically. (I’m not 100% sure that’s the exact way health checks will work in the new Swarm stuff, but I don’t see why it wouldn’t automatically reschedule failing tasks.)
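As a concrete sketch of baking a health check into the image, the HEALTHCHECK Dockerfile instruction added in 1.12 looks roughly like this (the curl-based check assumes curl is present in the image; the interval values are arbitrary):

FROM nginx
# Mark the container unhealthy if the web server stops answering
HEALTHCHECK --interval=10s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1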
We created our own L4 LB with native Docker and Swarm API support.
It adds/deletes containers from the balancing pool by discovering them via the Swarm API, filtering on an internal port and a label:
[servers.sample3.discovery]
interval = "10s"
timeout = "2s"
kind = "docker"
docker_endpoint = "http://swarm.manage.example.com:2377"  # Docker / Swarm API
docker_container_label = "api=true"                       # label to filter containers
docker_container_private_port = 80                        # gobetween will take the public container port for this private port
We use microservices differently than most others (on-demand streaming/transcoding instances that start and stop hundreds of times per hour), so consul-template with HAProxy or NGINX is useless for us.
We also use external exec health checks: the LB runs an external binary with arguments and checks its stdout.
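Such an external check binary could be as simple as this hypothetical script (the argument order and the 1/0 stdout convention are assumptions for illustration, not the LB’s actual contract):

#!/bin/sh
# check.sh <host> <port> : print 1 to stdout if the target answers, 0 otherwise
if curl -fs "http://$1:$2/" > /dev/null; then
  echo 1
else
  echo 0
fi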
I’ve had a go at solving this problem without the need for any config or an instance of HAProxy/Nginx with dynamic config generation. Using the list of services and a few labels, I’ve written a service which routes DNS names (and optionally terminates TLS, also configured via labels) to the VIP for a service, which Docker provides and load-balances internally. You can check it out here:
As far as I can understand from your responses, LB can be done in two layers: HAProxy and then Swarm routing. So does that mean that sticky sessions done using HAProxy are useless, since at the lower level Swarm will be rerouting them if the need arises?
This is correct: HAProxy will send requests to the service’s VIP, which in turn will land the request on any container backing the service. Docker’s service feature was designed with stateless services in mind. Any state that your application might have should live in some sort of durable storage, and not in individual instances of the containers that the service starts.
I’m reviving this thread with a question:
I have a Docker service composed of N replicas of a (modified) nginx image, used as a CDN. I’d like to use HAProxy to load-balance traffic to one of these N replicas (or at least to the service’s nodes).
But how could I configure HAProxy to do this?
Because, as I understand it, IPVS load balancing is always in play, internally redirecting requests to one of the N containers.