The swarm mode seems to have left out external service discovery. Will this be the preferred way to configure swarm in production?
Also interested in this as it would be nice to be able to access swarm services from legacy apps outside the cluster directly via service discovery, without having to resort to proxies.
Is there no way to expose the swarm DNS?
Aren’t those docs for pre-1.12 service discovery?
Swarm-mode commands don’t seem to have the same service discovery back-end options. Not that I’m complaining; I’m just curious about setting up a production swarm in the next couple of months.
Yes. For 1.12:
"Service discovery: Swarm manager nodes assign each service in the swarm a unique DNS name and load balances running containers. You can query every container running in the swarm through a DNS server embedded in the swarm."
But how is that done in practice? Say I have a legacy web server running outside the swarm, and it needs to call an API running in the swarm. Can this built-in DNS server be queried from outside of the cluster?
As I understand it, within the swarm, containers can simply reference other services by name and the built-in DNS will be used to find the appropriate IP and port automatically, but it would be useful to be able to query it externally too.
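As a concrete sketch of that in-swarm resolution (the network, service, and image names here are hypothetical, and this needs a live swarm to actually run):

```shell
# Create an overlay network and attach two services to it
docker network create --driver overlay appnet
docker service create --name api --network appnet my-api-image   # hypothetical image
docker service create --name web --network appnet nginx

# From inside any container on appnet, the service name resolves
# through the embedded DNS server (at 127.0.0.11) to the service VIP:
docker exec <web-task-container> nslookup api
docker exec <web-task-container> wget -qO- http://api:8080/   # 8080 = port the API listens on
```

Note the address returned is an overlay-network VIP, which is part of why it isn’t reachable from outside the cluster.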
If the DNS server were public, it could be queried from outside.
How do I make the built-in DNS public? Is there a configuration option for that or should I deploy containers running other DNS servers that forward to the built-in one?
I appreciate your responses here. I can’t help but wonder if this goes against the philosophy of Swarm, i.e. that it works out of the box with swappable components. In addition, before 1.12 I could use any tool that worked with Docker and point it at Swarm (a few examples: docker-compose, interlock, docker-gen). Now I’m not sure I can even use those with swarm mode.
So I’m just a bit confuzzled on the “Docker way” to do things post-1.12.
In the “What’s new in Docker 1.12” presentation, there’s a slide with a complete view of the swarm networking. The entry point for external access says “ELB, HA-Proxy, Nginx (programmed with interlock)”.
I’m not sure whether this means interlock still works as usual, though there’s a question about it in their GitHub repo that’s still unanswered.
I couldn’t find a video of the presentation, which could possibly clear things up a bit.
I don’t understand how that link helps.
The new swarm mode removes the need for external service discovery. Even if you did have access to the DNS, I believe the container addresses you would see are private overlay addresses, not externally reachable without modifying iptables rules. Instead, you only need to publish a port for your service; anyone external can then connect to that port on any swarm node and will be automatically routed to a container providing the service.
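A minimal sketch of that publish-and-route flow, assuming a three-node swarm and using nginx as a stand-in image (node addresses hypothetical):

```shell
# --publish maps port 8080 on EVERY swarm node to port 80 in the tasks;
# the routing mesh (IPVS) forwards each connection to a healthy task,
# whichever node it actually runs on.
docker service create --name api --replicas 3 --publish 8080:80 nginx

# From outside the cluster, any node's address works:
curl http://node1.example.com:8080/
curl http://node3.example.com:8080/   # same service, even if no task runs on this node
```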
If you need more granularity than that, then you likely need to implement your own service discovery (e.g. consul) that would run in addition to what Swarm provides natively.
It would be nice if there were a way to determine, from outside, at least the allocated port for a service (the IP is not needed because of IPVS redirection). That would avoid deploying a consul cluster just for this.
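For what it’s worth, the published port is recorded in the service’s endpoint spec, so it can be read back with `docker service inspect` from any manager node (service name hypothetical):

```shell
# Prints one "target -> published" pair per published port
docker service inspect api \
  --format '{{range .Endpoint.Ports}}{{.TargetPort}} -> {{.PublishedPort}}{{"\n"}}{{end}}'
```

That only works where the Docker API is reachable, though, so it doesn’t replace a discovery service for arbitrary external clients.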
How do I access a published service as domainname:serviceport? Do we have to put all the cluster engines behind a load balancer?
That part should be easy. From inside the swarm, use service_name:port, where port is the port the container EXPOSEs internally. To expose that externally, you would need a reverse-proxy frontend such as nginx or HAProxy.
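A minimal sketch of that frontend, assuming the proxy runs as a swarm service on the same overlay network as a backend service named api (names hypothetical): the proxy resolves the backend via swarm DNS internally, and is itself the only thing published externally.

```shell
# The proxy joins the overlay network, so "api" resolves via swarm DNS;
# its own port 80 is published on every node for external clients.
docker service create --name proxy --network appnet --publish 80:80 nginx
# Inside the proxy's nginx config you would then proxy_pass to http://api:8080;
```

The hypothetical proxy_pass line is the only backend-specific part; the rest is standard swarm networking.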
Many thanks for your answer.
I’ve had some success using the “interlock” project (although you’ll need to use their swarm-mode branch at the moment) to run HAProxy and handle vhosts on each node…
Could you explain how you did it? If not, could you point me to some articles that would steer me in the right direction? Thanks! My first thought was to use something like nginx-proxy with docker-gen, but it doesn’t support swarm mode yet…