Swarm is not round-robin routing requests. I can’t quite seem to get this behavior to work. Is there anything special I need to do?
Yes, I thought that endpoint mode could be related to a DNS namespace outside the container orchestration.
Why not use Consul for internal service discovery?
It already has DNS on board and can be used for internal service discovery as well as a backend source for load-balancer configuration and health checks.
I have been using Consul in all our projects, but I thought the new Docker swarm mode would make things easier out of the box and provide a pure Docker solution. If Docker swarm mode doesn’t have a solution for DNS service publishing, I will continue using Consul.
Because it’s one more thing to keep track of. Docker swarm mode has all of this built in, including the load balancing. The Raft store is built into Docker swarm mode.
It does have a solution for DNS service publishing for anything that is running as a docker service. Every service gets a DNS entry based on its service name (e.g., you can reach a service named foo with curl foo), which is accessible to other services on the same overlay network.
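As a minimal sketch of that DNS-based discovery (the network and service names here are made up for illustration):

```shell
# Create an attachable overlay network (name "appnet" is an example)
docker network create --driver overlay --attachable appnet

# Run two services on that network
docker service create --name foo --network appnet nginx
docker service create --name client --network appnet alpine sleep 1d

# From inside a task of "client", the name "foo" resolves to the
# service's virtual IP on the overlay network:
#   wget -qO- http://foo
```

The resolution only works between containers attached to the same overlay network, which is exactly the limitation discussed below.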
That’s not “real” publishing of services, because the name is only known inside that network. That’s why I use Consul as a “real” publishing service. Maybe I am missing something? With Consul I can use DNS with all services registered and published.
I still do not understand how I can make the virtual IP address of the service available on the host network. I need a non-dockerized application to access 3 NGINX containers via a single virtual IP / hostname.
@frjaraur To publish a service you can use --publish to expose ports, but the internal DNS server is not exposed directly. It’s largely assumed that if you want other internal services to be able to access these entries, they should also be in containers.
--publish is the way to do this today. It will expose the service’s port on the designated host port (or an arbitrary host port if none is specified) on every machine in the cluster.
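For example (service name and ports are illustrative), publishing through the ingress routing mesh looks like:

```shell
# Expose port 80 of the service on port 8080 of every swarm node
docker service create --name web --replicas 3 --publish 8080:80 nginx

# Any node's IP now answers on 8080, regardless of which nodes
# actually run the tasks:
#   curl http://<any-node-ip>:8080
```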
That’s the same as exposing the port on every engine; it just publishes a port on all our engines, which is not what I mean by “publish your service”. In other words, you still need a load balancer in front of your engines, with all of them published and one FQDN:PORT endpoint for the service. If we keep using Consul, this is done with published DNS service endpoints. Consul will manage DNS entries for all published services, not only the ones in my Docker network as the internal DNS does.
Thanks for the clarification. This brings up another question: where does the virtual IP of a given service really sit? On the manager node(s)?
The VIP sits on all of your containers for that service. Run an “ip address show” inside your container and look for the ip address that you see with “docker service inspect xxx” under VirtualIPs.
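A quick way to pull out just the VIPs (the service name web is an example) is with a Go template:

```shell
# List the service's virtual IPs, one per attached network
docker service inspect \
  --format '{{range .Endpoint.VirtualIPs}}{{.Addr}}{{"\n"}}{{end}}' web
```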
You should not be concerned about this VIP. That is an internal thing.
Your non-dockerized application should be configured to talk to the ip address of ALL your docker hosts and the published port of the service.
Thanks. Got it. Is there any API call to get all docker host(s) IP addresses for given service?
@sl4dy You could run docker service inspect from outside a container to get info about its addressing and networks, and inside a container (service) you can use DNS at tasks.&lt;service&gt; to get a round-robin list of all the associated task IPs. Here is a WIP doc on overlay networking in swarm
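For instance (service name web assumed), from inside any container attached to the same overlay network:

```shell
# "web" resolves to the single virtual IP of the service:
nslookup web

# "tasks.web" resolves to the individual IP of every running task,
# returned in round-robin order:
nslookup tasks.web
```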
How to expose swarm load balancer virtual IP to outside
Do we still need a load balancer in front of the Docker hosts to access services in the latest swarm version? Is there any way to avoid this load-balancer component and access services in a round-robin way?
The following is my scenario.
I have a service running on swarm and I want to access that service from a non-dockerized host that is on the same physical network. I do not want to use an external load balancer.
I think we cannot use just the service name to access the service from another (non-dockerized) machine.
How can this be achieved? Which IP can be used to access the swarm service so that load balancing and failover happen?
Thanks for your time.
Use the --publish flag of docker service create to expose ports from services to the outside world.
The --publish option will force me to use an external load balancer for load balancing and failover to happen, won’t it?
@stellapp Yes, with the --publish flag you map the service’s tasks to a host port, with the constraint that it works in global mode, so you can deploy only one task for every machine that belongs to your swarm. And yes, this way you need an external load balancer. See [this](https://github.com/docker/docker/issues/30052#issuecomment-271846494) discussion.