Microservices & service discovery of a service with random ports

Hi,

My question is about microservices and service discovery for a service that is spread across several hosts.
The setup is as follows:
2 docker hosts (host A & host B)
Consul server (service discovery)
Let’s say that I have 2 services: service A & service B
Service B is deployed 10 times (with random ports): 5 times on host A and 5 times on host B
When service A communicates with service B, for example, it sends a request to serviceB.example.com (hard-coded).
In order to get an IP and a port, service A should query the Consul server for an SRV record.
It will get back 10 IP:port pairs, and the client has to apply some load-balancing logic across them.
Is there a simpler way to handle this without me developing a client resolver (+LB) library for that matter? Is there anything like that already implemented somewhere?
Am I doing it all wrong?
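
For reference, here is roughly the kind of client-side lookup I mean. This is only a minimal sketch in Go; it assumes Consul's DNS interface answers for a name like service-b.service.consul, and the random pick stands in for whatever real load-balancing logic would be needed:

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
)

func main() {
	// Consul publishes SRV records for registered services; this assumes
	// queries for *.service.consul reach the Consul agent's DNS interface.
	_, srvs, err := net.LookupSRV("", "", "service-b.service.consul")
	if err != nil {
		panic(err)
	}

	// Naive client-side load balancing: pick a random instance per request.
	pick := srvs[rand.Intn(len(srvs))]
	fmt.Printf("send the request to %s:%d\n", pick.Target, pick.Port)
}
```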

Thanks,

Royee

Hi. I don’t have much experience in this field, so just take this as input for your own thoughts. What you need, and what you wrote yourself, is a load balancer. Nginx can do this for HTTP servers and, as far as I know, for other kinds of services as well. Your service would only get one IP, connect to it, and Nginx would handle the rest. The problem with this might be the configuration of Nginx, given that the IPs of the services might change quite often.
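
Just to make that idea concrete, here is a minimal sketch of what a load balancer in front of the instances does, written in Go purely for illustration (the backend addresses are made up; in practice Nginx or similar would do this for you, ideally with its upstream list generated from Consul):

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical service B instances; a real setup would fill this list
	// from Consul instead of hard-coding it here.
	backends := []string{
		"http://10.0.0.11:32768",
		"http://10.0.0.12:32771",
	}

	proxy := &httputil.ReverseProxy{
		// Rewrite each incoming request to point at one of the backends.
		Director: func(r *http.Request) {
			target, _ := url.Parse(backends[rand.Intn(len(backends))])
			r.URL.Scheme = target.Scheme
			r.URL.Host = target.Host
		},
	}

	// Service A then only ever talks to this one address.
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```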
Perhaps http://kubernetes.io/ is more like something you need.
Hope this gives you some ideas.

Royee, any luck to report here? I’m facing a similar problem with a microservices architecture, only I’m using Eureka (Spring Cloud) for discovery. Creating a service link between the discovery service and one of the client services exposes a handful of useful environment variables about discovery in the client environment, which I can use to send a heartbeat to the discovery service. The problem that I think we’re both experiencing is that on the client service there is no such useful information about the service itself. For instance, I would need to know the CONTAINER_FQDN (as well as the port) of any given spawned client service so that I could communicate it to the discovery service. It seems amiss that this information isn’t available to the client service at startup.

Ok, time to eat my words - mostly. It looks like there is some information contained in environment variables that are calculated at deploy time (DOCKERCLOUD_CONTAINER_*), but I don’t see anything about the port. That matters because I would like my discovery service to maintain an understanding of specific nodes, and I implement client-side load balancing via Netflix OSS Ribbon, so yeah, the port is important. Perhaps an authority from Docker Cloud would like to chime in here. Any additional information would be greatly appreciated.
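
For what it’s worth, the kind of self-registration I have in mind looks roughly like this. It’s only a sketch: it assumes the deploy-time variables mentioned above, a SERVICE_PORT variable you would have to inject yourself (which is exactly the missing piece), and a hypothetical registration endpoint on the discovery service:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"os"
)

func main() {
	// Identity injected at deploy time (the DOCKERCLOUD_CONTAINER_* variables
	// noted above); SERVICE_PORT is something you would have to pass in yourself.
	fqdn := os.Getenv("DOCKERCLOUD_CONTAINER_FQDN")
	port := os.Getenv("SERVICE_PORT")

	// Hypothetical self-registration call to the discovery service.
	resp, err := http.PostForm("http://discovery:8761/register", url.Values{
		"host": {fqdn},
		"port": {port},
	})
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("registered", fqdn+":"+port, "->", resp.Status)
}
```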

As others have suggested, you most likely want some form of load balancing. Interlock is a nice tool that’s right up this alley.

My recommendation, in addition to checking out Interlock, would be to set up libnetwork with the overlay driver enabled (Swarm makes managing networks across the cluster easier as well). Then every container you create will have a unique name that the other containers on the network can reference via DNS, and you won’t have to mess about with ports (you can connect to the same port on each container, e.g. foo_0:2379 and foo_1:2379).
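
To show the difference, with an overlay network service A could skip the SRV lookup entirely and just dial a container by name on a well-known port. A tiny sketch in Go, reusing the foo_0:2379 example from above:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// On an overlay network the built-in DNS resolves container names,
	// so a fixed, well-known port per service is enough.
	conn, err := net.Dial("tcp", "foo_0:2379")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```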

Hi Royee! Navigating the microservices maze, huh? One way to simplify your setup is to put a reverse proxy such as Traefik or Nginx in front of your services. They can handle service discovery and load balancing dynamically, saving you from rolling out a custom client resolver. And for anyone wondering what a microservices architecture actually is: it’s composing software out of small, independent services, each doing its own job and communicating with the others over lightweight protocols.