Docker Community Forums

Share and learn in the Docker community.

Trying to understand docker with micro-services


(Khaliwell) #1

Sorry if this is the wrong section for this, but I didn’t see one that made more sense.

I am switching the infrastructure of the company I work for over to micro-services using Docker. And for the most part I think I understand how to accomplish that. But I am confused about a few key pieces.

I have been reading a lot of articles about Docker, and I understand the basics and have created a container and everything. But I am really confused about how networking containers together works.

So let’s say I have 3 containers: a frontend container, a users container, and a payments container. Users & Payments are both just APIs that the frontend interacts with. The front end is just an AngularJS application.

So my issue is, how would you network all of the containers so that they could scale independently? I assume you would have a load balancer of some sort, where users.example.com gets routed to the Users container. Same for payments?

I understand you can expose ports and these get dynamically created on the host. Let’s say there are 2 hosts (each host has 2 user containers running on it); for simplicity’s sake we will say they are in the same datacenter. So users.example.com would route to, let’s say, these 4 places: 192.168.1.2:1234, 192.168.1.2:8523, 192.168.1.3:5621, 192.168.1.3:3741
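To make the idea concrete, here is a rough sketch of what that load balancer could look like as an nginx config, using the four example addresses above (all addresses and names are hypothetical, and a real setup would need health checks and so on):

```nginx
# Hypothetical sketch: load-balance users.example.com across the
# four container endpoints from the example above.
upstream users_backend {
    server 192.168.1.2:1234;
    server 192.168.1.2:8523;
    server 192.168.1.3:5621;
    server 192.168.1.3:3741;
}

server {
    listen 80;
    server_name users.example.com;

    location / {
        proxy_pass http://users_backend;
    }
}
```

Scaling the users service then means adding or removing `server` lines in the `upstream` block and reloading nginx.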

is that basically how it would work? The load balancer would be responsible for the scalability of each micro-service independently?

Sorry this is kinda long & confusing. I have a lot more questions, but I will try and keep them related so I can get clear answers to each one.


(Khaliwell) #2

I wasn’t really sure how to make this an edit in my original post. But there is a part 2 to my problem that I forgot about when writing the post.

So the APIs, Users & Payments, also need to talk to each other. I would prefer to keep this localized to the same server to keep latency down. Would you do that with external links, since they don’t necessarily spool up (or get created) at the same time?

How does that work without updating every container when adding one more to the pool?

Thanks again for taking the time to read this!


(Jeff Anderson) #3

So following the terminology from http://12factor.net/, you essentially have three services.

The frontend service has two backing services: users and payments.

Your frontend service will need some sort of configuration for how to reach its backing services. Normally, that means a URL for each backing service it needs to talk to.

If you want to have a single instance of the users service, this is easy. Your frontend service will be configured to talk directly to that one instance.

If you want to have multiple instances of the users service for redundancy or geographic diversity, then you will need a way to get your frontend service talking to an appropriate instance of the users service. The simplest way to do this would be with something like an nginx or haproxy load balancer: your frontend talks to that, and the load balancer figures out how to route each request appropriately.

How does nginx or haproxy know where to route the request? You implement some sort of discovery mechanism. For example, you can interrogate the docker daemon, ask where things are running, and build an nginx or haproxy config based on that. One popular image that implements this is the jwilder/nginx-proxy image.
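As a rough illustration of the jwilder/nginx-proxy approach, here is a minimal docker-compose sketch. The proxy watches the docker socket and regenerates its nginx config whenever a container with a `VIRTUAL_HOST` variable starts or stops; the service images and hostnames below are made up for the example:

```yaml
# Sketch only: image names other than jwilder/nginx-proxy are hypothetical.
proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro

users:
  image: mycompany/users-api        # hypothetical image
  environment:
    - VIRTUAL_HOST=users.example.com

payments:
  image: mycompany/payments-api     # hypothetical image
  environment:
    - VIRTUAL_HOST=payments.example.com
```

Starting a second `users` container with the same `VIRTUAL_HOST` would automatically add it to the generated upstream, which answers the "without updating every container" part of your question.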

Another approach would be to have your frontend service do its own service discovery and figure out how to connect to any number of locations for the backing services it needs. Maybe it does some sort of discovery, finds that there are three instances of the users service it could talk to, and is aware of all three. This removes the need for a load balancer component, but increases the complexity of how the frontend service does its discovery.
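A client-side version of this might look like the following rough Python sketch. It assumes the service has already obtained a list of instance addresses from some discovery source (the Docker API, a registry, etc. — left abstract here), and simply rotates through them:

```python
import itertools


class RoundRobinClient:
    """Client-side load balancing: pick backing-service instances in rotation.

    The instance list would come from whatever discovery mechanism you
    choose (e.g. querying the docker daemon); here it is just passed in.
    """

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def next_instance(self):
        # Return the next instance address in round-robin order.
        return next(self._cycle)


# Hypothetical addresses, matching the earlier example.
client = RoundRobinClient([
    "192.168.1.2:1234", "192.168.1.2:8523",
    "192.168.1.3:5621", "192.168.1.3:3741",
])
print(client.next_instance())  # 192.168.1.2:1234
print(client.next_instance())  # 192.168.1.2:8523
```

A real implementation would also have to handle instances disappearing between discovery refreshes, which is part of the extra complexity mentioned above.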

As for your part 2, the answer will boil down to exactly how you implement your service discovery mechanism.
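One simple option for the service-to-service case is a user-defined Docker network, which gives containers DNS resolution of each other's names without links, so containers can be started in any order. A sketch of the idea (container and image names are made up):

```shell
# Sketch only: containers on the same user-defined network can
# resolve each other by container name.
docker network create app-net

docker run -d --name payments --net app-net mycompany/payments-api
docker run -d --name users    --net app-net mycompany/users-api

# Inside the users container, a URL like http://payments:3000
# now resolves to the payments container, even if payments was
# restarted or recreated after users started.
```

Adding another payments container to the pool would still need a balancing layer in front of it, but no existing container has to be reconfigured.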


(Khaliwell) #4

That makes sense. I think using an nginx load balancer for the communication between the frontend and the backing services would be best.

Are there any tools or articles out there that can help automate this process? I would assume you could create a new container and then add it to nginx’s known upstream IPs for a service using Ansible or something, but I am not 100% sure.


(Jeff Anderson) #5

In the docker world, there is the jwilder/nginx-proxy image that helps automate nginx a bit.