Docker containers on hypervisor require reverse proxy reboot when updated

Hello everyone, I am having trouble finding a solution to this online, well… because I am having a hard time describing the issue succinctly enough to fit in a Google search.

I’ll do my best to describe the issue here. Currently I have running on a server:

  • Nginx reverse proxy, running in a docker container
  • Many other microservices, running on docker containers on the same network as the reverse proxy

As you would imagine, the reverse proxy accepts requests on port 80 and routes them to different Docker containers based on the server name. Nothing special is really going on there. What I am having issues with is restarting one of my services (let’s use WordPress as an example): after restarting the container, accessing it again in the browser gives me a location not found error. The only way I can reach that container again over the web is to restart the reverse proxy altogether. This really hurts because we have scaled up to over 50 different microservices that all need to go down for 2-3 minutes just to get one container accessible on the web again.

Does anyone know what’s going on, and what some potential solutions to this could be? For what it’s worth, my Nginx container uses OpenResty and runs a script that reloads the configuration whenever the configuration files change. There could be a solution there, but I’m just throwing out ideas.

Good morning and welcome!

As I don’t know your Nginx configuration, I can only guess and describe what I did for a similar-sounding setup.

I guess you configured the container names as destinations for Nginx’s reverse-proxy activity, which means Nginx will resolve the DNS name once and cache it for some time. If you recreate a container, it is likely that this container will get a new IP address within the Docker network. You can check this with docker container inspect *containername|containerid* before and after recreating the container.
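For example, a quick way to print just the container’s IP on each attached network (the container name `wordpress` is a placeholder for your own):

```shell
# Show the container's IP address(es) via Docker's Go-template output.
# Run this before and after recreating the container and compare.
docker container inspect \
  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' \
  wordpress
```

If the address printed after recreation differs from the one before, the cached-DNS theory fits.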
If this assumption is correct you can choose from (at least) these possibilities:

  • configure the Nginx container not to cache DNS information
  • go for Traefik as a reverse proxy instead of Nginx: the destination containers will “tell” Traefik which IP they are on, on which port they provide their service, and under which hostname and path they want to provide it.
  • I’ve heard there is an Nginx image that provides similar features to Traefik, but I haven’t used it so far.
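The first option above can be sketched roughly like this. This is a hypothetical server block, assuming Docker’s embedded DNS at 127.0.0.11 and a backend container named `wordpress`; your names will differ:

```nginx
# Re-resolve backend names every 10s instead of caching them at startup.
# 127.0.0.11 is the embedded DNS server inside Docker networks.
resolver 127.0.0.11 valid=10s;

server {
    listen 80;
    server_name blog.example.com;

    # Putting the upstream in a variable forces Nginx to resolve it
    # at request time rather than once when the config is loaded.
    set $upstream http://wordpress:80;

    location / {
        proxy_pass $upstream;
    }
}
```

With this, a recreated container that comes back under a new IP is picked up after the `valid` interval, without restarting the proxy.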

I am using the Traefik way, as I don’t have to modify my reverse-proxy container at all when adding/removing/modifying a backend container. Traefik provides the SSL encryption, redirects every HTTP request to HTTPS, and also acts as a reverse proxy for other protocols which are not directly accessible on this hostname from the outside world. You can add manual proxy destinations too, so not all proxy destinations need to live in containers on this Docker host. For me it was a steep learning curve, but now I think it is the coolest thing that can happen for this purpose.
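To make the “containers tell Traefik” idea concrete, here is a minimal Compose sketch, assuming Traefik v2; the service name, router name, and hostname are placeholders:

```yaml
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      # Traefik watches the Docker API to discover containers and their labels.
      - /var/run/docker.sock:/var/run/docker.sock:ro

  wordpress:
    image: wordpress
    labels:
      - traefik.enable=true
      # Route requests for this hostname to this container.
      - traefik.http.routers.wp.rule=Host(`blog.example.com`)
      - traefik.http.services.wp.loadbalancer.server.port=80
```

Because routing follows the labels, recreating `wordpress` just re-registers it with whatever IP it gets; nothing on the proxy side needs a restart.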

Hey, thank you for getting back to me so quickly! This is ultimately what led me to my solution: upon inspecting my network I noticed the subnet config, which then led me to assigning static IPv4 addresses in my Compose file.
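For anyone landing here later, a sketch of that fix; the network name, subnet, and address are examples, not the poster’s actual values:

```yaml
networks:
  proxy-net:
    ipam:
      config:
        - subnet: 172.20.0.0/16

services:
  wordpress:
    image: wordpress
    networks:
      proxy-net:
        # Pin the container to a fixed address so the proxy's cached
        # DNS answer stays valid across container restarts.
        ipv4_address: 172.20.0.10
```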

Now whenever I restart my containers, I no longer need to restart my reverse proxy!

Thank you again! :grin: