Best practices for NGINX, uWSGI, and Flask webapp?

I’ve been wrapping NGINX, uWSGI, Flask, and supervisord into a single container, in violation of the single-process-per-container design pattern. Any Redis or DB connections are already in their own containers; it’s just the NGINX and uWSGI pair that I’m trying to break up into their own containers.

I’d like to understand the recommended method for splitting these up into their own containers, and what the benefits are.

Some of the concerns I’m struggling with:

  • Having uWSGI and NGINX write to the same log file under Docker is a pain. I recognize this, and would like to split them up into their own containers.

  • The latency difference between NGINX talking to uWSGI over a UNIX socket in the same container vs. SSL/TCP over the network to a different container is measurable. What’s the quickest and most secure method for NGINX to communicate with separate uWSGI containers?

  • In the Flask development flow, it’s common to have the static resources (images, CSS, JavaScript, HTML, etc.) in the same directory structure as the Python code, and you can expose that directory to NGINX directly when everything is in the same container (so static requests never hit uWSGI). But how would an isolated NGINX container serve static files that live in a different container? Would we need to split the codebase so the static resources live in the NGINX container and the Python code in the uWSGI container?

  • Most of the literature I’ve read suggests you have isolated containers to enable horizontal scaling. For a single webapp, we could configure NGINX as a load balancer to route traffic to multiple uWSGI containers … but we’d need to know the hostnames of those containers when NGINX first starts. Any examples/blogs on how to have NGINX separated from uWSGI, and how that enables horizontal scaling of the uWSGI containers, would be welcome!

  • What’s the benefit of having NGINX in the stack, if we’re treating each uWSGI container as a throwaway container that can be restarted at will?

  • Any code samples of best practices for orchestrating these multiple containers into a single webapp are welcome! To date, I’ve just been throwing them all into a single container or using docker-compose. But then I need a Makefile to build the dependency containers, and it’s not as streamlined. I’m sure there has to be a better solution.
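For context on the socket and static-file questions: one common pattern (not necessarily *the* recommended one) is to have NGINX speak the binary uwsgi protocol over plain TCP on a private Docker network, and serve static files from a volume shared by both containers. A sketch, where the service name `app`, port `3031`, and the mount point `/srv/static` are all my own assumptions:

```nginx
# nginx default.conf (sketch) — assumes the uWSGI container is
# reachable as "app" on port 3031 via Docker's embedded DNS, and
# that the static assets are on a shared volume at /srv/static.
server {
    listen 80;

    # Static requests are answered by NGINX and never hit uWSGI.
    location /static/ {
        alias /srv/static/;
    }

    # Everything else is forwarded over the binary uwsgi protocol.
    location / {
        include uwsgi_params;
        uwsgi_pass app:3031;
    }
}
```

Plain TCP inside a private Docker network is usually considered acceptable without TLS; the uwsgi protocol avoids the overhead of re-parsing HTTP that `proxy_pass` would incur.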
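On the orchestration point: docker-compose alone can build and wire the whole stack without a Makefile. A hedged sketch, where the service names, ports, and volume are illustrative and the app image is assumed to run uWSGI with a uwsgi-protocol socket:

```yaml
# docker-compose.yml (sketch) — names and paths are illustrative.
version: "3.8"

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro
      - static:/srv/static:ro
    depends_on:
      - app

  app:
    build: .               # Dockerfile runs: uwsgi --socket 0.0.0.0:3031 ...
    volumes:
      - static:/srv/static # image's assets seed the empty named volume
    # No published ports: only reachable on the compose network.

volumes:
  static:
```

For horizontal scaling, something like `docker-compose up --scale app=3` gives the `app` hostname multiple IPs in Docker's DNS; note that NGINX resolves upstream hostnames when the config loads, so after scaling you may need a `resolver` directive or a config reload for new replicas to be picked up.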


I’d love to hear what configuration you finally settled on. I am currently running a web server with a basic nginx container that serves static websites and redirects requests for certain hostnames to an Apache server running in a different container on the same system. A database is running in yet another container. But now I need to host a Flask app, and I am trying to understand whether there is another way than creating a bloated container that has the whole stack (Python+Flask+uWSGI+Nginx) and then just proxying the relevant requests to it.
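For what it’s worth, in that setup the Flask container shouldn’t need its own nginx: uWSGI can listen on a socket by itself, and your existing front nginx can proxy the relevant hostnames straight to it. A sketch, where the container name `flask-app` and port `3031` are assumptions about your setup:

```nginx
# Added to the existing front-nginx config (sketch).
# "flask-app" and 3031 are placeholders for your container/port.
server {
    listen 80;
    server_name flask.example.com;

    location / {
        include uwsgi_params;
        uwsgi_pass flask-app:3031;  # uWSGI started with: uwsgi --socket 0.0.0.0:3031 ...
    }
}
```

That keeps the Flask image down to Python + Flask + uWSGI, mirroring what you already do for the Apache container.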