I currently run a number of web applications on my Linux-based home server, with Nginx serving the applications as separate websites using server blocks. The sites are all just private, low-traffic household stuff, mostly written in Python/Flask with a few in PHP/Yii2. For the Flask sites I use Python’s virtual environments for version and dependency management, which works well enough, albeit a bit untidily.
I’m planning to upgrade the server to a new version and would like to take the opportunity to “Dockerise” the applications, which I figure should make things a bit cleaner. I’m new to Docker, but I’ve managed to get a few of my applications running successfully in containers. Each container runs an application server (gunicorn or php-fpm) whose port is exposed to Nginx on the host, which acts as a reverse proxy. I’m not sure how best to handle the static assets, though (CSS files, images, etc.).
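For context, the host-side server block for one of the Flask sites currently looks roughly like this (the server name and port are just placeholders rather than my exact values):

```nginx
server {
    listen 80;
    server_name flaskapp.home.example;   # placeholder name

    location / {
        # gunicorn's port, as published on the host by the container
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```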
I’m currently exposing these static files via a mounted Docker volume, with the host’s Nginx configured to serve them directly. That works, but it feels rather inelegant. It occurs to me that another way might be to give each container its own instance of Nginx. That would internalise such configuration details within the container: the host Nginx would simply pass all requests straight through to the container’s Nginx, removing the need to expose files via a mounted volume.
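To make the comparison concrete, the two approaches would differ only in a couple of location blocks on the host (again, paths and ports below are placeholders):

```nginx
# Inside the server block above:

# Option 1 (current): host Nginx serves static assets straight from the mounted volume
location /static/ {
    alias /srv/docker/flaskapp/static/;   # host-side mount point of the container volume
    expires 7d;
}

# Option 2 (considered): drop the /static/ location entirely and point the
# catch-all at an Nginx instance inside the container, which serves /static/ itself
location / {
    proxy_pass http://127.0.0.1:8080;     # container's own Nginx, not gunicorn directly
}
```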
More elegant, perhaps, but would it be horribly extravagant to have a copy of Nginx in each container? My first thought was that it would be very wasteful, but I suppose the trade-off between duplication and encapsulation goes to the core of containerisation, and Nginx is reckoned to be very lightweight. I’m sure many others must have been through these same thought processes before, but I haven’t yet found much discussion about it. Is there any consensus on the best approach for this sort of situation - or am I going about it in entirely the wrong way?