Best use case for nginx + fpm webapp


We run a Yii2 PHP webapp in Docker.

At first, for simplicity, we ran a single container with both nginx and PHP-FPM (managed by supervisord), much like running the webapp on a single server…

For the sake of load balancing and scalability, we now use the vanilla nginx image (with a template and a customization script) for the frontend, and a webapp image based on the official FPM image that includes the app source code.

The webapp is 99% PHP; the remaining 1% is served directly from the web directory by nginx. That directory is the vhost webroot, containing the index.php entry script and static assets (JS/CSS). These assets either come from the repo or are generated (published) by the PHP/webapp container.

At this stage, the Compose project shares the web subdirectory from the php service with the nginx one using a named volume, which I have to delete and recreate after each upgrade to make sure the static files from the repo (not the generated ones) are refreshed.
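For context, the current setup looks roughly like this in Compose (service names, image names, and paths are illustrative, not taken from the actual project):

```yaml
services:
  php:
    image: myorg/webapp-fpm:latest   # hypothetical app image (FPM + source code)
    volumes:
      - webroot:/var/www/html/web    # app image publishes the webroot here

  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - webroot:/var/www/html/web:ro # frontend reads the same files, read-only
    depends_on:
      - php

volumes:
  webroot:                           # named volume shared between the services
```

The named volume is populated from the php image's filesystem the first time it is created, which is exactly why it goes stale after an image upgrade: Docker does not re-copy files into an existing volume.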

My concern is about this shared directory: is there a better approach for allowing the nginx frontend to read files that exist only in the backend service?



For your scenario, where you have a Yii2 PHP webapp running in Docker and you’re looking to optimize the sharing of static files between the nginx and PHP-FPM containers, there are several approaches you can consider:

Shared Volume: Continue using a shared volume but automate the process of refreshing static files after an upgrade. This can be done by scripting the deletion and recreation of the volume as part of your deployment process.
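A minimal deployment sketch for this approach, assuming a Compose project with a single named volume for the webroot (names are illustrative):

```sh
#!/bin/sh
set -e

# Stop and remove the containers, and drop the project's named volumes
# so the stale webroot is discarded (--volumes removes ALL named volumes
# declared in the Compose file, so keep persistent data elsewhere).
docker compose down --volumes

# Pull the new app image and start again; the fresh volume is
# repopulated from the new image's webroot on first use.
docker compose pull
docker compose up -d
```

This keeps the current architecture and just removes the manual step, at the cost of a brief downtime window while the volume is recreated.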

Build-Time Inclusion: Include the static files directly in your nginx container at build time. This way, whenever you build a new version of your nginx container, it will contain the latest version of the static files.
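One way to sketch the build-time approach is a multi-stage Dockerfile for the frontend image that copies the webroot out of the already-built webapp image (image names and paths below are assumptions, not from the original project):

```dockerfile
# Hypothetical app image that already contains the published webroot
FROM myorg/webapp-fpm:latest AS app

FROM nginx:stable
# Copy only the static webroot out of the app image; PHP requests are
# still proxied to the FPM backend, nginx just needs the files on disk.
COPY --from=app /var/www/html/web /var/www/html/web
```

The trade-off is that the nginx image must be rebuilt for every app release, but in exchange there is no shared volume to manage and both images stay immutable.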

Continuous Deployment: Implement a continuous deployment pipeline that automatically updates the static files in your nginx container whenever changes are pushed to your repository.

Separate Static Files Service: Consider serving static files from a separate service or container that’s dedicated to static content. This could be another nginx container or a different static file server that’s optimized for serving static content.

Object Storage: Use an object storage service like Amazon S3 to serve static files. This can reduce the load on your web servers and simplify the deployment process since you only need to update the files in the storage service.

Cache Control: Implement cache control headers for your static files. This can help reduce the need for frequent updates to the nginx container, as clients will cache the static files locally.
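A cache-control sketch for the nginx vhost, assuming the static assets have versioned (fingerprinted) filenames so a long expiry is safe:

```nginx
location ~* \.(?:js|css|png|jpg|gif|svg|woff2?)$ {
    # Long client-side cache; only safe if filenames change on each release
    expires 30d;
    add_header Cache-Control "public, immutable";
    access_log off;
}
```

Yii2's asset publishing appends content hashes to published asset directories, which makes this kind of aggressive caching workable.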

Symlinks: Use symbolic links in your nginx container that point to the shared volume. This can help with the management of static files and make it easier to update them without having to rebuild the container.

Each of these approaches has its own set of trade-offs, and the best solution will depend on your specific requirements and constraints.

I hope this information helps you.