Sharing files between nginx and php-fpm tasks in a Docker swarm

I have a PHP application that also includes some static files (images, scripts, etc.). I want to deploy this application to a Docker swarm – how do I ensure that the nginx tasks have access to the application files (which I assume would be built inside the php-fpm container)?

I thought about using a volume that both containers share. From what I understand, Docker would copy the files from the php-fpm (app) container into the volume, and nginx would then be able to access them as well. However, there are multiple issues:

  • The nginx and php-fpm tasks might not be on the same node, so the volume would be empty on an nginx-only node.
  • Updating the php-fpm (app) image might not update the volume files – if I understand correctly, Docker won’t update files in the volume from files in the container if the volume is not empty/new.

This is surely a fairly common setup – how is it usually handled?

Some alternatives that I’ve thought of:

  1. Create a volume that is not tied to a specific node, e.g. one that uses a non-local volume driver.
    • Seems overly complicated, and may be difficult to update with application changes.
  2. Find a way to ensure that nginx tasks are always deployed along with php-fpm (app) tasks.
    • As far as I know, this is not possible, and would also have the volume update issue.
  3. Put nginx and php-fpm (and the app files) in the same image.
    • This goes against Docker best practices, wouldn’t scale well, etc.
  4. Create an nginx image with static files, and a php-fpm image with PHP files.
    • The application is in a single repository, and I was hoping to have a single Dockerfile to build the application image.
    • This would require building two images from the same repository, and making sure that each one gets only the files that it needs (ideally); see the sketch after this list.
  5. Use Apache instead of nginx, which has a single image with support for PHP (as a module).
    • I would prefer to continue using nginx.
  6. Configure nginx/php-fpm to serve the static files through PHP.
    • Not ideal for performance reasons, but if there’s a CDN in front of the files anyway, then very few requests would be routed to PHP.
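
For alternative 4, here is a rough sketch of how the builds could look, assuming the repository's single Dockerfile is a multi-stage build with two stages, e.g. "static" (nginx plus the public assets) and "app" (php-fpm plus the PHP sources). The stage names, image names and registry below are made up:

```sh
# Build two images from the same repository / Dockerfile,
# each containing only the files its stage copies in.
docker build --target static -t registry.example.com/myapp-nginx:1.2.0 .
docker build --target app    -t registry.example.com/myapp-fpm:1.2.0 .

# Push both tags so every swarm node can pull them.
docker push registry.example.com/myapp-nginx:1.2.0
docker push registry.example.com/myapp-fpm:1.2.0
```

The nginx and php-fpm services in the stack would then reference the matching pair of tags, with no shared volume involved.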

I don’t really like any of these options, but I’m beginning to think that there is no clean and standard way of doing this. I hope that I’m wrong.

Have you tried using a database as a backend with an overlay network?
I can also imagine some kind of file-distributing webserver with an overlay network but this is kinda hacky.

imho: Swarm is useless without a global storage option

Installing a docker storage plugin in swarm is a piece of cake.
What might become an issue is providing the backend for these plugins. I for one use cephfs for your use case (the plugin is very easy to set up; setting up a ceph cluster with a correct cephfs configuration, not so much). You might want to look at the NFS plugin, which is by far the easiest to set up backend-wise.
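
As a rough sketch of the NFS route, this uses the built-in local driver's NFS support, which is the simplest variant (dedicated NFS plugins exist as well). The server address, export path and volume name are placeholders:

```sh
# Create an NFS-backed named volume; every node that mounts it
# sees the same files, regardless of where the task is scheduled.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.10,rw \
  --opt device=:/exports/myapp \
  myapp_data
```

In a stack file the same driver and driver_opts go under the volume definition, so every node that runs an nginx or php-fpm task mounts the same export.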

I don't see why this would make updating the apps difficult. It's not very different from running on a single node with a local volume.

Imho, that’s the best option

I don’t know what this means. Why would you use a database to make certain files available across multiple task containers?

This is just for a few static files (images, scripts, styles) in the web application; I don't need a full-blown CDN for this.

The files would be inside the container (image). On the first deployment, with an empty volume, the files would be copied to the volume. On the next deployment, with a non-empty volume, updated files would not be copied to the volume, so nginx would not have access to the latest changes. Is that not correct?
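
For instance, this is the behaviour I mean (the image name and paths are just placeholders):

```sh
# First deployment: the named volume is empty, so Docker pre-populates it
# with the image's files from /var/www/html when the container starts.
docker volume create app_files
docker run --rm -v app_files:/var/www/html myapp:1 ls /var/www/html

# Second deployment with an updated image: the volume is no longer empty,
# so its contents are kept as-is and the updated files from myapp:2 are
# never copied in.
docker run --rm -v app_files:/var/www/html myapp:2 ls /var/www/html
```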

If I understand you correctly, you want a few PHP containers that contain some static files. You also need a few nginx containers that have access to the same files as the PHP containers.
Since you deploy those containers to a swarm, you can't be sure which node they will end up on, so a normal volume won't work.

But if you create an overlay network that spans all nodes and attach a database server to it, every container on any node can access that database.
So if you place your static files inside this database, your PHP and nginx containers can all access them, and you have only one place where you need to update your static files.

If you use a simple Flask server that answers simple HTTP requests, it's not that resource-heavy or complex.

I’d prefer to keep the files in a regular file system, and not inside a database. In order to serve the files from a database, I’d need to use PHP anyway, so I may as well just go with option #6 above.

I’m not sure why I’d use a Python server instead of just using nginx. The problem is not how to serve files, but how to give nginx access to files inside a php-fpm container.

Docker would not do it for you, so it's really up to you and what you set in your startup script. Here is an example of what I do:

The strategy here is simple: I add a “version” file within the data files, and I copy only if the file is missing (first deployment) or if the files differ (updated deployment).
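
In sketch form, it looks something like this (the source/destination paths and the version file name are placeholders for whatever your image uses):

```sh
#!/bin/sh
# Entrypoint sketch: copy the app files from the image into the shared
# volume only on the first deployment or when the version file changed.
set -e

SRC=/usr/src/app      # files baked into the image
DEST=/var/www/html    # the shared volume / web root

if [ ! -f "$DEST/version" ] || ! cmp -s "$SRC/version" "$DEST/version"; then
    echo "New version detected, copying application files..."
    cp -a "$SRC/." "$DEST/"
fi

# Hand over to the real process (php-fpm, nginx, ...).
exec "$@"
```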

So the entry-point script runs within the container at startup, and has access to the volume (which itself is distributed). The application code is located outside the web root (volume), and is only copied into it when changes are detected (using a version file).

I suppose this could be an option. I have some concerns:

  1. If multiple containers are starting up at exactly the same time, wouldn't they both detect a version change and proceed to copy the files?

  2. Wouldn't it change the behavior of rolling updates? Instead of task containers being updated one (or a few) at a time, all task containers would immediately be using the updated code from the shared volume as soon as the files are copied (but the containers themselves may differ – for example, you may have added code that requires the PHP Redis extension, and the not-yet-updated containers won't have it installed until the rolling update is complete).

  3. You would want to prevent this behavior in local environments where a bind mount is used for development, so that local files are not overwritten when the container(s) start.

Interesting feedback :slight_smile:

  1. It might indeed happen, but the probability is very low, and the risk of a borked file is even lower (I would say null) because of the way Linux handles filesystems.
  2. You got this right. That's a potential issue… that will last a few minutes. A rolling downgrade would have the same behaviour too.
  3. Yup, it's up to you to find a solution (testing for the presence of a file “dev_env” (or anything) would make it easy).
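
For point 3, a small guard just before the copy in the sketch above would do it; “dev_env” and the DEST variable are the same placeholders as before:

```sh
# Development: the web root is a bind mount, so never overwrite it.
if [ -f "$DEST/dev_env" ]; then
    echo "dev_env marker found, skipping file sync"
    exec "$@"
fi
```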

Thanks for your input. I’ll have to weigh up the pros and cons of each solution.