Dockerized PHP Application Architecture Best Practices

Dear Community,

I’m pretty new to Docker. I’ve played with it a lot in my development environment, but I’ve only tried to deploy a real app once.

I’ve read tons of documentation and watched dozens of videos, but I still have a lot of questions.
I do understand that Docker is just a tool that can be used in so many different ways, but now I’m trying to find the best way to develop and deploy web apps.

I’ll use a real PHP app as a case study to make my question more concrete and practical.
To keep it simple, let’s assume I’m building a very simple PHP app, so I’ll need:

  1. Web Server (nginx)
  2. PHP Interpreter (php-fpm or hhvm)
  3. Persistent storage for SESSIONs
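
For reference, those three pieces might be wired together with a docker-compose file along these lines. This is only a sketch: the image tags, the session path, and the volume name are my assumptions, not part of the original setup.

```yaml
# docker-compose.yml — minimal sketch (compose v2 format)
version: "2"
services:
  web:
    image: nginx:1.9          # nginx config must proxy *.php to php:9000
    ports:
      - "80:80"
    links:
      - php
  php:
    image: php:5.6-fpm
    volumes:
      - sessions:/var/lib/php/sessions   # persistent SESSION storage
volumes:
  sessions: {}
```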

The best example/tutorial I could find was this one-year-old post. Dylan proposes this kind of structure:

He uses a data-only container for the whole PHP project’s files and logs, and docker-compose to run all these images with the proper links. In the development environment I mount a host directory as a data volume, and for production I copy the files directly into the data-only image and deploy.

This is understandable. I do want to share data between nginx and php-fpm: nginx needs access to static files (.img, .css, .js…) and php-fpm needs access to the PHP files. And both services stay separated, so they can be updated/changed independently.

The data-only container shares a data volume that is linked to nginx and php-fpm via the --volumes-from option.
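
In compose terms (v1 syntax, where --volumes-from maps to volumes_from), the pattern being described looks roughly like this; all image and service names here are made up for illustration:

```yaml
data:
  image: my/app-data     # its Dockerfile declares VOLUME /var/www/html and copies the code there
  command: "true"        # the container exits immediately; the volume keeps existing
web:
  image: nginx:1.9
  ports:
    - "80:80"
  volumes_from:
    - data
  links:
    - php
php:
  image: php:5.6-fpm
  volumes_from:
    - data
```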

But as I understand it, there’s a problem with data-only containers and the -v flag.
The official Docker documentation says a data volume is a specially-designated directory meant to persist data: “Data volumes persist even if the container itself is deleted.” So this solution is great for data I don’t want to lose, like session files, DB storage, logs, etc. But not for my code files, right? I do want to change my code files. I want to deploy changes without rebuilding the nginx and php-fpm images.

Another problem: when I tried this approach, I could not deploy code changes until I had stopped all running containers, removed them and their images, and rebuilt everything from scratch. Just rebuilding and redeploying the data-only image did nothing!

I’ve seen other implementations where the code is stored directly in the interpreter container, but that’s not an option because nginx needs access to those files too.

The question is: what are the best practices for where to put my project code files, and how should I deploy changes for this kind of app?



+1 on this one… And to add a few things to the question:

  • Is there any real benefit to having the containers laid out this way, as opposed to php+nginx containers? (Yes, I know I don’t need PHP for static files, but then I could have nginx-only and nginx+php container flavors…)
  • What (if any) is the real benefit of having PHP and the web server in separate containers?
  • Has anyone deployed to production like this, and have you had cases of scaling the services independently (e.g. 1 nginx → 3 php containers, or vice versa, in order to use each container to its fullest)?

Data volumes are for persistence and sharing, so they can be used to scale your service horizontally as well as to keep your data.

Horizontal scaling and separation of concerns. You don’t have to deal with configuring nginx together with PHP, and updates/upgrades are easier if you only deal with one thing at a time. You can also decide to replace nginx with haproxy.

Many times, but most of the time nginx was used as a reverse proxy, not for load balancing. However, I once used:

reverse_proxy (vhost) → nginx (conf) → (2) php-fpm → mysql

The in-between nginx held specific configuration I didn’t want in the reverse proxy, and it makes it easier to load-balance at that level, but I could have done it with a php-fpm+nginx container if I had only one php-fpm container.
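
That topology could be sketched in compose terms roughly as follows; all names, tags, and ports are hypothetical:

```yaml
proxy:
  image: nginx:1.9       # vhost-level reverse proxy
  ports:
    - "80:80"
  links:
    - web
web:
  image: nginx:1.9       # holds the app-specific configuration
  links:
    - php
php:
  image: php:5.6-fpm     # the setup above ran two of these
  links:
    - db
db:
  image: mysql:5.6
```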

Hi David, thanks for your time! I’ll try to answer all your points, with some additional questions/clarifications…

  • Right, agreed on the application/code container… one option is also using config-file containers… what do you think? Any cons?
  • We are using haproxy as the load balancer… nginx is solely a web server for static content, as well as for serving PHP from php-fpm. Load balancing between web servers, SSL termination, etc., is already handled by haproxy.
  • About using “standard” images: it would then be wise/necessary to add configuration containers in order to add the specific nginx/php-fpm confs.
  • We use haproxy (lb) → nginx (web) → php-fpm → haproxy (internal) → DB/Redis/etc.

I understand that adding conf containers would maybe be a good way to go, so we can use standard nginx/php-fpm containers… what do you think? In terms of PHP, how would we add particular build modules (phpredis, which needs to be compiled; composer; etc.)? Any experiences anyone wishes to share?
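
On compiling phpredis and pulling in composer: here is a sketch of how it could be done on top of the official php-fpm image. The pecl binary and the docker-php-ext-enable helper ship with the official images; the pinned extension version (needed for PHP 5.x compatibility) is my assumption.

```dockerfile
FROM php:5.6-fpm

# build the phpredis extension from source via PECL and enable it
RUN pecl install redis-2.2.8 \
    && docker-php-ext-enable redis

# install composer globally so dependencies can be resolved at build time
RUN curl -sS https://getcomposer.org/installer \
    | php -- --install-dir=/usr/local/bin --filename=composer
```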

Thanks for all comments and ideas!!

This is a very thoughtfully considered question, and it’s frustrating that no one has come along to answer it. It seems rather fundamental to application architecture, something I would think most people running a Dockerized application should encounter, not some edge case.

For what it’s worth @oleynikd, I think Dylan is on the right track. I have a similar need: a PHP-based app running on nginx + php-fpm, and the need to share a single, canonical codebase amongst several processes. Hard to imagine this problem hasn’t been solved, but I just can’t seem to find any discussion of it anywhere.

Docker really needs a solution for sharing non-persistent data amongst multiple containers.

Hello @klsmith,

I have not tried this, but it might be worth a go.


I’m starting to come around on the idea of “single purpose/service container” instead of “single process container”, since in truth every container is already running many OS processes to get its job done. And narrowing the focus to “single purpose” instead of “single process” seems both simpler to maintain (requiring far fewer hacky solutions to make it work) and more logical in its architecture.


I agree with the single-purpose container. Most of the time you won’t have a choice anyway, like the nginx master spawning workers.

One process per container is a huge misconception, IMO. Having nginx/apache + the PHP module and the PHP source code in one container seems like the right way to do it; you need to make sure your app is stateless, though. I’d bake the PHP code into the image rather than store anything in data volumes, too.

If you need to share layers (files) across images, use a common FROM base in your Dockerfiles, e.g., FROM nathanleclaire/phpbase:0.0.1.
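
That shared-base idea might look like the following sketch; the base image name, paths, and package choices are all assumptions:

```dockerfile
# Dockerfile.base — built and tagged as my/appbase:0.0.1
FROM debian:jessie
COPY ./src /srv/app        # the code layer both images will share
```

```dockerfile
# Dockerfile.nginx — reuses the base (and code) layers, adds only nginx
FROM my/appbase:0.0.1
RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*
CMD ["nginx", "-g", "daemon off;"]
```

```dockerfile
# Dockerfile.fpm — same base layers, adds only php-fpm
FROM my/appbase:0.0.1
RUN apt-get update && apt-get install -y php5-fpm && rm -rf /var/lib/apt/lists/*
CMD ["php5-fpm", "--nodaemonize"]
```

Because both service images start FROM the same base, Docker stores (and pulls) the shared code layer only once.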
