Creating an image for a PHP app with multiple purposes

Hi.
I’ve created an image based on php-fpm that embeds the source of a PHP application, with dependencies managed with Composer.

I have a question about best practices: the application needs other processes running (workers…), and I was going to implement them as separate services. My question is: is it better to create another image, based on php-cli, and run the services from there, or to start them from a lightweight image that runs commands via docker compose run or SSH?

Thanks

What is a worker in this context? Are they PHP FPM worker processes? PHP FPM can run its own workers; you don’t need to create new containers for that. Processes inside containers are allowed to fork new processes and threads. When we say you should run only one process in a container, we only mean that you should not think of a container as a virtual machine in which you usually have systemd to handle as many applications as you need. So the rule is not to have only one process in a container, but to run only one application. Sometimes you may want to ignore that rule and use a process manager like s6-overlay or supervisor as an alternative to systemd, which is usually not recommended to run in containers.

Definitely not SSH. If you need to run a command in an already running container, you can use docker exec. SSH would be another service: if you can run that, you can run the service for which you installed SSH. However, I am not sure I understood the question. Why “docker compose run” or SSH? How is SSH an alternative to docker compose run?
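A quick sketch of the difference (the service name `app` and the script path are made up):

```bash
# Run a command inside the already running "app" container:
docker compose exec app php bin/some-command.php

# "docker compose run" starts a NEW container from the service definition instead:
docker compose run --rm app php bin/some-command.php
```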

With workers I mean php commands I usually run as systemd services or via cron.

I was thinking about the approach I found here, where the author runs a container with cron, which then reloads nginx (another container) via docker compose exec.

Indeed, using a service manager in the same image could be easier, but if in the future I’d like to scale, every replica would then run its own copy of each worker. Using a separate service for the workers could be more appropriate in that case; what do you think?
In this case, using docker exec seems the more appropriate solution.
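For the workers, a minimal compose sketch of what I mean (the service names, image name, and script path are placeholders):

```yaml
services:
  app:
    image: myapp:latest          # the php-fpm based image with the source
  worker:
    image: myapp:latest          # same image, different command
    command: php bin/worker.php
# scale only the workers: docker compose up --scale worker=3
```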

Otherwise, I’d have to create an image from php-cli and run the commands from there. This is probably the best solution IMHO, but what holds me back is that I’d have to build almost the same image twice, once for FPM and once for CLI. Maybe I should build the FPM image on top of the CLI one, but then I’d have to figure out how to install FPM on top of the CLI image, as I haven’t found the Dockerfile.
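One idea might be a single Dockerfile with a build argument selecting the base variant; just a sketch, the extension and paths are made up:

```dockerfile
# One Dockerfile for both variants, selected at build time.
ARG PHP_VARIANT=fpm
FROM php:8.3-${PHP_VARIANT}
COPY . /var/www/html
# Hypothetical shared setup; docker-php-ext-install ships with the official images.
RUN docker-php-ext-install pdo_mysql
# Build the CLI variant with:
#   docker build --build-arg PHP_VARIANT=cli -t myapp:cli .
```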

You mean systemd timers as an alternative to cron, or systemd for constantly running services?

The problem is that I still don’t know what those workers are doing. You wrote that the workers are “php commands”, but that is not enough information to decide. You need to think about what you gain by running those commands in separate containers. Does implementing this best practice just complicate maintenance, or does it have actual value for you?

PHP FPM images contain the CLI. I created my own PHP FPM images on Docker Hub, but I have very rarely used a CLI version of the official PHP images, and only when I wanted to demonstrate something and didn’t want to confuse the audience.
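You can verify it quickly; for example with the 8.3 tag:

```bash
# The php CLI binary is available inside the official FPM image:
docker run --rm php:8.3-fpm php -v
```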

The link to the supported tags is in the description of the official PHP image on Docker Hub. Here is a direct link to the list:

https://github.com/docker-library/docs/blob/master/php/README.md#supported-tags-and-respective-dockerfile-links

Each tag is a link to a Dockerfile.

I do have both workers started with systemd and scripts scheduled with cron.

All of these are PHP scripts.
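In Docker, I suppose the cron entries could become host crontab lines that run one-off containers; a made-up example (the path and script name are placeholders):

```
*/5 * * * *  cd /srv/myapp && docker compose run --rm app php bin/cleanup.php
```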

:man_facepalming: thanks!

In the end, the best option is probably to have a separate service image running something like you suggested (s6 or supervisord) which calls all the required scripts.
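Something like this minimal supervisord sketch, for example (program names and paths are placeholders):

```ini
[supervisord]
nodaemon=true            ; keep supervisord in the foreground as the container's main process

[program:worker-one]
command=php /var/www/html/bin/worker-one.php
autorestart=true

[program:worker-two]
command=php /var/www/html/bin/worker-two.php
autorestart=true
```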

Yes, I know. Repeating it will not help me understand it better :slight_smile: I can run a PHP script which does <?php echo "hello"; ?> and another which uses 4 gigabytes of RAM and runs for an hour. If everything runs in one container, you can only set memory and CPU limits at the application level or in the process manager, since each service can have different requirements. That means Docker will not help you when the application or the process manager has a bug and uses too much memory or CPU, unless you also set limits for the container, so that at least the other containers can have enough resources.
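With separate containers, Docker itself can enforce the limits per service; a sketch with invented names and values:

```yaml
services:
  worker:
    image: myapp:latest
    command: php bin/heavy-worker.php
    mem_limit: 512m   # hard memory cap for this container
    cpus: "0.5"       # at most half a CPU core
```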

Another requirement can be updating different components independently of each other. You can do that with multiple containers, but not with multiple services in one container.

Another point of view is security. If you want to make it harder for an attacker to access data, you run everything in separate containers and mount only the data each container needs, or set firewall rules (or rules in the application config) so only specific containers can access it via HTTP.
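In compose that can be as simple as mounting different volumes per service (names invented):

```yaml
services:
  app:
    image: myapp:latest
    volumes:
      - uploads:/var/www/html/uploads:ro   # FPM only reads the uploads
  worker:
    image: myapp:latest
    command: php bin/process-uploads.php
    volumes:
      - uploads:/var/www/html/uploads      # the worker also writes them
volumes:
  uploads:
```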

This is just an example, but it is why I wrote that I didn’t know what those scripts were doing: it can be important to know before you decide which solution is best.

You don’t need to share it with us, just keep in mind what I wrote above. :slight_smile:
