Docker healthcheck and Laravel

Hi all

I’m running a lot of containers (with docker compose up), and some of them are Laravel PHP applications.

My healthcheck is too basic: a simple "php --version", which is not good at all because I get a healthy status far too early.

Indeed, the container still needs to run a lot of things: composer install, php artisan commands (like database seeding) and so on.
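For reference, here is roughly what my current (too basic) healthcheck looks like in the compose file (simplified sketch; the service name is just an example):

```yaml
services:
  laravel_app:
    build: .
    healthcheck:
      # only checks that the PHP binary runs, not that Laravel is ready
      test: ["CMD", "php", "--version"]
      interval: 10s
      timeout: 5s
      retries: 3
```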

Has someone already created a healthcheck that waits until Laravel has finished its job?

Thanks!

I have no idea about Laravel, but it seems this might be what you want?

Thanks Meyay.

Thanks for taking the time to do some searching to help me.

A few days ago I also found that package while googling, immediately skipped it (to avoid installing one more dependency) and then forgot about it. Perhaps there is no other solution anyway…

My very first idea was: with docker logs I can see that the last line (for a php-fpm image) is something like “ready to handle connections”. Is it possible to grep the log from the healthcheck? I suppose not… (the log is accessible on my host, while the healthcheck runs inside the container).

You would need to expose the docker socket to the container. I would not recommend doing so, unless your application actually needs to control the docker api.

It is cleaner to solve this inside your application and expose it as an endpoint that can be queried with curl or wget.


I agree with @meyay: create a dedicated route in your application, like /healthcheck, which only responds with status 200 when your application is up and running and the DB connection is established.

Use curl or wget as the healthcheck command in the Dockerfile or docker-compose.yml to call that route.
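Something along these lines, for example (untested sketch; the port, path and timings are placeholders to adapt to your setup, and curl has to be available in the image):

```yaml
services:
  laravel_app:
    healthcheck:
      # only healthy once the dedicated route answers with HTTP 200
      test: ["CMD", "curl", "-f", "http://localhost:8000/healthcheck"]
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 60s
```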

You’re certainly right (@meyay and @bluepuma77), but right now I can’t. I have up to 10 containers (four different applications): two Laravel containers, one PHP (no framework) and one NodeJS (frontend application). I can’t build a /healthcheck into each of them right now (and, on top of that, we have different developers).

My sub-project is to docker build / docker compose up everything on an internal server (not exposed to the Internet) and, once all 10 containers are up, run a command in project 4. That command will connect to a web page of project 3. That web page will call APIs exposed by project 2, and that project will in turn connect in the background (also via an API) to project 1.

I can’t put my sub-project on hold to create, or wait for, a new endpoint in each application.

So, quick and dirty, I’ll expose my docker socket and, from inside a container, run docker logs xxxx | grep -i yyyy and wait until the searched pattern is found (in my case “ready to handle connections”).

I fully agree it’s not how it should be (I already have my colleagues’ agreement to implement a more robust approach later) but, in my use case and given the resources available in the project team, yes, I’m happy with meyay’s suggestion to expose the docker socket.

Thanks for the tip; I hadn’t thought about it.

How do you determine that the line you grep from the log is not left over from a previous container start?

How do you plan to react if any component is temporarily not available (not started yet, container restart, temporary network issues, …)?

Solutions that depend on the startup order of other services and require their permanent availability are not resilient by design.

How do you determine that the line you grep from the log is not left over from a previous container start?

My need: each night, do a git clone / git pull of the four projects on my server so I have the very latest version of everything.

Then I’ll go into each of the four folders and run docker compose down --volumes to remove the containers and their associated volumes, then docker compose build and docker compose up.
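In shell terms, the nightly part looks roughly like this (simplified sketch; the folder names are just placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

# placeholder paths; the real script loops over our four repositories
for project in /opt/project1 /opt/project2 /opt/project3 /opt/project4; do
    (
        cd "$project"
        git pull                        # get the very latest version
        docker compose down --volumes   # remove containers and their volumes
        docker compose build
        docker compose up -d
    )
done
```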

How do you plan to react if any component is temporarily not available (not started yet, container restart, temporary network issues, …)?

I’ll loop and check every five seconds for up to two minutes before stopping with a fatal error and a logfile containing everything, so that the next day someone can investigate.
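Concretely, something like this (simplified sketch; the container name, pattern and logfile path are just examples from my setup):

```bash
#!/usr/bin/env bash

container="myservice_php"
pattern="ready to handle connections"
logfile="/var/log/nightly_build.log"

# check every 5 seconds, give up after 2 minutes (24 attempts)
for attempt in $(seq 1 24); do
    if docker logs "$container" 2>&1 | grep -qi "$pattern"; then
        echo "$container is ready (attempt $attempt)" >> "$logfile"
        exit 0
    fi
    sleep 5
done

echo "FATAL: $container never reported '$pattern'" >> "$logfile"
docker logs "$container" >> "$logfile" 2>&1   # keep the full log for the next day
exit 1
```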

Solutions that depend on the startup order of other services and require their permanent availability are not resilient by design.

For sure, it’s not the cleanest way to do it but, for my use case and in my situation (the new /healthcheck feature has to be added to the project plan and resources allocated to it), I think it’s not such a bad solution right now.

Thanks for your time and sharing your thoughts.


Fair enough: your approach indeed results in a clean log state.

I just wanted to make sure you have considered the implications, which you do :slight_smile:


I just wanted to make sure you have considered the implications, which you do

Fair enough: your approach indeed results in a clean log state.

Your various answers and questions gave me a clearer picture of the problem and the (quick) solution I could come up with.

Thanks meyay


Are you spinning up any databases? Laravel migrations won’t work if the database is not ready. I also have some Laravel projects in Docker, and I always have a health check on my database service.

In my docker-compose file I also have containers depending on other containers, but in the end I don’t have to run composer install and so on, because that is done in my GitLab pipelines and my images are ready to go from the start.
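For a MySQL service, for example, it looks something like this (sketch; the image, service names and credentials are placeholders):

```yaml
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret        # placeholder credentials
    healthcheck:
      # healthy only once MySQL actually accepts TCP connections
      test: ["CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -psecret"]
      interval: 5s
      timeout: 5s
      retries: 20

  laravel_app:
    build: .
    depends_on:
      db:
        condition: service_healthy       # migrations/seeding only start once the DB is healthy
```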

Hello

Yes, two databases, and yes, the “solution” I mentioned above is the one I’ve used.

I’m then running all my scripts (up to 10 containers) and, for the Laravel ones, I’m watching the docker logs until I see “ready to handle connections”. It means that, for Laravel, the composer dependencies have been installed and php artisan commands like db:seed have completed successfully.

I’ve been using this for a few days and it works.

docker logs myservice_php | grep -i "NOTICE: ready to handle connections:" does the job.

Exposing the docker socket in the container just to read logs is not a good idea. If it works for you, that’s fine, but it isn’t best practice.

Best would be to have a health check for the application itself.

Laravel 11, for example, now has a health endpoint out of the box.
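With that, the container healthcheck can be as simple as something like this (sketch; it assumes Laravel 11’s default /up route, that curl is available in the image, and that the container itself serves HTTP; for a pure php-fpm container you would have to check through the web server instead):

```yaml
services:
  laravel_app:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/up"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
```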

You’re right; in fact, I haven’t exposed the socket: my script runs on the server and is responsible for starting all 10 containers, so I’m watching the logs on the server (and not inside a container). So nothing is exposed.

Yes, Laravel 11 comes with an embryonic healthcheck (a dummy route); everything else still has to be done, and we don’t have the time / resources for this right now.

The way I’ve implemented my script is robust, I think, and today it’s working as expected. We (the team) have already added a “healthcheck” item to our planning.