Next week, I’ll need to create a deployment script on Linux so I can git clone a few projects and run many docker compose build / up statements.
My current scenario is:
I’ll git clone four projects in four sub-folders
I’ll create my Docker network
In the correct sequence,
I’ll jump into the first folder, build the images, and run the containers of project 1 (four containers)
Jump into the second folder, build the images, and run the containers of project 2 (four containers)
Jump into the NodeJS folder, build the image, and run the container of project 3 (one container)
Finally, in the last PHP folder, build the image and run the container of project 4 (one container)
At the end, I’ll start a Linux script for some automation. That script requires that everything be ready to use.
Some containers will take time to come up, like the PostgreSQL one (a very big database to load) and two PHP containers (they need to set up the entire project: composer, database seeding, …).
So, my question: is it possible to “know” (and thus wait for the moment) that all containers are ready to handle connections? (I’m thinking here of the PHP-FPM container, where the last line in the log is “NOTICE: ready to handle connections”; for a MySQL container I’m not sure, but the last line is something like “socket: ‘/var/run/mysqld/mysqld.sock’ port: 3306 MySQL Community Server”.)
That way I could run the deployment script during the night and only run the final Linux script when all containers are ready.
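In pseudo-shell, the wait I have in mind could be sketched like this (a minimal sketch, assuming every container defines a Docker HEALTHCHECK; the container names are placeholders):

```shell
#!/usr/bin/env bash
# Sketch: wait until each given container reports a "healthy" status.
# Assumes every container defines a HEALTHCHECK; names are placeholders.

wait_until_healthy() {
    local container
    for container in "$@"; do
        echo "Waiting for ${container}..."
        until [[ "$(docker inspect --format '{{.State.Health.Status}}' "${container}")" == "healthy" ]]; do
            sleep 5
        done
        echo "${container} is healthy"
    done
}

# Example: wait_until_healthy postgres php-fpm nodejs
```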
Thanks @bluenobly. I already used that attribute, but (as far as I’ve understood) it just declares that a service depends on another one. It doesn’t provide a way, in a Linux script, to know that “OK, the container is ready to handle connections”.
@pankalog : I just tried this morning to start one of my projects (here, three containers) and I saw starting as the healthcheck status for a long time before seeing healthy, while the container was already ready.
Do you think I should reduce the healthcheck interval (see Compose file version 3 reference | Docker Docs) to get a faster update of the status? It’s currently set to 60 seconds. Is it bad practice to set it to, f.i., 15 seconds?
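For illustration, a healthcheck with a shorter interval, plus a start_period to cover the slow startup phase, might look like this (all values and names are illustrative, not a recommendation):

```yaml
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 15s
      timeout: 5s
      retries: 10
      start_period: 120s   # grace period for the big database load
```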
The following is described under the link in my previous post:
The solution for detecting the ready state of a service is to use the condition attribute with one of the following options:
…
service_healthy. This specifies that a dependency is expected to be “healthy”, which is defined with healthcheck, before starting a dependent service.
…
As I understand it, you can make the behavior of “depends_on” dependent on “healthcheck”. Right?
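A minimal sketch of that combination, with illustrative service and image names: the php service is only started once the db healthcheck reports healthy.

```yaml
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      retries: 12

  php:
    build: ./php
    depends_on:
      db:
        condition: service_healthy
```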
Oh… Nice here too. I didn’t see the service_healthy condition. It could be really nice (when I need to guarantee that my DB is fully loaded before running my PHP container, f.i.). I wasn’t aware of this. Thanks.
Until now, I’ve managed this with a Linux script in my different containers. I’m doing some netcat calls in my containers, like f.i.:
until nc -z -v -w30 "${env['DB_HOST']}" "${env['DB_PORT']}"; do
    echo "Info: Waiting for database connection..."
    sleep 5
done
but that part is probably no longer needed.
Back to my original question: I should be able, in a Linux script, to detect that everything is up and running. As you said, I should then use the healthcheck status of all containers.
I should be able to do this with docker ps. I already have this in one of my scripts:
docker ps -a --format "{{.Names}}" | while read -r container; do
    healthcheckStatus=$(docker inspect --format='{{json .State.Health}}' "$container" | jq -r '.Status')
    if [[ $healthcheckStatus == "healthy" ]]; then
        # green
        healthcheckStatus="\e[32m${healthcheckStatus}\e[0m"
    elif [[ $healthcheckStatus == "null" ]]; then
        # gray (no healthcheck defined for this container)
        healthcheckStatus="\e[90m${healthcheckStatus}\e[0m"
    else
        # red (starting or unhealthy)
        healthcheckStatus="\e[31m${healthcheckStatus}\e[0m"
    fi
    printf "%-60s%b\n" "$container" "$healthcheckStatus"
done
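A more compact variant of the same idea, assuming every relevant container defines a healthcheck, is to poll docker ps with its health filters and block until nothing is still starting or unhealthy (the run-automation script name is a placeholder):

```shell
# Sketch: block until no container is still "starting" or "unhealthy".
# Assumes every relevant container defines a HEALTHCHECK.
wait_for_all_healthy() {
    while [[ -n "$(docker ps --filter health=starting --filter health=unhealthy -q)" ]]; do
        echo "Still waiting for some containers..."
        sleep 10
    done
    echo "All containers with a healthcheck are healthy."
}

# Example: wait_for_all_healthy && ./run-automation.sh
```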
So I think that, right now, I have everything I need to start the automation script.
Oh sorry, I didn’t realize that the script was running on the host. I thought the script was in a container that is started last. Theoretically, you could solve your requirements with a single compose file that resolves all dependencies with depends_on and healthcheck, if your script can run in a separate container. But I’m sure I don’t understand all your requirements.
I need to run four different projects (four repositories, four sub-folders, plenty of docker-compose files, …). On the host (which will be a Linux server), I need to automate the building & up process (docker compose up to stay concise). With depends_on and service_healthy I’ll be able to make sure one project will be built correctly but, outside Docker, with my Linux script running on the server, I must then wait until all containers are healthy. And this for all projects.
Once the last one has started correctly, I will be able to run a docker run statement for that container.
To give more in-depth details: that last container (project 4) is a functional test tool called Behat. The tool will start Chrome and connect to a web interface (my NodeJS project; project 3). The web page will make API calls to a PHP application (project 2) and, finally, the API backend application will make curl calls to my first project. So I need to be sure every one of them is ready before starting my script.
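Putting the pieces of this thread together, the host-side deployment script could be sketched roughly like this (a sketch only: the repository URLs, folder names, and network name are all placeholders):

```shell
#!/usr/bin/env bash
# Sketch of the deployment flow described above; all names are placeholders.
set -euo pipefail

main() {
    # 1. Clone the four projects into sub-folders (URLs are hypothetical).
    for repo in project1 project2 nodejs-app php-behat; do
        git clone "https://example.com/git/${repo}.git" "${repo}"
    done

    # 2. Create the shared Docker network (ignore the error if it already exists).
    docker network create my-network || true

    # 3. Build and start each project in the required sequence.
    for folder in project1 project2 nodejs-app php-behat; do
        (cd "${folder}" && docker compose build && docker compose up -d)
    done

    # 4. Wait until no container is still starting or unhealthy
    #    (assumes every container defines a HEALTHCHECK).
    while [[ -n "$(docker ps --filter health=starting --filter health=unhealthy -q)" ]]; do
        sleep 10
    done

    # 5. Everything is ready: run the final automation (Behat) step here.
    echo "All containers healthy - starting automation"
}

# Invoke with: main
```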
Very interesting. Now I understand it a bit better. Unfortunately, I haven’t had enough time yet, but I wanted to build something similar with Jenkins. I looked at different approaches on how to build a CI/CD pipeline with Jenkins. I will take a closer look at Behat in this context. Thanks for that.