Running some extra services in a tricky way

I’m using Docker for Synology, which is very messy.

The problem here is: I have a container acting almost as a virtual machine, so I cannot simply rebuild it from a new image (it has a ton of configuration inside the container, and making a new image would be a massive hassle).

The problem is that the image this container was created from does not run the cron service by default. So every time I reboot the container, I have to go inside and start cron manually.

I need a tricky way to start cron when the container starts, and I would like to see if anyone has found a solution.

I’m not a big Docker pro, and I don’t know exactly how things work inside containers.

My first idea was to look for the init.d script of a service that was already running on boot, like MySQL, and simply add a “service cron start” to its start case.

But this doesn’t seem to work. Apparently Docker containers don’t run init.d scripts on boot (??)

But I’m sure some tricky method could sort out this issue. If anyone has other ideas, I would be grateful to hear them.

Have you tried searching for “docker multiple processes” on Google?

The first result is from the documentation:

I have a tutorial too. It was about Linux signals, but I mentioned supervisor.

Yes, this is why I wrote a little introduction.

I cannot create a new image, and hence a new container, because the container is acting almost as a VM. Those solutions require modifying the image’s Dockerfile and recreating a new container from scratch.

Therefore the solutions I found there don’t work in my scenario.

I wanted to know how containers start up, to see if I can intercept, from within the container, any startup file and gracefully start the services with a slightly hacky method.

After a lot of thinking, I believe this is the only method that may work in my case.

PS: I should have used a VM manager instead of Docker, but back in the day we thought this was the easiest way to get the system we use working fast, because it already provided a preconfigured Docker image. Now, two years later, we regret the decision, but this is what we have, unfortunately :frowning:

I can give you ideas, but I don’t use Synology, so I can’t really help you with that. I also have to note for future readers that I don’t normally recommend any of the ideas below. The best option would be copying everything out of the container and creating a new image that contains everything. You will need to do that sometime in the future anyway; otherwise, if anything happens to the container and you can’t start it, or it somehow gets deleted, you will lose everything that was in it. Even if it were a virtual machine, you would need backups and possibly a mounted network filesystem independent of the VM.
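Copying the data out could be done with docker cp, for example. A minimal sketch; the paths are placeholders, not the actual locations your application uses:

# hypothetical paths; copy the configured state out of the container to the host
docker cp mycontainer:/etc/myapp ./backup/etc-myapp
docker cp mycontainer:/var/lib/myapp ./backup/var-lib-myapp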

First idea: docker exec

If you had Docker CE on Linux, you could create a script which starts the container and use docker exec to start additional processes:

docker start mycontainer
# -d detaches, so the exec'd process runs in the background inside the container
docker exec -d mycontainer crond

Again, this is just an idea. I don’t know the required parameters of crond.
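To check whether it worked, something like the following might do (assuming ps is available in the container):

# list the processes inside the container and look for the cron daemon
docker exec mycontainer ps aux | grep -i cron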

Problems:

  • I don’t know how you could run docker commands in a script on Synology
  • Even if you can run the script, when you stop the container, only the original process gets the stop signal; everything else is just killed, which could lead to data loss or some other kind of inconsistency…

Second idea: overwriting the original start script

Assuming the original image had CMD instruction like this:

CMD ["/app/start.sh"]

You could overwrite start.sh. That way you could also install Supervisor and let it handle all the processes. The start script would look like this:

#!/bin/sh

exec supervisord --nodaemon -c "/etc/supervisor/supervisord.conf"

This way, thanks to the exec command, supervisord becomes the container’s main process and can handle stop signals, so the managed processes can finish their job before stopping.
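For illustration, a minimal supervisord.conf could look like the sketch below; the program names and commands are assumptions, since they depend on what actually runs in your container:

[supervisord]
; keep supervisord in the foreground so the container stays alive
nodaemon=true

[program:app]
; hypothetical: the container's original main process
command=/app/application
autorestart=true

[program:cron]
; run cron in the foreground so supervisord can supervise it
command=cron -f
autorestart=true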

Third idea: overwrite a binary

Let’s say there was no start script, just a binary like:

CMD ["/usr/local/bin/application"]

Then you could rename application to application-orig and create a shell script (without extension, under the same name as the binary) with the same content I recommended in my second idea. Of course, you don’t have to use Supervisor, but if you don’t use a process manager, you will still have a problem with properly stopping the container.
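As a sketch, the non-Supervisor variant could look like this; whether “service cron start” works depends on what the image provides:

# inside the container: move the original binary aside
mv /usr/local/bin/application /usr/local/bin/application-orig

# create a wrapper script under the original name
cat > /usr/local/bin/application <<'EOF'
#!/bin/sh
# start the extra service first (hypothetical; depends on the image)
service cron start

# hand the main process role back to the original binary, forwarding all arguments
exec /usr/local/bin/application-orig "$@"
EOF
chmod +x /usr/local/bin/application

Note the caveat above: without a process manager, only the main process receives the stop signal.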

Fourth idea: docker commit

You can save the current container as an image using:

docker commit mycontainer myimage:2023-08-01

Then create a new Dockerfile and use that as a base image:

FROM myimage:2023-08-01

CMD ["supervisord", "--nodaemon", "-c", "/etc/supervisor/supervisord.conf"]

I hope you can find something helpful in the above ideas.


Great ideas, thanks a lot for the detailed answer.

I’m going to test them all to see if I can get one of them working. I don’t even know which image Synology’s Docker is using, because it uses a somewhat custom mechanism for images (it’s JSON with some slight differences and no indenting whatsoever, because it’s not meant to be modified anywhere other than through their GUI).

Example:

But inside that I’ve found an ENTRYPOINT.

Not 100% sure, but maybe I could modify that entrypoint file and put the script there.

# cat /usr/local/bin/docker-entrypoint.sh 
#!/bin/sh
set -e

# Run command with node if the first argument contains a "-" or is not a system command. The last
# part inside the "{}" is a workaround for the following bug in ash/dash:
# https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=874264
if [ "${1#-}" != "${1}" ] || [ -z "$(command -v "${1}")" ] || { [ -f "${1}" ] && ! [ -x "${1}" ]; }; then
  set -- node "$@"
fi

exec "$@"

Found another Entrypoint at /home/Shinobi/Docker/init.sh

And on the next line there is the CMD. The entrypoint is just a general script that executes the CMD and passes options to the node command.
You can change the entrypoint, but the value of CMD would still be passed to the entrypoint as arguments, which you can of course ignore.
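For illustration (the CMD value here is an assumption, not the actual one from the Shinobi image):

ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["node", "app.js"]

# on startup Docker concatenates the two, so the container effectively runs:
#   /usr/local/bin/docker-entrypoint.sh node app.js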

If you want to know more about the ENTRYPOINT and CMD: Constructing commands to run in Docker containers - DEV Community

I don’t know what that means. There can be only one entrypoint.


But as you observed, it seems that the CMD is simply calling the “node” binary, so the entrypoint is probably just executing node. Your idea is to replace the node binary with a shell script called node and call both my script and node from there, right?

Also, I’ve tried adding some commands to the /usr/local/bin/docker-entrypoint.sh file, but they don’t seem to be executed.

I’m going to move on to another of your ideas.

The entrypoint and the CMD run when the container starts; in fact, they are just parts of the final command that will be isolated. If you change the entrypoint, you still need to restart the container. And if you put the additional commands after the exec command, your commands will be ignored, because exec replaces the shell with the main process.
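As a sketch, the extra command has to go before the final exec in the entrypoint (whether a cron service is even available depends on the image, so the exact command below is an assumption):

#!/bin/sh
set -e

# extra commands go BEFORE the final exec; anything after it never runs,
# because exec replaces the shell with the main process
service cron start  # hypothetical; depends on what the image provides

exec "$@"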


I put the commands before the exec in the entrypoint.
Here: