It runs the cron daemon with “&&” and “gulp” passed to it as literal extra arguments — the exec form of CMD doesn’t invoke a shell, so “&&” isn’t interpreted. When the cron daemon exits, the container will exit.
Probably I would try to restructure this to use the host’s cron to “docker exec” the script. If I were feeling more ambitious, I’d break the script out into its own container and use a shared data volume to ensure it could access the same data.
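For the host-cron approach, a sketch of a host crontab entry (the container name, script path, and schedule here are hypothetical):

# Run the script inside the already-running container every five minutes
*/5 * * * * docker exec mycontainer /usr/local/bin/myawesomescript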
@cadesalaberry You don’t need to run crond, just cron -f (-f for “foreground”, I’m assuming). The Docker daemon handles the daemonization of that process, just like if you ran a web server in a container (you can see it in subsequent invocations of docker ps after running it).
Running crond, as you said, immediately forks into the background and causes the container to exit (at the time of writing, PID 1 does not wait on its child processes inside Docker containers).
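In a Dockerfile that would be something like this (assuming a Debian-style image where the binary is named cron):

CMD ["cron", "-f"]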
Alternatively, you could do something like cron && tail -f /var/log/my.log to ensure that docker logs will return the output of what’s been written to the log file while the container is running.
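As an exec-form CMD, that could look like this (the log path is just an example):

CMD ["sh", "-c", "cron && tail -f /var/log/my.log"]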
The general solution to running more than one process is to run a shell.
CMD ["sh", "-c", "crond && gulp"]
… or put your stuff in a separate script file and chmod +x it:
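For example, a minimal script (a sketch mirroring the CMD above) saved as /yourscript:

#!/bin/sh
# crond forks itself into the background; gulp then runs in the foreground
# and keeps the container alive
crond && exec gulp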
Then run it with CMD ["/yourscript"] (the path needs to be correct, of course; this assumes you saved the script as /yourscript and properly set its permissions, perhaps in the environment where you build the container in the first place before you COPY in the file).
# Copy the script that should be run
COPY ./myawesomescript /usr/local/bin/myawesomescript
# Make sure cron can execute it
RUN chmod +x /usr/local/bin/myawesomescript
# Add a crontab entry to run the script every minute (BusyBox/Alpine crontab location)
RUN echo '* * * * * /usr/local/bin/myawesomescript' > /etc/crontabs/root
# Run crond in the foreground at log level 2
CMD ["crond", "-l", "2", "-f"]
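To try it out, something like this should work (the image and container names are hypothetical):

docker build -t cron-example .
docker run -d --name cron-example cron-example
docker logs -f cron-example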
An alternative to running cron jobs (if you only care about the interval between runs) could be a restart policy:
# Run every day
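# (a sketch: the image name and script path are hypothetical; the job runs,
# the container sleeps for a day and exits, and the restart policy starts it again)
docker run -d --restart always myjob-image \
  sh -c "/usr/local/bin/myawesomescript && sleep 86400"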