How to run a cron job inside a container (Alpine)

Hi there,

I have a script running every day at midnight that computes leaderboards. Here is the relevant part of the Dockerfile:

FROM alpine
...
RUN mkdir /etc/periodic/midnight
ADD ./scripts/compute_leaderboards /etc/periodic/midnight/compute_leaderboards
RUN chmod +x /etc/periodic/midnight/compute_leaderboards
RUN crontab -l | { cat; echo "00    00       *       *       *       run-parts /etc/periodic/midnight"; } | crontab -
CMD ['crond', '&&', 'gulp']

When running through exec I get the expected behaviour:

$ docker exec -ti server_web_1 crond -f
Wed Mar 23 2016 10:32:00 GMT+0000 (UTC) Starting computation of leaderboards...
Wed Mar 23 2016 10:32:09 GMT+0000 (UTC) Done computing leaderboards.

I suspect the last line of my Dockerfile is the culprit, but I haven’t found the trick yet…

Am I missing something?

It runs the cron daemon with the literal additional arguments “&&” and “gulp”: in the exec form of CMD there is no shell, so && is not interpreted as an operator. When the cron daemon exits, the container will exit.

I would probably try to restructure this to use the host’s cron to “docker exec” the script. If I was feeling more ambitious, I’d break the script out into its own container and use a shared data volume to ensure it could access the same data.
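For example, a crontab entry on the host along these lines could trigger it (a sketch; the container name is taken from your docker exec example, adjust to yours):

0 0 * * * docker exec server_web_1 run-parts /etc/periodic/midnight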

@dmaze has it: your CMD is not set correctly. If you are really set on running crond itself in a container, you will need to get it to block (stay in the foreground) rather than exit before the job runs.

Hi @nathanleclaire,

That’s what I understood as well, which is why I used crond: it starts the daemon in the background.

However, it seems like crond is killed if it is not run in foreground mode…

Do you know of any other way to have two processes running in the same container?

Cheers,

Chanto

Thanks a lot @dmaze,

If I can’t find a way to have it run in the same container, I’ll wrap it up in its own container as you suggested.

Cheers,

Chanto

@cadesalaberry You don’t need to run crond in the background, just crond -f (-f for “foreground”). The Docker daemon handles the daemonization of that process, just like if you ran a web server in a container (you can see it in subsequent invocations of docker ps after running it).

Running bare crond, as you said, immediately forks the process into the background and causes the container to exit (at the time of writing, PID 1 does not wait on its child processes inside Docker containers).

Alternatively, you could do something like crond && tail -f /var/log/my.log to ensure that docker logs will return whatever has been written to the log file while the container is running.
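In Dockerfile terms that would be something like this (a sketch; /var/log/my.log stands in for whatever your script writes to):

CMD ["sh", "-c", "crond && tail -f /var/log/my.log"]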


The general solution to running more than one process is to run a shell.

CMD ["sh", "-c", "crond && gulp"]

(Note the double quotes: the exec form of CMD is parsed as JSON, so single quotes as in your original CMD will not work.)

… or put your stuff in a separate script file and chmod +x it:

#!/bin/sh
# Start the cron daemon; it forks into the background by default
crond
# Keep the container alive by running gulp in the foreground
gulp

and run it with CMD ["/yourscript"] (the path needs to be correct of course; this assumes you saved the script as /yourscript and set its permissions, perhaps in the environment where you build the container before you COPY in the file).
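For completeness, the matching Dockerfile lines might look like this (a sketch; the script name and paths are placeholders):

COPY ./yourscript /yourscript
RUN chmod +x /yourscript
CMD ["/yourscript"]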

I found a full working example here.

Dockerfile example:

FROM alpine

# Copy the script that should be run
COPY ./myawesomescript /usr/local/bin/myawesomescript
# Run the script every minute
RUN echo '*  *  *  *  *    /usr/local/bin/myawesomescript' > /etc/crontabs/root

CMD ["crond", "-l", "2", "-f"]
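To try it out, something along these lines should work (the image tag is my own placeholder):

docker build -t cron-example .
docker run --rm cron-example

Because crond runs in the foreground (-f), the container keeps running and the crontab entry fires every minute.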

An alternative to running cron jobs (if you only care about the interval between runs) could be a restart policy. Note that deploy.restart_policy only takes effect when the stack is deployed to a Swarm:

Sample docker-compose.yml:

version: '3.3'
services:
  letsencrypt:
    image: czerasz/letsencrypt-companion
    deploy:
      restart_policy:
        condition: any
        # Wait a day between restarts (Compose durations have no 'd' unit, so use 24h)
        delay: 24h

  ...
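If Swarm is not an option, a plain shell loop in the container gives similar interval-based behaviour (my own sketch, not from the example above; the script path is a placeholder):

CMD ["sh", "-c", "while true; do /usr/local/bin/myawesomescript; sleep 86400; done"]

Unlike cron, this measures the interval from whenever the container starts rather than from a wall-clock schedule.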