Auto starting a container on system reboot

I am new to docker and I have inherited a set up that I have to maintain. The instructions left by the person who quit say to first do a docker run, then remove a lock file, start a database server, and finally run a python script.

I want to set it up so that all of that happens when the machine boots. I’ve read about the --restart flag on docker run, and I have these questions:

- Would specifying that cause the container to start at boot time?
- If so, how does that work, i.e. where is the flag stored so it knows to restart at boot?
- If not, how can I start it at boot?
- I would guess that only starts the container. How would I start everything at boot?

Thanks!

Containers with a restart policy will follow that policy when the docker daemon is launched. As long as your system launches the docker daemon on boot, your containers will come up according to their restart policy. The restart policy is stored as part of the container’s metadata when the container is created.
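For example, something along these lines (the container and image names here are just placeholders):

sudo docker run -d --restart=always --name myapp myimage

Because the policy is stored with the container, you can check it afterwards:

sudo docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' myapp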

I’m not sure what you mean by this:

I would guess that only starts the container. How would I start everything at boot?

Hi, you can also create a start-up script for your Docker containers so that it integrates with systemd, upstart, or whatever your distro of choice uses. The local init system will then take care of your containers.
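For example, on a systemd-based distro a unit file along these lines could work, assuming you have already created a container named capdata (the name is just for illustration) with docker run --name capdata …; save it as /etc/systemd/system/capdata.service and run systemctl enable capdata:

[Unit]
Description=capdata container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a capdata
ExecStop=/usr/bin/docker stop capdata

[Install]
WantedBy=multi-user.target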

Kind Regards,
Beet

I mean everything that I also have to do besides the ‘docker run’ to make the app usable (remove the lock file, start the database server, and run a python script).

Anything like that which needs to run after you issue a docker run should be handled by an ENTRYPOINT script.

Thanks for the reply. I read about ENTRYPOINT, but I found the docs very confusing. I have a Dockerfile, and at the end I put 3 RUN commands for the 3 commands I have to execute after I do the docker run. But they did not seem to get executed. Was that the correct way to do it? If not, how would I do it?

RUN in a Dockerfile specifies commands that will be run during the docker build step, not at docker run.

An ENTRYPOINT script is something that will take in your CMD as an argument and then figure out how to deal with it.

For example, if I have the following in an entrypoint.sh file:

#!/bin/bash
exec "$@"

and I have the following in my Dockerfile:

ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["date"]

Then when I do a docker run on this image, docker will execute /entrypoint.sh date.

The entrypoint script will see date as the argument, and due to the exec "$@" line, it will call exec date. The date command will be launched and do its thing.

Between the #!/bin/bash and exec "$@" lines, I can put anything else in that script. I could have the script attempt to contact a database, load data, etc.
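For instance, a minimal sketch of that pattern (the lock file path and helper script below are made up for illustration):

#!/bin/bash
set -e
# hypothetical setup steps that run before the CMD
rm -f /tmp/app.lock              # clear a stale lock file
/usr/local/bin/start-db.sh       # start a database the app depends on
exec "$@"                        # then hand control to whatever CMD was passed in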

Here’s an example from the official library, the WordPress official image: https://github.com/docker-library/wordpress/tree/master/apache

Take a look at the Dockerfile and the docker-entrypoint.sh. The entrypoint script there will look for a WordPress install in a volume, and if it isn’t there, it will actually install WordPress to that location. It’ll also try to talk to the WordPress database before finally executing the apache command.

Thanks again. This makes it much clearer, but I still cannot get it to work.

This is the run command I use:

sudo docker run -v /home/lmartell/capdata/:/home/elucidbio/src -v /projects/cap_data:/home/elucidbio/data -t -i elucidbio/capdata bash

At the end of my Dockerfile (which is in /home/lmartell/capdata/) I put this:

ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

And in my entrypoint.sh file I have this:

#!/bin/bash
rm -f /home/elucidbio/data/knowledge/system.lock
/home/elucidbio/stardog/bin/stardog-admin server start
/home/elucidbio/src/api/capdata.py
exec "$@"

But when I do the docker run, these commands are not being executed. What am I doing wrong?

Add a set -x line to your script so that bash will echo out the commands it runs.
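For example, right under the shebang:

#!/bin/bash
set -x    # print each command before it runs, so you can see how far the script gets
# ... the rest of your entrypoint commands, unchanged ...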

Nothing is output at all. I don’t think it’s running the script.

One other thing - when I run capdata.py by hand I have to enter Ctrl-p Ctrl-q or else capdata.py exits when I exit the docker shell. I’ve tried running it in the background, but it still exits. So once I do get the entrypoint script working I have to also deal with that issue.

I’m trying to understand the run command used:

sudo docker run -v /home/lmartell/capdata/:/home/elucidbio/src -v /projects/cap_data:/home/elucidbio/data -t -i elucidbio/capdata bash

If I understand this correctly, this mounts /home/lmartell/capdata/ as /home/elucidbio/src. Now, the Dockerfile is in /home/lmartell/capdata/ but then elucidbio/capdata is given and that does not exist. In the mounted volume the Dockerfile is in /home/elucidbio/src. So I’m confused about exactly what this command means and does.

This command is launching a container using the elucidbio/capdata image. If you have a Dockerfile, then you will need to perform the correct docker build step to turn it into an image. Once you have the image, then you will do a docker run command that references that image.

So since I modified my Dockerfile to add the entrypoint, I have to create a new image now?

A Dockerfile specifically is a set of instructions that you feed into the docker build command. If you change the Dockerfile, you’ll need to run the docker build command for it again.
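For example, something like this should rebuild the image from the directory containing your Dockerfile (the tag and path are taken from your earlier commands, so adjust them if yours differ):

sudo docker build -t elucidbio/capdata /home/lmartell/capdata/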

So I did a docker build and it spewed a lot of errors building numpy, but then at the end it said it successfully built numpy. Then when I did a docker run on that image it did run the entrypoint file and execute the commands in that, and the app started and seems to work fine. I am concerned about the build errors though.

But putting that aside for a moment, I still have these questions:

- After the entrypoint file runs, it drops me into a shell, and if I exit that shell the container shuts down. Before, when I was running the commands manually, I would do Ctrl-p Ctrl-q to get out of the docker shell properly without killing the script. How can I do that with the setup I have now?

- To get this all to happen when the machine boots, is adding --restart=always to my run command the only thing I have to do?

Containers are a fancy way to run a process. Whatever command you tell it to run becomes PID 1 of that container. Once that process exits, the container is considered stopped.

Since it probably isn’t a shell that you want running in the background, what command would you like to keep running? Maybe pass that in instead of a command that starts a shell.

What Ctrl-p Ctrl-q does is detach your docker client from the running container, and the container continues to run. You can start a container in detached mode by passing in the -d argument to the docker run command.

You can indeed set a restart policy. Personally, I tend to use --restart=unless-stopped instead of --restart=always.
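Putting those together, a run command along these lines might be what you want, assuming your entrypoint ends up exec-ing the long-running process you actually care about (the volume mounts are copied from your earlier command; unless-stopped needs a reasonably recent Docker):

sudo docker run -d --restart=unless-stopped -v /home/lmartell/capdata/:/home/elucidbio/src -v /projects/cap_data:/home/elucidbio/data elucidbio/capdata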

I got:

docker: invalid restart policy unless-stopped.

That restart policy was added in Docker 1.9.0. What version of Docker are you using?

$ sudo docker -v
Docker version 1.7.1, build 786b29d

I would definitely recommend you install the latest version. 1.11.0 was just released yesterday.
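If you’re on Linux, one common way to get a current release is Docker’s convenience script (check your distro’s docs first, since it replaces the distro package):

curl -sSL https://get.docker.com/ | sh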