I am new to Docker and I have inherited a setup that I have to maintain. The instructions left by the person who quit say to first do a docker run, then remove a lock file, start a database server, and finally run a Python script.
I want to set it up so that all of that happens when the machine boots. I’ve read about the --restart flag on docker run, and I have these questions:
- Would specifying that cause the container to start at boot time?
- If so, how does that work, i.e. where is the flag stored so it knows to restart at boot?
- If not, how can I start it at boot?
- I would guess that only starts the container. How would I start everything at boot?
Containers with a restart policy will follow that policy when the Docker daemon is launched. As long as your system launches the Docker daemon on boot, your containers will come up according to their restart policy. The restart policy is stored in the container’s metadata at the time the container is created, so there is nothing extra you need to keep track of yourself.
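For example (the container name and image here are just placeholders):

```
# The policy is recorded when the container is created
docker run -d --restart=always --name myapp some-image

# On reasonably recent Docker versions it can also be changed on an existing container
docker update --restart=always myapp

# You can see where it is stored in the container metadata
docker inspect --format '{{ .HostConfig.RestartPolicy.Name }}' myapp
```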
I’m not sure what you mean by this:
I would guess that only starts the container. How would I start everything at boot?
Hi, you can also create a start-up script for your Docker containers, so they are integrated with systemd, upstart or whatever init system your distro of choice uses. The local init system will then take care of your containers.
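If you want the init system to manage the container directly, one common pattern on systemd distros looks roughly like this; the unit name, container name and paths are placeholders, so adjust them to your setup:

```
# Hypothetical unit that starts an already-created container by name
sudo tee /etc/systemd/system/capdata.service > /dev/null <<'EOF'
[Unit]
Description=capdata container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a capdata
ExecStop=/usr/bin/docker stop capdata

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable capdata.service
```

That said, the simpler route is usually to enable the Docker service itself at boot and let a restart policy on the container do the rest.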
I mean everything that I also have to do besides the ‘docker run’ to make the app usable (remove the lock file, start the database server, and run a python script).
Thanks for the reply. I read about ENTRYPOINT but I found the docs very confusing. I have a Dockerfile, and at the end I put 3 RUN commands for the 3 commands I have to execute after I do the docker run, but they did not seem to get executed. Was that the correct way to do it? If not, how would I do it?
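RUN lines only execute while the image is being built; they are not run again when you start a container, which is why nothing happened at docker run time. What you want for commands that should run at start time is an ENTRYPOINT. As a rough sketch (the base image, file names and the date command are just for illustration), a minimal Dockerfile could look like this:

```
FROM ubuntu:16.04
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["date"]
```

and entrypoint.sh could be as small as:

```
#!/bin/bash
# anything that must happen every time the container starts goes here
exec "$@"
```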
Then when I do a docker run on this image, docker will execute /entrypoint.sh date.
The entrypoint script will see date as the argument, and due to the exec "$@" line, it will call exec date. The date command will be launched and do its thing.
Between the #!/bin/bash and exec "$@" line, I can put anything else in that script. I could have the script attempt to contact a database, load data, etc.
Take a look at the WordPress image’s Dockerfile and its docker-entrypoint.sh. The entrypoint script there will look for a WordPress install in a volume, and if it isn’t there, it will actually install WordPress to that location. It’ll also try to talk to the WordPress database before finally executing the Apache command.
Nothing is output at all. I don’t think it’s running the script.
One other thing - when I run capdata.py by hand I have to enter Ctrl-p Ctrl-q or else capdata.py exits when I exit the docker shell. I’ve tried running it in the background, but it still exits. So once I do get the entrypoint script working I have to also deal with that issue.
If I understand this correctly, this mounts /home/lmartell/capdata/ as /home/elucidbio/src inside the container. Now, the Dockerfile is in /home/lmartell/capdata/, but then elucidbio/capdata is given, and that doesn’t exist as a path. In the mounted volume the Dockerfile is at /home/elucidbio/src. So I’m confused about exactly what this command means and does.
This command is launching a container using the elucidbio/capdata image. If you have a Dockerfile, then you will need to perform the correct docker build step to turn it into an image. Once you have the image, then you will do a docker run command that references that image.
A Dockerfile specifically is a set of instructions that you feed into the docker build command. If you change the Dockerfile, you’ll need to run the docker build command for it again.
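Concretely, something along these lines; the paths and image tag come from your earlier command, so double-check them against what you actually have on disk:

```
# Build (or rebuild) the image from the directory that holds the Dockerfile
cd /home/lmartell/capdata
docker build -t elucidbio/capdata .

# Then run a container from the resulting image, with the volume mount
docker run -v /home/lmartell/capdata:/home/elucidbio/src elucidbio/capdata
```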
So I did a docker build, and it spewed a lot of errors while building numpy, but at the end it said it successfully built numpy. Then when I did a docker run on that image, it did run the entrypoint file and execute the commands in it, and the app started and seems to work fine. I am concerned about the build errors, though.
But putting that aside for a moment, I still have these questions:
- After the entrypoint file runs, it drops me into a shell, and if I exit that shell the container shuts down. Before, when I was running the commands manually, I would do Ctrl-p Ctrl-q to get out of the docker shell properly without killing the script. How can I do that with the setup I have now?
- To get this all to happen when the machine boots, is adding --restart=always to my run command the only thing I have to do?
Containers are a fancy way to run a process. Whatever command you tell the container to run becomes PID 1 of that container. Once that process exits, the container is considered stopped.
Since it probably isn’t a shell that you want running in the background, what command would you actually like to keep running? Maybe pass that in instead of a command that starts a shell.
What Ctrl-p Ctrl-q does is detach your docker client from the running container, and the container continues to run. You can start a container in detached mode from the beginning by passing the -d argument to the docker run command.
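For example (the image name is from earlier in the thread; the container name is just a suggestion):

```
# Start detached; the entrypoint keeps running as PID 1
# (add -it as well if you want to be able to attach later and
#  detach again with Ctrl-p Ctrl-q)
docker run -d --name capdata elucidbio/capdata

# Follow the container's output without attaching to it
docker logs -f capdata

# Confirm it is still running
docker ps
```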
You can indeed set a restart policy. Personally, I tend to use --restart=unless-stopped instead of --restart=always.
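Putting the pieces together, the run command might end up looking something like this (names and paths are the ones from earlier in the thread; the container name is just a suggestion):

```
docker run -d \
  --restart=unless-stopped \
  --name capdata \
  -v /home/lmartell/capdata:/home/elucidbio/src \
  elucidbio/capdata
```

With the Docker daemon enabled at boot, that one container comes back on its own after a reboot, and the entrypoint script takes care of the lock file, the database server, and the Python script.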