Keep running shell script inside image and keep docker running

I have a requirement to run 2 scripts inside docker. These scripts should start when the container starts and keep running for the life of the container (the scripts expose certain ports which I need to publish). I have tried different approaches but cannot figure out how this can be achieved.

Base image used: CentOS

CMD "cd /home/xyz/; ./script1.sh ;
CMD "cd /home/abc/; ./script2.sh ;

EXPOSE 8086
EXPOSE 8081

Assuming that your script is present in your image you want something like this

EXPOSE 8086
EXPOSE 8081

WORKDIR /home/abc

CMD ["sh", "script1.sh"]

You can only have ONE CMD in your Dockerfile. If you have multiple, only the last one will be used. In your case cd /home/xyz/; ./script1.sh will never be executed.

You might want to create a script that starts both of your scripts in parallel. Take a look at this link to see how it’s possible under Linux: How to run command or code in parallel in bash shell under Linux or Unix - nixCraft
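To make the parallel-start idea concrete, here is a self-contained sketch of the pattern. The two printf-created scripts are stand-ins for your script1.sh and script2.sh (their paths and contents are assumptions for the demo): launch both in the background, then `wait` so the parent process stays alive until every child has exited.

```shell
#!/bin/sh
# Create two stand-in scripts in a temp dir (in your image these would
# be /home/xyz/script1.sh and /home/abc/script2.sh).
tmp=$(mktemp -d)
printf '#!/bin/sh\necho one\n' > "$tmp/script1.sh"
printf '#!/bin/sh\necho two\n' > "$tmp/script2.sh"
chmod +x "$tmp/script1.sh" "$tmp/script2.sh"

# Start both in the background, then wait for both of them.
"$tmp/script1.sh" > "$tmp/out1" &
"$tmp/script2.sh" > "$tmp/out2" &
wait            # blocks until both background jobs finish

out1=$(cat "$tmp/out1")
out2=$(cat "$tmp/out2")
rm -r "$tmp"
echo "script1 said: $out1; script2 said: $out2"
```

Used as a container's CMD, the `wait` at the end is what keeps the container running as long as either script is alive.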

Thanks a lot. One more thing if you can help me with. I need to keep this script running all the time i.e. during the life of the container. How can that be done?

CMD ["nohup", "script1.sh", ">myscript.log 2>&1", "&"] Is this correct?

I think your CMD should look more like this
CMD ["nohup", "script1.sh", ">", "myscript.log", "2>&1", "&", "&&", "sh", "script2.sh"]

Your command will start your script in the background, so docker will think the container is done and stop it. You always need a process running in the foreground to keep the container alive.
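You can see the same effect outside docker. In this sketch the inner `sh -c` plays the role of the container's main process: because its only command is backgrounded, it returns immediately instead of staying alive for the duration of the work.

```shell
#!/bin/sh
# If the only command is backgrounded, the parent shell exits at once.
# In a container that parent is PID 1, so the container stops.
start=$(date +%s)
sh -c 'sleep 2 &'      # inner shell backgrounds the sleep and returns
end=$(date +%s)
elapsed=$((end - start))
echo "parent returned after ${elapsed}s (the sleep was abandoned)"
```

The parent returns in well under the 2 seconds the background job needs, which is exactly why docker considers such a container finished.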

Thanks a lot for your help.

My suggestion is to create a third script (runner.sh) and it would be the only command that runs.

CMD ["./runner.sh"]
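For context, a minimal Dockerfile wiring for that approach might look like this. The centos:7 tag and the COPY path are assumptions (runner.sh is assumed to sit in the build context next to the Dockerfile); the ports are the ones from the question.

```dockerfile
FROM centos:7

EXPOSE 8086
EXPOSE 8081

WORKDIR /home/abc
# Assumption: runner.sh is in the build context next to the Dockerfile
COPY runner.sh .
RUN chmod +x runner.sh

# One CMD only, exec form; runner.sh stays in the foreground
CMD ["./runner.sh"]
```

While the sleep inside runner.sh keeps the container alive, you can attach with docker exec to debug interactively.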

Your attempt to use multiple arguments is prone to errors. If it works, great, but that command is mixing several things together:

  1. Backgrounding a process
  2. Redirecting output (stderr to stdout)
  3. Verifying one command worked before running the other
  4. Running a second command

I would not try to figure out how docker CMD handles all of that; instead, use a shell script that does the work. This approach also lets you attach to the container and debug by running the command yourself if there is an error.

Lastly, when I’m troubleshooting I add the command sleep 900 to sleep for 15 minutes, which gives me time to docker exec a shell into the container to debug it (for up to 15 minutes) before the container terminates.

#!/bin/sh
./script1.sh > myscript.log 2>&1 &
status=$?
if [ "$status" -eq "0" ]
then
    ./script2.sh
fi
# Sleep when debugging the script, comment out after it is working
sleep 900

These are my “best practices”. Take them or leave them. :)

Some pointers on the scripting for your example,

  1. When you run a command in the background using an ampersand (&), $? is the status of launching the job, which is always zero, so the check is not doing what you wanted. To get the script’s real exit status you would have to wait on the background job.

  2. When running commands, use the syntax ./your-script.sh and start each script with #!/bin/sh (as I did, so all of your scripts should start with that line). Your commands will fail to start if they are not executable or not on the $PATH; using ./your-script.sh makes the lookup work regardless of how your $PATH is set (the script still has to be executable).

  3. As a personal best practice, I always use double quotes around my parameters in an if statement. It is not always technically necessary, but over the years I’ve found it helps when I’m dealing with strings that might be blank (a syntax error occurs otherwise), and it doesn’t hurt to use them “all the time”.
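On pointer 1, one way to actually capture a backgrounded script’s exit status is to `wait` on the job’s PID; `wait` then returns that job’s real status. Note this blocks until the first script finishes, so it suits a run-then-check flow rather than true parallelism. A minimal sketch (fake.sh is a stand-in that exits with status 3):

```shell
#!/bin/sh
# Stand-in for ./script1.sh: a script that exits with status 3.
tmp=$(mktemp -d)
printf '#!/bin/sh\nexit 3\n' > "$tmp/fake.sh"
chmod +x "$tmp/fake.sh"

"$tmp/fake.sh" > "$tmp/log" 2>&1 &
pid=$!             # PID of the background job
wait "$pid"        # blocks until the job exits
status=$?          # now this really is the script's exit status
rm -r "$tmp"
echo "background script exited with status $status"
```

Compare this with checking $? immediately after the &, which only reflects that the job was launched and is therefore always zero.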