Running a script inside the container at container startup

I have several running containers configured, and they are all running as intended, with one exception.

I need to start an application inside of the container, at container runtime.

If I start the container manually, with the internal script execution removed from startup, I can then log in to the container and execute the script manually without any issues.

But when I attempt to run this same internal script during startup, it always fails and the container shuts down.

What is the proper way to execute an internal script, so that the application comes up at initial runtime?

Here is my Docker run command:

docker run -it \
  --mount type=bind,source=/Docker/GFTEST,target=/GLASSFISH \
  --hostname GFTEST --name=GFTEST \
  -c "/usr/local/bin/gftest"

The /usr/local/bin/gftest executes correctly if executed from within the container.

If executed during startup, the script runs, but the application does not stay up.

It tries to start, but exits shortly after starting.

I thought about it being a permissions issue, where I need to source the container root user's .profile or .bashrc, but I have not actually tried that yet.

I know that this sometimes happens when running scripts via cron, because the root .bashrc and .profile are not sourced by cron unless you explicitly source those hidden files inside the script.

Could it be that simple?

I don’t want to make those changes to the startup script yet, without posting the question here first.

Maybe there’s a better way for me to invoke the internal script.

If there is a better way, please let me know here via this thread.

Thanks in advance, and have a very Happy New Year !!!


I tried sourcing the user's .bash_profile and .bashrc, but the application still does not stay up after the script runs when it is executed at startup.

It still works correctly, if I login to the container, and execute the script manually, from inside of the container.

Any suggestions would be greatly appreciated.

All you are doing here is starting the container and running a script. Once the script completes, the container will shut down unless the script leaves a process running. You should also take a look at the ENTRYPOINT and CMD instructions. As a simple experiment, what happens if you put sleep 1000000; as the last line in your script?
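To illustrate that experiment, a sketch of what the startup script could look like (the path is taken from the original post; the sleep is purely a debugging aid, not a production pattern):

```shell
#!/bin/sh
# Do whatever startup work the script normally does.
/usr/local/bin/gftest

# Debugging aid: keep the container's main process alive so you can
# docker exec into the container and inspect why the application
# did not stay running. Remove this once the cause is found.
sleep 1000000
```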

A container is designed to run a single process and when that process dies the container dies with it. When you start a container with a script as the argument, this overrides the CMD that was in the Dockerfile so that original process is never run. When the script exits so does the container. This is expected behavior. If the script needs the process running, then it will never work using it like this because the original process is being replaced by the script.

I see two options here:

(1) If you want the script to run as the container starts, then you should take whatever command you are using to normally start the container and use it as the last command in your script. Then make your script the CMD that starts the container.

For example, if the last line in your Dockerfile is:

CMD ["/bin/sh", "-c", "asadmin start-domain --verbose"]

Then change that to be:

CMD ["/bin/sh", "-c", "/usr/local/bin/gftest"]

and add this to the end of /usr/local/bin/gftest

asadmin start-domain --verbose

This will run your script and then run whatever command you want to continue running in the container.
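One refinement worth considering (an assumption on my part, not something from the original setup): end the script with `exec`, so the server replaces the shell as the container's PID 1 and receives stop signals directly. A sketch of /usr/local/bin/gftest under that assumption:

```shell
#!/bin/sh
# /usr/local/bin/gftest (sketch)

# ... your existing setup steps go here ...

# exec replaces this shell with the server process, so the container's
# PID 1 is the server itself and `docker stop` signals it directly.
exec asadmin start-domain --verbose
```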

(2) You could also start the existing container as you normally would and then use:

docker exec -t GFTEST /bin/bash -c "/usr/local/bin/gftest"

to run the script in the already started container. This will work if the script can run while the main process is also running. Since the script works when you start the container first and run it second, I’m guessing that this second approach is what will work for you. But without knowing more about what the script does or why it fails, it’s hard to say.
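Putting option (2) together as a sketch (the image name was not shown in the original post, so `<your-image>` below is a placeholder):

```shell
# Start the container detached, leaving the image's normal CMD intact
# so the main process (e.g. the GlassFish domain) keeps running.
docker run -d \
  --mount type=bind,source=/Docker/GFTEST,target=/GLASSFISH \
  --hostname GFTEST --name=GFTEST \
  <your-image>

# Then run the script inside the already-running container.
docker exec -t GFTEST /bin/bash -c "/usr/local/bin/gftest"
```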