Docker Community Forums

Share and learn in the Docker community.

ENTRYPOINT running a script - arg $1 is NOT received

(Goffinf) #1

I am trying to run a script as my ENTRYPOINT, where the location of that script is defined by environment variables and the script receives an argument. The script runs, but the arg is empty ??

I do NOT want to define the script arg as an environment variable since it contains sensitive data.

Is there any way for me to be able to define the script location in my Dockerfile using environment variables AND be able to pass in one or more arguments into my script via CMD or explicitly with docker run … image arg ??

Here’s a series of examples that hopefully describe the problem. Any help appreciated :

First let's set a baseline to be sure that everything works in the simplest case:

A simple Dockerfile:

FROM ubuntu

COPY ./scripts ./

ENTRYPOINT ["/bin/echo", "Hello"]
CMD ["World"]

Build the image (note: it's NOT running a script yet):

$ docker build -t entrypointtest .

Running this container produces the expected output:

$ docker run -it entrypointtest
Hello World

OK, so let's change the Dockerfile to ask it to run a script by changing the ENTRYPOINT and CMD like so:

ENTRYPOINT ["/usr/local/bin/nexus/"]
CMD ["Fraser"]

The script file looks like this:

echo "Hello $1"

if [ -z "$1" ]; then
  echo "You MUST provide arg value to this script"
  exit 1
fi

echo "Hello $1"

Rebuild, re-run, and everything works fine as well:

$ docker run -it entrypointtest
Hello Fraser
Hello Fraser

I can successfully send a different script arg:

$ docker run -it entrypointtest George
Hello George
Hello George

… and even use a different entrypoint and cmd arg (the alternate script is exactly the same as the original except it echoes 'Goodbye' instead of 'Hello'):

$ docker run -it --entrypoint /usr/local/bin/nexus/ entrypointtest George
Goodbye George
Goodbye George

If we look at the COMMAND output for the stopped containers we can see what actually ran in each case:

"/usr/local/bin/nexus/ Fraser"
"/usr/local/bin/nexus/ George"
"/usr/local/bin/nexus/ George"

However, what I REALLY want to do is provide the location of the script I want to run via a couple of environment variables. So I updated the Dockerfile like this:

CMD ["Fraser"]

Note: I am explicitly specifying /bin/bash -c, otherwise variable expansion won't happen and you'll end up with this error when you do docker run:

docker: Error response from daemon: Container command not found or does not exist…

I can use the shell style (see below) to remedy this but it still doesn’t change the outcome.

Anyway, … rebuild, re-run … what happens now is the script does get executed, BUT the arg in CMD is not passed to the script ??

$ docker run -it entrypointtest

You MUST provide arg value to this script

Same outcome if I explicitly provide an arg:

docker run -it entrypointtest George
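The cause can be reproduced outside Docker with plain sh: anything appended after the -c string becomes a positional parameter of the inline shell, but the numbering starts at $0, so the first appended word never reaches the script's $1:

```shell
# Arguments after the -c string become positional parameters of the
# inline command, but they start at $0, not $1:
sh -c 'echo "got: [$1]"' Fraser      # Fraser fills $0, so $1 is empty
sh -c 'echo "got: [$1]"' sh Fraser   # a dummy word fills $0; $1 is Fraser
```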

If we look at the COMMAND for these two invocations they seem ok (they don’t show the resolved env variable values, but we know the script did execute so variable expansion must be working):


Obviously, if I explicitly provide an --entrypoint and a cmd arg, that will work (because the script location is once again a hard-coded path):

$ docker run -it --entrypoint /usr/local/bin/nexus/ entrypointtest George
Hello George
Hello George

… but that's not what I want; I want to be able to do something like:

docker run -it -e entrypointtest foo

Using the shell style for the Dockerfile ENTRYPOINT/CMD yields the same problem (no arg value passed to the script):

CMD Fraser

Is there any way for me to be able to define the script location using environment variables AND be able to pass in one or more arguments into my script via CMD or explicitly with docker run … image arg ??



(David Maze) #2

I wouldn’t try to do it in the Dockerfile directly. If I wrote a script

echo "Hello $1"

it seems like it would do what you want.

I’m not totally clear what it is you want, though; if it was me I’d just skip the environment variable and pass the command I want to run as the first argument on the command line

echo "Hello $2"
exec "$@"
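A runnable sketch of that pattern (the wrapper file here is a hypothetical stand-in): the first argument is the command to run, so the greeting uses $2, and exec "$@" then replaces the wrapper with that command:

```shell
# Simulate the wrapper: $1 is the command to run, $2 its first argument.
wrapper=$(mktemp)
cat > "$wrapper" <<'EOF'
#!/bin/sh
echo "Hello $2"
exec "$@"
EOF
chmod +x "$wrapper"
"$wrapper" /bin/echo Fraser   # prints "Hello Fraser", then echo prints "Fraser"
rm -f "$wrapper"
```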

(In terms of Docker style, I’ve generally found that docker run --rm -it imagename bash is so incredibly useful, and you tend to need to pass so many arguments to docker run anyways, that the process-as-entrypoint style is more of a hassle in practice. If you look at the standard Docker database images, they expect to be passed the process name in CMD even though that’s almost always the name of the database daemon.)

Also, if you’ve wound up with a unified build system that always installs a collection of programs and want some sort of “separate image for each”, building a layer per application is cheap:

# first Dockerfile (the base image)
FROM ubuntu:16.04
ADD app.tar.gz /usr/local

# second Dockerfile
FROM baseimage
CMD /usr/local/bin/foo

# third Dockerfile
FROM baseimage
CMD /usr/local/bin/bar

What about the script itself? Is there a specific need to pass the script to run as an environment variable, vs. something else?

Consistent with your previous examples, this runs

/bin/bash -c '$HOME/$SCRIPT' Fraser

Oddly, reading bash(1), it looks like the first argument after -c is the command to run, and any further arguments become the positional parameters starting with $0, so it’s possible that this will work

ENTRYPOINT ["/bin/bash", "-c", "$HOME/$SCRIPT", "$HOME/$SCRIPT"]
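Following that reading of bash(1), a form that does forward the CMD args should be possible: put "$@" inside the -c string and supply a dummy word to fill $0. A hedged sketch, simulated outside Docker (the Dockerfile lines in the comment are an assumption, not something tested in this thread, and the script name is made up):

```shell
# Dockerfile equivalent (assumption, untested):
#   ENTRYPOINT ["/bin/bash", "-c", "exec \"$HOME/$SCRIPT\" \"$@\"", "bash"]
#   CMD ["Fraser"]
export HOME=$(mktemp -d)
export SCRIPT=hello.sh          # hypothetical script name
cat > "$HOME/$SCRIPT" <<'EOF'
#!/bin/bash
echo "Hello $1"
EOF
chmod +x "$HOME/$SCRIPT"
# The dummy word "bash" fills $0, so Fraser lands in $1 of the -c command:
bash -c 'exec "$HOME/$SCRIPT" "$@"' bash Fraser
```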

(I’m not clear why that’s better than

docker run -it entrypointtest foo

though.)


(Goffinf) #3

Thx David. I had seen exec "$@" in various posts but didn’t really understand it (my background is Windows); now I do, and you have given me some better options by using a ‘wrapper’ script which then runs the actual script requested.

On balance I think I prefer your first suggestion:


and here is why (which might clear up a bit of the confusion) …

The image I am creating is to provide a CLI experience for the user (like using a build tool, although in this case it performs configuration on an already-running application). The pattern I see suggested most often is to use ENTRYPOINT and optionally CMD as well, so that running the container is much like executing the tool from a typical command line.

The primary script that I want to execute is baked inside the image which I’ve exposed using:


I did so using environment variables so that it is possible to over-ride both the base folder location and the name of the script that is actually run, although 99% of the time this shouldn’t be necessary (it’s more a convenience that makes it easier to test individual scripts). The container actually has 10 or more scripts which are called from the main one ($ENTRYPOINT_TEST_DEFAULT_SCRIPT).

I have also exposed a VOLUME (/scripts) so that it is possible to run scripts that are actually outside the container, but again, for the most part that shouldn’t be necessary, nor is it what I really want folks to do (I prefer immutable servers, so the image provides everything the container needs at run-time), but it can be a useful convenience when testing nonetheless.

The main script currently has one mandatory argument. This is actually a password used by one of the scripts that is called, and that’s why I didn’t want it exposed as an environment variable. If later on we have a service API I can call to obtain the secret I might change this, but right now we don’t, so passing it in at launch time seems like the best I can do.

So, AFAICT, whilst using CMD on its own could work, the user of the container would need to know too much about the location and names of the scripts inside the image/container, which degrades its ease of use, so I’d rather hide/abstract that.

ENTRYPOINT allows me to hide the location/name of the main script, and an associated CMD means the user only has to provide the one mandatory argument (the password) … there are others which are optional, but let’s leave that there for the purpose of this thread.

Using environment variables for the location and name of the script to run, allows the user to easily over-ride either/both of those should they need to.

Mounting a VOLUME and over-riding the ENTRYPOINT and optionally the CMD as well are still available too.

So … for me it really comes down to choosing one of these options for the wrapper script:

Option 1

# Just pass one or more ARGs to the script in the FIXED location (can still over-ride the ENV vars if required)

Option 2

# Pass the script NAME and any ARGs to the script in the base folder inside the container

Option 3

# Pass the full path location to the script and any args
exec "$@"
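For completeness, a runnable sketch of Option 3 (the target script here is a throwaway stand-in): the caller supplies the full path plus any args, and the wrapper's exec "$@" runs them untouched:

```shell
# Build a throwaway target script, then invoke it through "$@" exactly
# as the Option 3 wrapper would (exec omitted so this demo shell survives).
target=$(mktemp)
cat > "$target" <<'EOF'
#!/bin/sh
echo "Hello $1"
EOF
chmod +x "$target"
set -- "$target" Fraser   # simulate: docker run ... image /full/path/script Fraser
"$@"                      # prints "Hello Fraser"
rm -f "$target"
```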

I’ve settled on Option 1, with the following Dockerfile (-h is the option to display help text):


ENTRYPOINT ["/usr/local/bin/nexus/"]
CMD ["-h"]

So I can run the container with any of these forms:

docker run -it nexus3admin -p password
docker run -it -e nexus3admin -p password
… a bunch of other variants

Or even …

docker run -it -v $(pwd):/scripts --entrypoint /scripts/ entrypointtest -p password

Thx again for pointing me in the right direction