Run a command right after container startup

Hi people!

Please forgive my beginner’s question; I’ve done my homework and some googling, but I still have some challenges.

I’m trying to write a Dockerfile for Zeppelin, which listens on port 8080, and to run a curl API call as soon as Zeppelin is up and running.
Unfortunately, if I put a RUN or CMD command at the end of the Dockerfile, like this:

EXPOSE 8080
ENTRYPOINT [ "/usr/bin/tini", "--" ]
WORKDIR ${Z_HOME}
CMD ["bin/zeppelin.sh"]
RUN MyScript.sh

I get an error:
curl: (7) Failed to connect to localhost port 8080: Connection refused

But I can successfully run the script manually (exec into the container and run it there):
$ docker exec -it $(docker ps -aq) /bin/bash
$ ./MyScript.sh
{"status":"OK","message":"","body":"2F9RFC74N"}

Any idea will be highly appreciated!

Well, is the container really up and running? And your “docker run …” command needs a “-p 8080:8080” in there as well.
BUT first I’d bring your commands into the right order:

...
EXPOSE 8080
WORKDIR ${Z_HOME}
# RUN bin/zeppelin.sh
ENTRYPOINT [ "/usr/bin/tini", "--" ]

While ENTRYPOINT should always be your last statement inside your Dockerfile, you would never use CMD and ENTRYPOINT at the same time. -> https://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/

I assume “bin/zeppelin.sh” is the script to launch the app … no? If so, leave it out and let /usr/bin/tini do the job!

I further assume MyScript.sh is solely for “testing” the functionality of Zeppelin. You should also leave that out of the Dockerfile. Your app gets started with ENTRYPOINT, so executing the script beforehand will show you nothing.

Why wouldn’t you? You just have to be aware that the value of CMD will become a parameter to the command or script used as ENTRYPOINT.
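In case it helps, a minimal sketch of how the two combine, using the tini entrypoint and bin/zeppelin.sh paths taken from the snippets in this thread (the surrounding Dockerfile is assumed):

```dockerfile
EXPOSE 8080
WORKDIR ${Z_HOME}
# ENTRYPOINT is the fixed command; CMD supplies its default arguments.
# "docker run <image> <something-else>" would override CMD but keep tini.
ENTRYPOINT [ "/usr/bin/tini", "--" ]
CMD [ "bin/zeppelin.sh" ]
```

With this layout, tini always runs as PID 1 and forwards signals, while the actual app command stays overridable at run time.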

If /usr/bin/tini is meant to be used as shown above, then this is a perfectly valid example of that.

ok, I dug a bit deeper into the CMD vs. ENTRYPOINT issue -> https://phoenixnap.com/kb/docker-cmd-vs-entrypoint

Wouldn’t it make more sense to share everything(!) relevant with us, so we can be the judge of what’s relevant to solving your problem?

Since you neither share your complete Dockerfile, nor share your entrypoint scripts, it is impossible to know what causes your problem.

Is the error message from inside the container or from the host? Please be more specific and provide everything necessary to understand your situation: the complete Dockerfile and your entrypoint script(s), along with the location (host or container?) where the curl problem happens. If it’s on the host, please post your docker run line as well.

Thanks for the replies:

  1. run command:
    docker run -d --network=host --rm -v /opt/:/opt/ -v /etc/hadoop:/etc/hadoop -v /etc/alternatives:/etc/alternatives -v /etc/hive:/etc/hive -v /etc/spark:/etc/spark zeppelin
  2. ./MyScript.sh does a REST call to Zeppelin, which is a web server that I start with:
    ENTRYPOINT [ "/usr/bin/tini", "--" ,"bin/zeppelin.sh"]
  3. after your explanations (thank you), I understood why it fails (I call the script while the server is not yet running, right after I start the container)

As a workaround, I keep this script outside of Docker and run it after the container is up and running.
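For that workaround, a minimal host-side sketch could poll the port before calling the API. This is an assumption about how it might look, not the poster’s actual script; the image name, URL, and MyScript.sh are taken from the thread:

```shell
# Host-side helper: poll a URL until curl succeeds or the retries run out.
wait_for_url() {
  url="$1"; retries="${2:-30}"; delay="${3:-2}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    # -s silent, -f fail on HTTP errors, body discarded to /dev/null
    curl -sf -o /dev/null "$url" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Hypothetical usage, mirroring the thread (image name is an assumption):
#   docker run -d -p 8080:8080 zeppelin
#   wait_for_url "http://localhost:8080" && ./MyScript.sh
```

The retry loop avoids the race the thread describes: the container is up, but the server inside it hasn’t started listening yet.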

ok, I don’t know Zeppelin and I’m getting more and more confused about what you are trying to achieve …

Is this a “custom” build of Zeppelin?
The official Zeppelin site has a whole different Docker setup -> https://zeppelin.apache.org/download.html

Are you certain you need these bind-mounts? -v /opt/:/opt/ -v /etc/hadoop:/etc/hadoop -v /etc/alternatives:/etc/alternatives -v /etc/hive:/etc/hive -v /etc/spark:/etc/spark

Especially the “/opt” and “/etc/alternatives” mounts ring some alarm bells, because here you access links and binaries outside your container … which is against any Docker philosophy, a security threat, and will most likely lead to an inconsistent/unstable app, as you are mixing binaries/libraries from two different OSs.

If this is really a custom build, make sure all necessary binaries and their dependencies are within the container.
Or use the official images :wink:

Good advice, but not pertinent to the original question of why the script can’t reach the server (presumably through port 8080) when the Dockerfile is run, but can from a shell inside the container after it has started.

Well, I’ve seen an “effect” where you explicitly specify the exposed ports inside your Dockerfile (not docker-compose), but “docker run …” seems to completely ignore these settings. EXPOSE only documents the ports; it does not publish them.
So I’d suggest you put them in your command:
docker run ... -p 8080:8080 ...
That did the trick for me …