Best practice to run a bash setup script after creating a container

I would like to run a bash script to set some values, but I am unsure how best to do this. I guess I could add the script to my bashrc file, but that seems wrong.

I am very new to Docker and am trying to get my head around it. I am used to running Linux, and I am struggling with the idea of separate containers running different services and how they talk to each other :-)

I need to set up a way of emailing logs and errors when running Python code.

I can build a container with Ubuntu, mailutils, Postfix, etc., but I need to specify the sender email address so I can filter on it. I have done this using the Postfix generic map: the script I want to run at startup uses $HOSTNAME to catch root@$HOSTNAME and rewrite it to an email address of my choosing.

The thing is, what is the best way to run this script and keep the container running, so that I can still docker exec a bash shell into the container?

The other thing I am wondering: since I start the container on my desktop and control it from there, can I not just point my container at the SMTP/Postfix setup on my host machine through some port trickery? I would still have the setup-script problem, but I wonder if this is a better idea.

Thanks

To answer this question directly, the best way to do that is to write an ENTRYPOINT script in your Dockerfile that does whatever setup is needed, and then ends with exec "$@" to run the normal CMD (or whatever got passed on the command line). The official mysql image entrypoint script is a little bit complex, but it has the right basic pattern.
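For the Postfix case you describe, a minimal sketch of that pattern might look like this (the sender address is a placeholder, and it assumes your main.cf already points smtp_generic_maps at /etc/postfix/generic):

    #!/bin/sh
    # docker-entrypoint.sh -- do the one-time setup, then hand off to the real command
    set -e

    # Rewrite root@$HOSTNAME to a sender address of your choosing.
    # Assumes main.cf already contains: smtp_generic_maps = hash:/etc/postfix/generic
    echo "root@$HOSTNAME you@example.com" > /etc/postfix/generic
    postmap /etc/postfix/generic

    # Run the CMD (or whatever got passed on the command line)
    exec "$@"

And in the Dockerfile:

    COPY docker-entrypoint.sh /usr/local/bin/
    RUN chmod +x /usr/local/bin/docker-entrypoint.sh
    ENTRYPOINT ["docker-entrypoint.sh"]
    # "postfix start-fg" runs Postfix in the foreground (Postfix 3.0+), so the
    # container keeps running and you can still docker exec into it
    CMD ["postfix", "start-fg"]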

You might find it easier to use logstash with something like a “gelf” input and an “email” output. (There’s a prebuilt logstash image.) Then you can run your application container with a Docker logging option that points at that container’s published port. Depending on how big your application is, using the combined Elasticsearch/Logstash/Kibana (“ELK”) stack might be overkill, or not.
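If you try that route, the wiring might look roughly like this; the ports, addresses, and field names are illustrative, and the email output assumes the logstash-output-email plugin is installed:

    # logstash.conf -- accept GELF from Docker and mail each event
    input {
      gelf {
        port => 12201
      }
    }
    output {
      email {
        address => "smtp.example.com"   # your SMTP relay
        port    => 587
        from    => "logstash@example.com"
        to      => "you@example.com"
        subject => "Log event from %{host}"
        body    => "%{message}"
      }
    }

Then point your application container's logging at it:

    docker run --log-driver gelf \
      --log-opt gelf-address=udp://logstash-host:12201 \
      my-python-app

In practice you would probably add a filter or conditional so only errors generate mail, rather than every log line.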

To a first approximation, no. (And explicitly specifying an SMTP relay host and authentication credentials as configuration is probably less fragile and much less machine-dependent in any case.)
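For instance, instead of depending on the host's mail setup, you could pass the relay details into the container as environment variables and have your script (or Python code) read them; the variable names here are just an illustration:

    docker run -d \
      -e SMTP_HOST=smtp.example.com \
      -e SMTP_PORT=587 \
      -e SMTP_USER=mailer \
      -e SMTP_PASSWORD=secret \
      my-image

The same image then works on your desktop and anywhere else, with no dependency on the host machine's Postfix.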


I did try this, but it just keeps exiting as soon as it runs. It was kinda late last night, so I will have another look.

Thanks

Docker containers exit as soon as the last command they execute stops. You could try waiting for input, or something like sleep 1000d :-)

Later you should replace that with the postfix command.
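As a sketch (the setup script name here is hypothetical):

    #!/bin/sh
    # do the one-time setup, then block so the container stays up
    /usr/local/bin/setup-postfix.sh    # hypothetical setup step
    sleep 1000d                        # quick-and-dirty keep-alive; later replace
                                       # with a foreground service like postfix start-fg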

When I don’t have the ENTRYPOINT set to my bash script and I run the container, then with:

    docker exec -it container bash

I can get into the container and run my script, and all is good.

When I do have the ENTRYPOINT run my script, the container exits, so I can’t exec into the container anymore.

You are suggesting I should put a sleep command in there to keep it running? That seems odd to me, but I am new to this :-)

Shouldn’t there be a predefined directory where scripts are run at startup? I have found some sites offering base images like that, but I am not sure why Docker does not do this as standard, or am I missing the point?
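From what I can tell, those base images just have an entrypoint that loops over a directory of user-supplied scripts, something like this (the directory name varies by image):

    # run every setup script dropped into the init directory, then hand off
    for f in /docker-entrypoint-init.d/*.sh; do
        [ -f "$f" ] && . "$f"
    done
    exec "$@"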

Thanks

Strictly speaking, the default ENTRYPOINT is empty; it is the default command of base images like Ubuntu that is /bin/bash, which starts an interactive shell and never exits (as long as stdin stays open). When you replace it with your script, you need to make sure the script never exits either. So why not start a bash at the end of your ENTRYPOINT script? Then it behaves as before and just sets some environment variables first.
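As a sketch (the setup script name is hypothetical, and the container still needs to be started with a TTY, e.g. docker run -it, for the interactive bash to stay alive):

    #!/bin/bash
    # run the setup, then drop into an interactive shell so the container keeps running
    /usr/local/bin/setup-postfix.sh    # hypothetical setup step
    exec /bin/bash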

OK, I will try that. Thanks for the help!