Docker Community Forums


NOOB! Single Docker Image with Multiple servers/apps


I’m a noob to Docker but a reasonably proficient (albeit amateur) Python programmer.

I have played with sample images from Docker Hub, but I was wondering if and how I can get an image with Python AND MySQL, for example, all in the same image. I believe it’s good practice to have a Python image and a MySQL image running as separate containers and network the two, but I’m curious how to do it in the same container.

sorry for the 101 question


If you need to run more than one service within a container, you can accomplish this in a few different ways.

Put all of your commands in a wrapper script, complete with testing and debugging information. Run the wrapper script as your CMD. This is a very naive example. First, the wrapper script:


```bash
#!/bin/bash

# Start the first process
./my_first_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_first_process: $status"
  exit $status
fi

# Start the second process
./my_second_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_second_process: $status"
  exit $status
fi

# Naive check: runs checks once a minute to see whether either of the
# processes has exited. This illustrates part of the heavy lifting you
# need to do if you want to run more than one service in a container.
# The container exits with an error if it detects that either of the
# processes has exited. Otherwise it loops forever, waking up every
# 60 seconds.
while sleep 60; do
  ps aux | grep my_first_process | grep -q -v grep
  PROCESS_1_STATUS=$?
  ps aux | grep my_second_process | grep -q -v grep
  PROCESS_2_STATUS=$?
  # If the greps above find anything, they exit with 0 status.
  # If they are not both 0, then something is wrong.
  if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
    echo "One of the processes has already exited."
    exit 1
  fi
done
```
Next, the Dockerfile:
```dockerfile
FROM ubuntu:latest
COPY my_first_process my_first_process
COPY my_second_process my_second_process
COPY my_wrapper_script.sh my_wrapper_script.sh
CMD ./my_wrapper_script.sh
```
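As a side note, if your base image ships bash 4.3 or newer, the once-a-minute `ps`/`grep` polling loop can be replaced with `wait -n`, which blocks until any background child exits. A minimal, self-contained sketch (the `sleep`s stand in for `my_first_process` and `my_second_process`; this is an alternative, not what the polling example above does):

```shell
#!/bin/bash
# `wait -n` (bash >= 4.3) blocks until ANY background child exits,
# so the wrapper reacts immediately instead of waking every 60 seconds.
sleep 1 &     # stand-in for the service that happens to die first
sleep 30 &    # stand-in for the longer-lived service
wait -n       # returns as soon as either child exits
echo "one process exited; stopping the container"
kill %2 2>/dev/null || true   # tear down whatever is still running
```

In a real wrapper you would `exec` or background your two services, then `wait -n` and `exit 1`, so the container stops as soon as either service dies.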
If you have one main process that needs to start first and stay running but you temporarily need to run some other processes (perhaps to interact with the main process) then you can use bash’s job control to facilitate that. First, the wrapper script:


```bash
#!/bin/bash

# turn on bash's job control
set -m

# Start the primary process and put it in the background
./my_main_process &

# Start the helper process
./my_helper_process

# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns

# now we bring the primary process back into the foreground
# and leave it there
fg %1
```
Next, the Dockerfile:

```dockerfile
FROM ubuntu:latest
COPY my_main_process my_main_process
COPY my_helper_process my_helper_process
COPY my_wrapper_script.sh my_wrapper_script.sh
CMD ./my_wrapper_script.sh
```
Use a process manager like supervisord. This is a moderately heavyweight approach that requires you to package supervisord and its configuration in your image (or base your image on one that includes supervisord), along with the different applications it manages. Then you start supervisord, which manages your processes for you. Here is an example Dockerfile using this approach, which assumes the pre-written supervisord.conf, my_first_process, and my_second_process files all exist in the same directory as your Dockerfile.

```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
```
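The Dockerfile above assumes a pre-written supervisord.conf. As a rough sketch of what that file might contain (program names match the placeholders above; paths and options are illustrative, not from the original post):

```ini
; supervisord.conf -- minimal sketch
[supervisord]
; nodaemon keeps supervisord in the foreground, so the container
; stays alive as long as supervisord runs
nodaemon=true
logfile=/var/log/supervisor/supervisord.log

[program:my_first_process]
command=/my_first_process

[program:my_second_process]
command=/my_second_process
```

If either managed program crashes, supervisord restarts it by default, which is most of the "heavy lifting" the hand-rolled wrapper script tries to approximate.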

You can run multiple executables in a single container, but it’s not advisable. Docker containers look like VMs, but they’re not: in essence, a container is a wrapper around an executable and its libraries. Running multiple executables in one container doesn’t make sense in that context, and it’s unlikely to be any more efficient.
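For the Python-plus-MySQL setup from the original question, the separate-container approach can be sketched with Docker Compose. All names, tags, and credentials below are illustrative assumptions, not taken from the thread:

```yaml
# docker-compose.yml -- hypothetical sketch: a Python app in one
# container, MySQL in another, joined by Compose's default network.
services:
  app:
    build: .            # your Python image, built from a local Dockerfile
    depends_on:
      - db
    environment:
      DB_HOST: db       # the service name doubles as the hostname
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
```

With this layout, `docker compose up` starts both containers, and the Python code connects to MySQL at hostname `db` over the network Compose creates, so neither image needs to bundle the other's software.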

My Docker for Web Developers book and video course will help you. You don’t need any prior knowledge of Docker and it describes how to set up robust development environments which you can adapt for your own stack. Use the discount code dock30 for 30% off.