[Best Practices] Multiple apps in containers?

Hi Dockers!

I installed Docker a month ago and have been playing with it.
I finished reading all the documentation and references today (by the way, good job, the documentation is very well done!)

I have an “architecture” question to ask:
I want to run one container per “task/service”, for example one container for web (Apache), one for FTP (vsftpd)…
This is pretty straightforward, but some “services” need multiple apps, for example mail: I need Postfix + DavMail + SpamAssassin + Roundcube.

For this specific example, how would you architect things with Docker to achieve my goal?
Multiple processes in one container (even though I know we should keep each container to one app)?
One app per container (here, one for Postfix, one for DavMail, …)? But that would be more complicated to manage, and my apps would still need to interact with each other…

Many thanks if you have some experience with this kind of thing!

belette

A little bump to see if someone can help me :slight_smile:

I think you don’t need to have them in one container. Just use links. Make Postfix and DavMail accept connections on the Docker network, create a data container for Postfix and DavMail to share messages, and configure Roundcube to use the SMTP and IMAP server (container). I don’t know much about SpamAssassin, but it can probably be handled similarly.
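
Roughly, something like this (the image names here are just placeholders, not real images you can pull):

# data-only container holding the mail store shared by Postfix and DavMail
docker create -v /var/mail --name maildata busybox true

# SMTP and IMAP services share the mail store via --volumes-from
docker run -d --name postfix --volumes-from maildata my/postfix
docker run -d --name davmail --volumes-from maildata my/davmail

# webmail is linked to both, so it can reach them as hosts "smtp" and "imap"
docker run -d --name roundcube --link postfix:smtp --link davmail:imap -p 80:80 my/roundcube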

Thanks for your reply.
For making containers talk to each other I am using Open vSwitch for the network, so I think I can do without the link method.
Do you think it is better to use a data container rather than a shared directory?
I was thinking about using the -v option and putting all the data on my host.
At first I was considering a data container, but I would still need a -v option on the data container to share the data with my host, so I don’t see the added value of doing that rather than using -v in both containers.
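
Just to be concrete, the -v approach I had in mind is simply (the paths and image names are only examples):

docker run -d --name postfix -v /srv/mail:/var/mail my/postfix
docker run -d --name davmail -v /srv/mail:/var/mail my/davmail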

In my opinion the only important issue with a shared directory is permissions: your file permissions may get messed up. Besides that I think it’s OK.

My follow-up question is that I don’t see how to manage data containers when you need more than 10 GB of files (my mail, for example, takes much more than that). I can see the added value of data containers from an architecture point of view, but how do you grow them without falling back on a shared directory with the host?

Many thanks again :slight_smile:

Thanks Hong Xu, and by the way you are right: I am dealing with a permission issue when using the -v option in containers, and I don’t know why. It seems you have some experience with that…
When I change the permissions of a file inside a folder shared between my container and my host, it messes up the owner on my host!
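
Roughly what I am doing (the path and UID are just an example):

mkdir -p /tmp/shared
touch /tmp/shared/test            # created by my normal host user
docker run --rm -v /tmp/shared:/data ubuntu chown 33:33 /data/test

# back on the host, the file is now owned by UID 33 (www-data inside the
# container), i.e. whatever user happens to have UID 33 on my host:
ls -ln /tmp/shared/test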

Many thanks for your help

belette

Currently I’m sharing a volume between containers, not using links.

Example:

  • On the host OS, create user rsm-data with UID 2000
  • Create /var/log/rsm (owner is rsm-data)
  • The Dockerfile creates a user rsm-data, also with UID 2000
  • Run the container with -v /var/log/rsm:/var/log/rsm
  • The container’s application process runs as the same rsm-data user (UID 2000)

So it’s a volume from host to container with matching UIDs

Next I create a container for my Logstash process; it also mounts the same volume.

So there are now two containers, both running non-root processes.
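
Roughly, the whole setup looks like this (the image names are placeholders; the UID and paths are the ones from my setup):

# host side: matching user and directory
sudo useradd --uid 2000 rsm-data
sudo mkdir -p /var/log/rsm
sudo chown 2000:2000 /var/log/rsm

# the Dockerfile does the equivalent, e.g.
#   RUN useradd --uid 2000 rsm-data
#   USER rsm-data

# then both containers mount the same host path and write as UID 2000
docker run -d --name rsm-app  -v /var/log/rsm:/var/log/rsm my/rsm-app
docker run -d --name logstash -v /var/log/rsm:/var/log/rsm my/logstash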

Prior to this method I was trying to link containers, but that only worked well with a root user process inside the container.

I want to use non-root container processes for my application code - thus the UID dance listed above.

It’s a couple of extra steps and potentially non-portable, but I can get my Chef configuration management policies to do the setup work; it’s not that much extra code.

Ideally I’d like to just use the link option for containers, but that’s not working out for me so far with non-root container user processes.

I may have overlooked something as I’m also new to Docker, but I’m not turning back now. What I have works for me OK, I’m sure down the road things will change again.

A container’s main running process is the ENTRYPOINT and/or CMD at the end of the Dockerfile. It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
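
For example, two single-service containers can reach each other by name and share state like this (the network, volume, and my/worker image names are arbitrary examples):

docker network create app-net
docker volume create app-data

docker run -d --name web    --network app-net -v app-data:/data -p 80:80 nginx
docker run -d --name worker --network app-net -v app-data:/data my/worker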

The container’s main process is responsible for managing all processes that it starts. In some cases, the main process isn’t well-designed, and doesn’t handle “reaping” (stopping) child processes gracefully when the container exits. If your process falls into this category, you can use the --init option when you run the container. The --init flag inserts a tiny init-process into the container as the main process, and handles reaping of all processes when the container exits. Handling such processes this way is superior to using a full-fledged init process such as sysvinit, upstart, or systemd to handle process lifecycle within your container.
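
For example, if an image’s main process spawns children it does not reap, adding the flag is enough (the image name here is a placeholder):

docker run -d --init --name myservice my/service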

If you need to run more than one service within a container, you can accomplish this in a few different ways.

  • Put all of your commands in a wrapper script, complete with testing and debugging information. Run the wrapper script as your CMD. This is a very naive example. First, the wrapper script:
#!/bin/bash

# Start the first process
./my_first_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_first_process: $status"
  exit $status
fi

# Start the second process
./my_second_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_second_process: $status"
  exit $status
fi

# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds

while sleep 60; do
  ps aux |grep my_first_process |grep -q -v grep
  PROCESS_1_STATUS=$?
  ps aux |grep my_second_process |grep -q -v grep
  PROCESS_2_STATUS=$?
  # If the greps above find anything, they exit with 0 status
  # If they are not both 0, then something is wrong
  if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
    echo "One of the processes has already exited."
    exit 1
  fi
done

Next, the Dockerfile:

FROM ubuntu:latest
COPY my_first_process my_first_process
COPY my_second_process my_second_process
COPY my_wrapper_script.sh my_wrapper_script.sh
CMD ./my_wrapper_script.sh
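
Building and running this image might look like the following (the tag is arbitrary; note that my_wrapper_script.sh must be executable before building, since COPY preserves the file mode):

chmod +x my_wrapper_script.sh
docker build -t multi-process-example .
docker run -d multi-process-example
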
  • Use a process manager like supervisord. This is a moderately heavy-weight approach that requires you to package supervisord and its configuration in your image (or base your image on one that includes supervisord), along with the different applications it manages. Then you start supervisord, which manages your processes for you. Here is an example Dockerfile using this approach, that assumes the pre-written supervisord.conf, my_first_process, and my_second_process files all exist in the same directory as your Dockerfile.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
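
For reference, a minimal supervisord.conf for this layout might look like the following (assuming the two binaries end up at the image root, as the COPY instructions above imply; adjust the command paths to match your image):

[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log

[program:my_first_process]
command=/my_first_process

[program:my_second_process]
command=/my_second_process

Here nodaemon=true keeps supervisord in the foreground, which is required when it is the container’s main process.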