Single versus multiple containers


I have two processes, P1 and P2, in my system that communicate very frequently with each other over TCP. For this reason they are both hosted on the same VM. I am thinking of eliminating the VM and instead hosting my system in containers on the physical machine. If I dockerize my system, I have two options:

  1. Container 1 contains P1 and container 2 contains P2. The two containers are linked, and the communication between P1 and P2 crosses the container boundary.
  2. A single container contains both P1 and P2, and the communication stays within the container.

Kindly guide me on the merits and demerits of these two approaches. What is the overhead of approach 1 in terms of communication latency?


It is a best practice to run only one process per container.

As for the latency in approach 1: given that both processes will run on the same physical machine, the latency will be extremely small.
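If you want a concrete number for your own machine, here is a minimal sketch that measures the round-trip time of small TCP messages over loopback, which approximates the on-box latency both approaches will see (the port is chosen by the OS; the message size and iteration count are arbitrary assumptions):

```python
# Measure average round-trip latency of a small message over loopback TCP.
import socket
import threading
import time

def echo_server(srv):
    # Accept one connection and echo everything back until it closes.
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
# Disable Nagle's algorithm so small messages are sent immediately.
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

n = 1000
start = time.perf_counter()
for _ in range(n):
    cli.sendall(b"ping")
    cli.recv(64)
elapsed = time.perf_counter() - start
cli.close()

rtt_us = elapsed / n * 1e6
print(f"average round trip: {rtt_us:.1f} microseconds")
```

On typical hardware this reports a round trip in the tens of microseconds, which is negligible next to most application-level work.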

Adding to this: the communication latency of the two approaches will be almost identical. No matter how you set up the containers, the two processes will be communicating over on-box TCP (loopback in approach 2, the local bridge network in approach 1). The one-process-per-container setup will be much easier to build, however, because Docker natively runs one process per container; to run two you would have to go out of your way to install some sort of init or process supervisor inside the container.
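For approach 1, a minimal Compose sketch might look like the following. Note that legacy `--link` has been superseded by user-defined networks, where Docker's embedded DNS lets one service reach another by its service name. The image names, port, and environment variable here are illustrative assumptions, not taken from the question:

```yaml
# Hypothetical docker-compose.yml: P1 and P2 in separate containers
# on one user-defined bridge network, talking over TCP.
services:
  p1:
    image: my-p1-image        # assumption: you have built an image for P1
    networks: [appnet]
  p2:
    image: my-p2-image        # assumption: you have built an image for P2
    networks: [appnet]
    environment:
      P1_HOST: p1             # service name resolves via Docker's DNS
      P1_PORT: "5000"         # assumed port P1 listens on
networks:
  appnet: {}
```

With this layout, P2 connects to `p1:5000` exactly as it would connect to a host name, and each container can be rebuilt, restarted, and scaled independently.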