Process orchestration with Docker

I was reading through the Dockerfile best practices here:

In particular

> 1: Don’t boot init
> Containers model processes, not machines. Even if you think that you need to do this you are probably wrong. Next…

I found this explanation kind of lacking and would love to hear it expanded.

At Discourse we are using containers as “units of deployment” for our various customers. This means we run init in our containers, and it is working out pretty well for us.

This makes provisioning and monitoring much simpler, as we don’t need to include complex process monitoring external to the containers. Rails apps are notoriously complicated, involving three processes per customer in our case (unicorn, nginx, and the sidekiq job scheduler).

Furthermore, unicorn is a “forking” application that takes advantage of Unix copy-on-write. We need to monitor that its children are not getting out of control.
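For illustration, supervising all three processes inside one container could be sketched with a supervisord config along these lines (the paths, commands, and flags here are assumptions for the sake of the example, not our actual setup):

```ini
; supervisord.conf -- hypothetical sketch: one supervisor process
; (acting as the container's init) managing all three services.
[supervisord]
nodaemon=true            ; stay in the foreground as the container's PID 1

[program:unicorn]
command=bundle exec unicorn -c config/unicorn.conf.rb
directory=/var/www/app   ; assumed app path
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true

[program:sidekiq]
command=bundle exec sidekiq
directory=/var/www/app
autorestart=true
```

Whether it is supervisord, runit, or a full init, the point is the same: one parent process inside the container restarts children and reaps zombies, instead of pushing that job outside the container.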

I am curious about the reasoning behind the “written in stone” thou-shalt-not-run-init rule.

Are you suggesting we run a minimum of three tightly coupled containers, monitored externally to the containers?
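For comparison, the one-process-per-container alternative would look roughly like this (the image names, link names, and volumes are hypothetical, just to make the coupling concrete):

```shell
# Hypothetical sketch: one container per process, coordinated externally.
docker run -d --name app_sidekiq -v /srv/app:/app myorg/rails-app sidekiq
docker run -d --name app_unicorn -v /srv/app:/app myorg/rails-app unicorn
docker run -d --name app_nginx --link app_unicorn:unicorn -p 80:80 myorg/nginx
# ...plus something external watching all three, restarting whichever dies,
# and keeping unicorn's forked workers in check.
```

That external "something" is exactly the complex monitoring the init-in-container approach avoids.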

Update: there is an interesting container group proposal that covers some of this.