Why does nobody put everything in one Docker container? (All-in-one container/“black box”)

Original question on Stack Overflow

Hello all!

I need to run a lot of different web applications and microservices.

I also need easy backup/restore, and the ability to move everything between servers/cloud providers.

I started studying Docker for this, and I’m puzzled when I see advice like this: “create a first container for your application, create a second container for your database, and link them together”.

But why do I need a separate container for the database? If I understand correctly, Docker’s main promise is: “run and move applications with all their dependencies in an isolated environment”. So, as I understand it, it makes sense to place the application and all of its dependencies in one container (especially if it’s a small application with no need for an external database).

Here is how I see the best way to use Docker in my case:

  1. Take a base image (e.g. phusion/baseimage).

  2. Build my own image on top of it (with nginx, the database and the
    application code).

  3. Expose a port for interaction with my application.

  4. Create a data volume based on this image on the target server (to
    store application data, the database, uploads, etc.), or restore the
    data volume from a previous backup.

  5. Run this container and have fun.
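As a rough sketch, steps 1–3 above might look like the following Dockerfile. The base image tag, file paths and runit service scripts are hypothetical placeholders, not a drop-in recipe; phusion/baseimage uses runit (started via /sbin/my_init) to supervise several processes in one container:

```dockerfile
# All-in-one image: nginx + PostgreSQL + the app in a single container.
# Tag, paths and service scripts below are illustrative assumptions.
FROM phusion/baseimage:jammy-1.0.1

# Install nginx and a database next to the app's runtime
RUN apt-get update && apt-get install -y --no-install-recommends \
        nginx postgresql python3 \
    && rm -rf /var/lib/apt/lists/*

# Application code and web server config (hypothetical layout)
COPY ./app /opt/app
COPY ./nginx.conf /etc/nginx/sites-enabled/default

# phusion/baseimage runs services via runit; register one per process
COPY ./runit/nginx.sh      /etc/service/nginx/run
COPY ./runit/postgresql.sh /etc/service/postgresql/run
COPY ./runit/app.sh        /etc/service/app/run

# Step 3: the single port for interacting with the application
EXPOSE 80

# Step 4's data volume: database files and uploads live here
VOLUME ["/var/lib/postgresql", "/opt/app/uploads"]

CMD ["/sbin/my_init"]
```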


Pros:

  • Easy to backup/restore/move the whole application. (Just move the data volume and start the container on the new server/environment.)
  • The application is a “black box”, with no external dependencies to worry about.
  • If I need to store data in an external database, or consume data from one, nothing prevents me from doing so (though in practice it is rarely necessary). And I prefer to use the API of other black boxes instead of accessing their databases directly.
  • More isolation and better security than with a single shared database for all containers.
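The backup/restore/move step in the first point can be done with a throwaway container that tars up the volume. Volume and file names here are examples (and these commands of course require a running Docker daemon):

```shell
# Back up a named volume to a tarball on the host
docker run --rm \
  -v app-data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/app-data.tar.gz -C /data .

# Restore the tarball into a (new) volume on another server
docker run --rm \
  -v app-data:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/app-data.tar.gz -C /data
```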


Cons:

  • Greater consumption of RAM and disk space.
  • Somewhat harder to scale. (If I need several instances of the app to handle thousands of requests per second, I can move the database into a separate container and link several app instances to it. But that is rarely needed.)

Why don’t I find recommendations for this approach? What’s wrong with it? What pitfalls have I not seen?

The approach you are suggesting is a valid one, and there are certainly some people who do that.

The perceived overhead of running multiple containers is greatly reduced once you introduce the docker-compose tool. The docker-compose.yml file can be considered the black box you are lookingking for: any application can be started with docker-compose up -d, and all of its containers will come up in tandem.
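As a minimal sketch, a docker-compose.yml for a web app plus a database might look like this. Image names, ports and credentials are placeholders:

```yaml
version: "3.8"
services:
  web:
    build: .                # the application image, built from this directory
    ports:
      - "8080:80"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data   # data survives container recreation
volumes:
  db-data:
```

docker-compose up -d brings up both containers together; docker-compose down stops them while the db-data volume keeps the database files.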

Having separate containers allows you to create a separation of concerns between the different services in each of your applications. That way, if I need to change something about the database, I’m not also changing something about my web application server in the process.

You’ve already pointed out what is, in my opinion, the biggest issue: the added difficulty of scaling. I may very well need to add capacity at some point, and adding more application server instances makes this possible. If each application server instance comes with a database it is hardcoded to, this is much more difficult, and things must be rearchitected.
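With the database in its own container, adding application-server capacity becomes a single command. This assumes a compose file with a stateless service named web (a placeholder name), and that web does not bind a fixed host port, since several instances would otherwise conflict; a load balancer would sit in front of them:

```shell
# Run three instances of the stateless "web" service against the one database
docker-compose up -d --scale web=3
```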

Much of what is considered best practice in the docker community comes from concepts laid out in the Twelve Factor App: http://12factor.net/. Since a container is a way to run a process, each container is in charge of one process. Multiple instances of that type of process will result in multiple instances of that container. Different types of processes will necessitate multiple types of containers, or different images.

At the end of the day, I would suggest you build a test application and try both ways of deploying it. Which one suits your specific use case better? Will you never ever need scaling? Does combining multiple services into one container actually simplify the deployment for your specific requirements?