I need to run a number of different web applications and microservices.
I also need easy backup/restore, and the ability to move everything between servers/cloud providers.
I started studying Docker for this, and I'm puzzled when I see advice like: “create a first container for your application, create a second container for your database, and link them together.”
But why do I need a separate container for the database? If I understand correctly, the main promise of Docker is: “run and move applications, with all their dependencies, in an isolated environment”. So, as I see it, it's appropriate to place the application and all its dependencies in one container (especially for a small application that doesn't require an external database).
Here is how I see the best way to use Docker in my case:

1. Take a base image (e.g. phusion/baseimage).
2. Build my own image on top of it (with nginx, the database and my application).
3. Expose a port for interacting with my application.
4. Create a data volume for this image on the target server (to store application data, database files, uploads, etc.), or restore the data volume from a previous backup.
5. Run the container and have fun.
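As a concrete sketch of these steps (the app name, paths and `start.sh` script are hypothetical, and I'm assuming a simple web app with PostgreSQL):

```dockerfile
# All-in-one image: base + nginx + database + application
FROM phusion/baseimage

# Install nginx and PostgreSQL alongside the app's runtime
RUN apt-get update && apt-get install -y nginx postgresql

# Copy the application itself (hypothetical path)
COPY ./myapp /opt/myapp

# All persistent state lives on one volume: db files, uploads, etc.
VOLUME ["/data"]

# Single entry point that starts nginx, the database and the app
EXPOSE 80
CMD ["/opt/myapp/start.sh"]
```

Then on the target server something like `docker run -d -p 80:80 -v appdata:/data myapp` would start the whole stack with its state on one named volume.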
Pros:
- Easy to back up/restore/move the whole application. (Move only the data volume and simply start the container on the new server/environment.)
- The application is a “black box”, with no headaches from external dependencies.
- If I need to store data in an external database, or consume data from one, nothing prevents me from doing it (though it's rarely necessary). And I prefer to use the APIs of other black boxes instead of accessing their databases directly.
- More isolation and security than with a single database shared by all containers.
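The backup/restore flow I have in mind could look like this (the volume name `appdata` is my example; the tarball is unpacked into a fresh volume on the new server):

```shell
# Back up the data volume to a tarball in the current directory
docker run --rm -v appdata:/data -v "$PWD":/backup busybox \
    tar czf /backup/appdata.tar.gz -C /data .

# Restore on the new server: unpack the tarball into the (new) volume
docker run --rm -v appdata:/data -v "$PWD":/backup busybox \
    tar xzf /backup/appdata.tar.gz -C /data
```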
Cons:
- Greater consumption of RAM and disk space.
- A little harder to scale. (If I need several instances of the app to handle thousands of requests per second, I can move the database into a separate container and link several app instances to it. But that's needed only in very rare cases.)
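Even that scaling fallback seems simple enough; with container and image names as placeholders, it would be something like:

```shell
# Split the database out and link several app instances to it
docker run -d --name appdb -v dbdata:/var/lib/postgresql postgres
docker run -d --name app1 --link appdb:db -p 8001:80 myapp
docker run -d --name app2 --link appdb:db -p 8002:80 myapp
```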
Why have I not found recommendations for this approach? What's wrong with it? What pitfalls am I not seeing?