Docker Community Forums

Share and learn in the Docker community.

Why a docker-container is considered disposable?

docker

(4k1l) #1

I read in many books and docs that Docker containers are considered disposable and have a short lifetime. Why are they considered so ephemeral? In that case, how can one run a containerized application in production?

And what is the difference between the terms disposable-container and immutable-container?


(Think) #2

disposable-container

You should optimize for the case of failure, because failure will happen anyway.
If restarting your container is easy, then you can trivially move containers across data centers, and you can trivially migrate your software from one version to the next.

immutable-container

Your application should not store anything in the container itself. Instead, use an external database for storage, or explicitly use a volume. Immutable containers are easy to scale and to migrate.
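A minimal sketch of this pattern as a Compose file (service, image, and credential names here are illustrative, not from this thread): all state lives in an external database and a named volume, so the app container itself can be destroyed and recreated at any time.

```yaml
# docker-compose.yml (illustrative names; adapt to your app)
services:
  web:
    image: example/myapp:1.0        # immutable: upgrade by changing the tag, not the container
    restart: always                 # disposable: Docker simply restarts it on failure
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    ports:
      - "8080:8080"
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data   # state survives container removal

volumes:
  pgdata:
```

With this layout, `docker compose rm web` followed by `docker compose up -d web` loses nothing: everything the app cares about is outside the container.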


(David Maze) #3

Say you don’t have Docker; you’re directly running your application on some virtual machine setup, like Amazon EC2 instances. Also say your application gets enough load that you need to be running two copies of it, with a load balancer to route traffic to one or the other. For this to work, the application can’t actually keep any state in the local file system (one copy wouldn’t see the other’s local data; if you started a third copy it’d need this data too); it all has to be stored externally. If you have that setup working, and your load goes down, you can safely shut off one of the copies of the application without consequence: it is disposable.

How do you upgrade your VM-based application? You could use an automation tool (Ansible, Chef, Puppet, SaltStack) to try to upgrade it in place. But it’s probably safer and more reliable to build a copy of your application VM, test it, and then start up new copies of the application alongside the old ones and turn the old ones off. This gives you a zero-downtime upgrade, and reuses the infrastructure you already have; you don’t have to separately test an upgrade path. This means the software in the VMs never changes: they are immutable.

These same practices transfer over to Docker containers. Store long-term persistent data outside the container (in an external-to-Docker database, or in Docker volumes) and you can freely destroy containers. The Dockerfile system is much simpler than any of the automation tools I mentioned above (it’s almost a shell script, not some hybrid shell/Python/Ruby/YAML thing). And the rule that you can never change container settings after creation sidesteps the distinctions between things that could never be meaningfully changed (environment variables), things that probably could (network settings), and things that need cooperation from the application (volume mounts, ability to pollute the host).
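To illustrate the “almost a shell script” point, here is a hypothetical Dockerfile for a small Python web app (file names and commands are assumptions for the example, not from this thread); each COPY/RUN line is essentially one shell step baked into an immutable image:

```dockerfile
# Hypothetical Dockerfile for a small Python web app
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # each instruction = one image layer
COPY . .
# Nothing is written inside the image at runtime; data lives in an external DB or a volume
CMD ["python", "app.py"]
```

Because every build from this file produces the same image, upgrades are done by building a new image and replacing containers, never by editing a running one.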

It is actually reasonable to expect containers to run in production for “a long time”, days or weeks even, but that’s different from them running “forever”. Plan ahead for your container to be deleted and know where its data comes from the next time it starts up.
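The delete-and-recreate workflow described above looks roughly like this at the CLI (image, container, and volume names are illustrative); note that nothing is upgraded in place:

```shell
# Run v1; all persistent data goes in the named volume "appdata"
docker run -d --name myapp -v appdata:/data example/myapp:1.0

# To upgrade: stop and remove the old container entirely...
docker stop myapp && docker rm myapp

# ...then start a fresh container from the new immutable image.
# The volume (and any external database) carries the state across.
docker run -d --name myapp -v appdata:/data example/myapp:2.0
```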