How to ensure that my docker image / container will still work 20 years from now?

I have built a docker image where it pulls a specific PHP, MYSQL and Apache version from Docker hub and then runs CMD to download and install packages.

But what if … after 20 years those images / package will no longer be hosted on Docker hub? What if I want to still be able to run my container then? Can I somehow freeze my containers in time? Can I bundle it all together so that it will work even 100 years from now? How?

Like with every other repository: to guarantee the availability of a repo item (in this case container images), you will need your own private registry to cache images or to host self-created images.

Run your own private registry and make sure to maintain it for the next 20 to 100 years :slight_smile: For example:
GitLab (has a built-in container registry), Nexus3 (has a built-in container registry), JFrog Container Registry, Harbor
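As a minimal sketch (the image names and the port are assumptions, and these commands need a running Docker daemon), you could even run the open-source `registry` image locally and mirror the base images you depend on into it:

```shell
# Start a local registry on port 5000 (data persisted in a named volume)
docker run -d -p 5000:5000 --name registry \
  -v registry-data:/var/lib/registry registry:2

# Mirror a base image: pull it, retag it with the registry host, push it
docker pull php:8.2-apache
docker tag php:8.2-apache localhost:5000/php:8.2-apache
docker push localhost:5000/php:8.2-apache

# From now on, pull the mirrored copy instead of the Docker Hub one
docker pull localhost:5000/php:8.2-apache
```

Of course, the registry itself (and the volume behind it) then becomes the thing you have to back up and keep alive.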

@meyay Do I have to host my own images or containers?

Surprising question :slight_smile: Though, I get where it comes from. In your private registry you need to host the images! In container terms, a “repo” is a specific image repository within a registry (the “image” name without the tag).

The Docker-neutral term for “docker image” is “container image”, and the repos (the non-Docker term) are called container or image registries.

@meyay Oh, I see. So if I have in my Dockerfile a command like npm install, which downloads and installs all packages from the npm registry, then once I build my image and create and run a container, this npm install command will not have to run again?

If you build a container image from your Dockerfile, you will create an image with a point-in-time snapshot of all OS-level packages and dependencies, your custom app, your entrypoint script and whatever else you do inside your Dockerfile :slight_smile:
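As a sketch (the base image, file names and start command are assumptions): install steps like npm install belong in a RUN instruction, so they execute once at build time and their result is baked into an image layer; CMD only defines what runs when a container starts.

```dockerfile
# Hypothetical example – base image and paths are placeholders
FROM node:20

WORKDIR /app
COPY package.json package-lock.json ./

# Runs once, at build time; the result is frozen into the image
RUN npm install

COPY . .

# Runs every time a container is started from the image
CMD ["node", "server.js"]
```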

You will need to tag your docker builds and push them to your private registry to persist the image.
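A minimal sketch of that, assuming a private registry at registry.example.com and an image called myapp (both are placeholders):

```shell
# Build the image from the Dockerfile in the current directory,
# tagged directly with the private registry host
docker build -t registry.example.com/myapp:1.0.0 .

# Push it to the private registry so it outlives the build machine
docker push registry.example.com/myapp:1.0.0
```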

@meyay Ah, that clarifies it a lot! If I may ask one more question (not related to this). Why do we need containers if we have images? Can’t we create new instances directly from images?

Edit:
Is my understanding correct?
Image > Container > Instance of Container (3 things)

or

Image > Instance of Container (2 things)

Containers are the run time instances of images.

If you are coming from a developer background, this analogy might be helpful: an image is like a class, while a container is like an instance of that class.

In your understanding, there is a layer that does not exist.
image -> container (runtime instance of image).
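Following the class/instance analogy, you can start several independent containers from the same image (the image name here is just an example; a Docker daemon is assumed):

```shell
# Two independent "instances" of the same image
docker run -d --name web1 nginx:1.25
docker run -d --name web2 nginx:1.25

# Both show up as separate containers
docker ps
```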

@meyay Thank you, it is getting clearer. But I have also seen containers that are not running (stopped). What are they? What do you call a stopped container? Is a stopped container like an image or a file? If a container is a runtime instance of an image, then what / where is a stopped container? This one thing is still confusing to me…

Edit:
Is it like this?

Image > Stopped container? > Container

Exactly that: a stopped container. It is still the runtime instance, but in a stopped state.

@meyay

A runtime instance in a stopped state

Uhm… what does that mean? Say a container is like a program that is running. Then the program = image, and the process = container. Right? But if you close the program, it is no longer anywhere to be found. But if you stop a container… what is that?

Your analogy doesn’t cover it.

A container is just an isolated process on the host’s kernel, plus a bunch of configuration options to run it and a copy-on-write (CoW) layer to store modified/added files.

A stopped container still represents those configuration options and the CoW layer.
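A minimal sketch of what that looks like in practice (the container and image names are placeholders, and a Docker daemon is assumed):

```shell
docker run -d --name web nginx:1.25
docker stop web

# The stopped container is still there, with its config and CoW layer
docker ps -a --filter name=web

# Files added/changed in the CoW layer are still inspectable
docker diff web

# And it can be started again with that same state
docker start web
```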

May I suggest this fabulous free self-paced docker training: https://container.training/intro-selfpaced.yml.html. Don’t let yourself be demotivated by the high number of slides… many slides can be processed in a couple of seconds, a few might need minutes though :slight_smile:

Thank you @meyay for taking the time to help me. I will definitely look at that docker training. I think I already understand it better.

Make sure to treat a container as an ephemeral/disposable instance of an image.

Thus, make sure to write persistent data to volumes (a VOLUME declaration in the Dockerfile and -v in docker run) to store it outside the container. Otherwise the data will be written to the CoW layer, which will disappear as soon as you remove the container - which will eventually happen, as updating a container to use a newer image requires deleting the old container!
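A minimal sketch with a named volume (the volume, mount path and image names are placeholders; a Docker daemon is assumed):

```shell
# Create a named volume and mount it into the container
docker volume create app-data
docker run -d --name db -v app-data:/var/lib/mysql mysql:8.0

# The container can be removed and recreated; the volume survives
docker rm -f db
docker run -d --name db -v app-data:/var/lib/mysql mysql:8.0
```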

Welcome!