How to update the host machine that runs my docker containers / images?

Hi there,
I am fairly new to Docker, and this might be the most obvious thing for Linux veterans out there, but I was not able to find a quick answer to it.

I have just started to migrate the single Raspberry Pi installations dotted around our house onto a single Ubuntu Server installation with Docker, running on one central i3-10 based machine.

While I think I understand how to update individual Docker images (in my case e.g. mosquitto, zigbee2mqtt, ebusd, raspberrymatic, pihole, node-red, etc.), I am not clear on how to maintain the host machine itself. Primarily: if I update my Ubuntu installation (including, e.g., the Docker version as part of that), do I have to pre-emptively shut down all my containers first, or can this be done "on the fly", with the Linux update running in parallel to the running containers?

If this is documented somewhere, could you kindly point me in that direction (or, if not, share your best-practice approach for a similar environment)?

MANY thanks!
Robert

It depends on many things. On a typical desktop Linux like Fedora or Ubuntu, the machine is not running all the time, so you can simply run the updates, except that you might want to hold your Docker packages, like this:

apt-mark hold docker-ce docker-ce-cli docker-ce-rootless-extras

This is almost always important (and I may have forgotten some packages in the example above). If you let the package manager update Docker at any time, it can change configuration under your running containers that you don't want changed, for example the storage driver, which would make your containers and volumes seemingly disappear.
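When you do want to update Docker, it should be a deliberate step rather than a side effect of a routine upgrade. A rough sketch of how that could look, assuming the held packages above and that you are fine stopping all containers for the duration (the exact stop command depends on how you manage your containers):

```shell
# See which packages are currently held back
apt-mark showhold

# Release the hold so the package manager can see the new Docker version
sudo apt-mark unhold docker-ce docker-ce-cli docker-ce-rootless-extras

# Stop all running containers (Compose users might prefer "docker compose down")
docker stop $(docker ps -q)

# Update everything, including Docker this time
sudo apt-get update && sudo apt-get upgrade

# Put the hold back so the next routine upgrade leaves Docker alone
sudo apt-mark hold docker-ce docker-ce-cli docker-ce-rootless-extras
```

After that you can start your containers again and check that everything came back with `docker ps`.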

I say it is "almost always" important because when you are just playing with docker commands, you don't care whether you suddenly have a newer version.

You can enable "live restore" so you can update even the Docker daemon without stopping your containers, although it is not compatible with Docker Swarm if I am not mistaken. In my experience, when you run a container with --rm to remove it when it stops, that flag is also not compatible with live restore, so that container will disappear. This is why I was not sure whether the containers were really running during the daemon downtime. I could have tested it by completely stopping the daemon instead of just restarting it.
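Enabling live restore is a one-line daemon setting. A minimal sketch, assuming the default config path /etc/docker/daemon.json and that you merge the key into any existing configuration there:

```shell
# /etc/docker/daemon.json should contain (merged with any existing keys):
# {
#   "live-restore": true
# }

# Sending SIGHUP makes dockerd reload its configuration without
# restarting, so running containers are not touched.
sudo kill -SIGHUP "$(pidof dockerd)"
```

With this in place, containers keep running while the daemon itself is down, and the daemon reattaches to them when it comes back up.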

If you have a Docker Swarm or Kubernetes cluster, you can make it highly available with load balancers and multiple instances of each service. Then you can update your hosts one by one. Of course, still don't let the package manager update Docker!
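In Swarm that node-by-node update can be done by draining each node before touching it, so its tasks get rescheduled onto the other nodes first. A sketch, where "worker1" is just a placeholder node name:

```shell
# On a manager node: drain the node so Swarm moves its tasks elsewhere
docker node update --availability drain worker1

# Update the host itself (Ubuntu example), rebooting if the kernel changed
ssh worker1 'sudo apt-get update && sudo apt-get upgrade -y'

# Bring the node back into scheduling rotation
docker node update --availability active worker1
```

Then repeat for the next node, waiting for the services to settle (`docker service ls`) in between.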

There are some special Linux distributions which are really small and also have a special update strategy. CoreOS was one of them, but I didn't use it for long. I remember I could update the system and roll back if something went wrong, but it restarted automatically (obviously I didn't understand CoreOS well enough).

There was "Atomic Host" from Red Hat too, but I only tried that with OpenShift.

I didn't follow closely back then, but if I am right, CoreOS and Project Atomic were merged and became Red Hat CoreOS.

There was RancherOS, which actually ran the system as containers on top of the kernel: one Docker daemon for the system and another for your applications. Now there is RancherOS 2, which I don't really know.

Now there is also PhotonOS, but I don't know much about that either :slight_smile:

So as you can see, how you update depends on your distribution and your goals. If high availability is important to you, run a cluster with multiple machines.