So I have a machine running several Docker containers (I use Portainer to manage them, if that's useful). Every container has various volumes attached and runs on various networks.
Is there a simple and fast way to port the entire configuration to another host?
I can't find a single all-in-one solution for this task.
You can try to stop the Docker daemon, create a tar archive of the Docker data root directory on your old host, and untar the archive on the new host. Of course, everything is done as the root user.
Though it depends on whether your current system and your new system use the same storage driver; otherwise the new storage driver will prevent Docker from using the restored data.
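For reference, a minimal sketch of that approach, assuming the default data root /var/lib/docker and a systemd-managed daemon (adjust paths and service names to your setup):

# on the old host
sudo systemctl stop docker docker.socket
sudo tar czf /tmp/docker-data-root.tar.gz -C /var/lib/docker .

# on the new host (with its daemon stopped as well), after copying the archive over
sudo tar xzf /tmp/docker-data-root.tar.gz -C /var/lib/docker
sudo systemctl start docker

You can compare the storage drivers of both hosts with docker info --format '{{.Driver}}'.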
Personally, I would back up the content of the volumes instead and restore the backups on the new host, then clone the git repos that store my compose files and re-create the containers from the compose files.
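The re-creation part is then just a matter of cloning the repo(s) and bringing the stacks back up, roughly like this (the repository URL and folder layout are made up for illustration):

git clone https://example.com/your/compose-files.git
cd compose-files/grafana
docker compose up -d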
I agree with @meyay, and I would also add that if you move an existing Docker data root to another host, there is also a risk of ending up with wrong network bridges / Docker IP addresses if that machine is in a different network or already has networks created by something else.
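If you re-create the containers from compose files, the networks declared there are simply created fresh on the new host. If you move the data root instead, you may want to compare it against what already exists on the new machine first, for example:

docker network ls
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'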
I think moving the volumes attached to every container should do the job. Is there a guide for doing that, considering every container I use has at least two volumes attached to it?
Instead of --volumes-from grafana-new, make sure to mount the volume directly. I am not sure whether bind mounts (-v /host/path:/container/path) are included when using --volumes-from.
Furthermore, make sure the volume is not in use while you perform the backup, so that the backups are consistent.
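To see which containers currently use a volume, and stop them for the duration of the backup, something like this should work (grafana-storage is just an example name):

# list containers that mount the volume
docker ps --filter volume=grafana-storage --format '{{.Names}}'
# stop them before taking the backup
docker stop $(docker ps -q --filter volume=grafana-storage)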
I used this approach to back up volumes in the past:
# check `docker volume ls` for volume names
source_volume=name_of_your_source_volume
backup_date=$(date +"%Y%m%d")
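# run a throwaway alpine container with the source volume mounted at /source and
# ./archive mounted at /backup, then tar the volume content into a dated archive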
docker run --tty --rm --interactive --volume ${source_volume}:/source --volume ${PWD}/archive:/backup alpine tar czvf /backup/${source_volume}_${backup_date}.tar.gz -C /source .
And restored them like this:
target_volume=name_of_your_target_volume
restore_archive=${target_volume}_${backup_date}.tar.gz # of course the real file name, without placeholders
docker run --tty --rm --interactive --volume ${target_volume}:/target --volume ${PWD}/archive:/backup alpine tar xzvf /backup/${restore_archive} -C /target
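Afterwards you can sanity-check the restored volume by listing its content from a throwaway container, e.g.:

docker run --rm --volume ${target_volume}:/target alpine ls -l /target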
I get this error:
tar: can't open '/backup/grafana-storage_20230825.tar.gz': No such file or directory
The code I'm using:
target_volume=grafana-storage
restore_archive=grafana-storage_20230825.tar.gz
docker run --tty --rm --interactive --volume ${target_volume}:/target --volume ${PWD}/archive:/backup alpine tar xzvf /backup/${restore_archive} -C /target
I've correctly created the grafana-storage volume before launching the command, and I'm running the command after cd'ing into the folder that contains grafana-storage_20230825.tar.gz.
I just happened to have the files in an archive subfolder, which is why the -v binding mounts ${PWD}/archive. You did the right thing by moving the file into the expected path, though you could also have removed the archive folder from the -v binding instead.
Note: the path is meant to be relative to the current folder. An absolute path would require a change in the -v binding as well.
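In other words, with the archive kept in the current folder, the binding could be shortened to roughly this (using the file name from your post):

docker run --tty --rm --interactive --volume grafana-storage:/target --volume ${PWD}:/backup alpine tar xzvf /backup/grafana-storage_20230825.tar.gz -C /target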