Migrate Docker Containers

Hello, I want to migrate my Docker containers from my Raspberry Pi 4 (ARM64) to an Intel NUC (x86_64).
Can I just mount my Docker data from my old system on the new one? I plan to just mount my /var/lib/docker and start the Docker service. They are both using the same Docker version. Thanks in advance.

What makes you think that binaries for different architectures would be interchangeable?

In a real world scenario, people would only need to copy:
– the docker-compose.yml file per stack
– if bind-mounts are used for volumes: their content
– if local volumes are used for volumes: their content

Then restore everything on the new host, adjust the paths for the volumes if necessary, and restart your stacks.
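For illustration, a rough sketch of that copy step, assuming a stack lives in ~/stacks/myapp with its compose file and bind-mount folders next to it (the paths and host name are placeholders):

  # copy the stack directory (docker-compose.yml plus bind-mount folders) to the new host
  rsync -avz ~/stacks/myapp/ user@new-host:~/stacks/myapp/

  # on the new host: adjust volume paths in docker-compose.yml if needed, then start the stack
  cd ~/stacks/myapp && docker-compose up -d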

Okay, so the image is not interchangeable. But the volume is, right? So if I modify the new docker-compose.yml to use the right image for the architecture and mount the same volume, then my data should be fine, am I right?
I’m using named volumes for all my containers. So I can just:

  1. Move the old /var/lib/docker/volumes to the new system.
  2. Make a new docker-compose.yml that matches the old one, except the images will be for the x86_64 system.
  3. Do a docker-compose pull of the new image and docker-compose up.

Will I be fine? Will the MySQL database data be compatible between the two systems? Thanks for the reply.

Given that the volumes only contain application payload and config files: it should succeed.
Any binaries will not work.

I am afraid moving /var/lib/docker/volumes won’t work, as the metadata registering the volumes in the Docker engine won’t be moved along with them, and the new Docker engine won’t be aware of those volumes.

If I were in your situation, I would “backup” the volumes’ content using an OS container: mount the named volume into one folder, mount a local path into another folder of the container, and use tar to archive the content of the volume. Then use the same approach on the new host to restore the archive back into the target of a named volume.

Using this approach I copied etcd3 and consul data from arm64 to x86_64 and both have been working flawlessly since. I would be surprised if application data were stored differently on different architectures… but then again: there is no guarantee that it is not.
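As a minimal single-volume sketch of that idea (the volume name my-volume and the working directory are placeholders):

  # backup: archive the named volume's content to a tarball in the current directory
  docker run --rm -v my-volume:/source -v "$(pwd)":/backup alpine sh -c 'cd /source && tar czf /backup/my-volume.tar.gz .'

  # restore on the new host: create the volume, then unpack the archive into it
  docker volume create my-volume
  docker run --rm -v my-volume:/target -v "$(pwd)":/backup alpine sh -c 'cd /target && tar xzf /backup/my-volume.tar.gz'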

Okay, now I get it. So the concept is that I have to first run the container with an empty named volume, then stop the container and move the old data into the empty volume. Is that right?
So for example, if I have a volume named mysql-data that contains my database data, I have to:

  1. Create a container using docker compose, name the container “mysql”, and give it a named volume “data”.
  2. Do a docker-compose up
  3. Use docker cp to copy my old data from /var/lib/docker/volumes/mysql-data/_data into the new mysql container where that named volume is mounted.
  4. Restart the container.

This should be fine then right?

I know that config files should be the same between architectures. But what worries me is my database. I have no clue at all when dealing with databases :slight_smile:

I would bootstrap a naked container to back up the content of the volume folder to a bind-mount using tar and do the same on the new host. I would use tar in any case, since it preserves permissions and ownership.

Probably just tarring the content of _data and restoring it on the new host will be sufficient.

edit: since you copy the data, there is no harm in trying, is there? :slight_smile:
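Taken literally, a sketch of that host-side variant could look like this, reusing the mysql-data volume name from your example (run as root so tar preserves ownership):

  # on the old host: archive the volume's _data folder
  sudo tar czf mysql-data.tar.gz -C /var/lib/docker/volumes/mysql-data/_data .

  # on the new host, after `docker volume create mysql-data`: unpack into the fresh volume
  sudo tar xzf mysql-data.tar.gz -C /var/lib/docker/volumes/mysql-data/_data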

Okay, I have a picture of what to do now. So it would be better to use tar instead of the docker cp command?

Yeah, I was hoping it would be as simple as mounting the old data and being done with it, but you mentioned Docker keeps metadata registering the volumes, and that does make sense. I’ll have a lot to do then :slight_smile:

Actually, my NUC hasn’t arrived yet; I asked here first because I didn’t expect to get a reply this fast. Thanks @meyay!

Welcome!

Also, it is always good to know how to actually back up your persisted data, isn’t it?

You will want a solution that allows you to keep a couple of backups of each volume for at least a couple of days.

Once all containers are stopped, something like this should be sufficient to back up all volumes:

 mkdir -p ./backups
 for volume in $(docker volume ls -q); do
   docker run --rm -v "$volume":/source -v "$(pwd)/backups":/backups alpine sh -c "cd /source && tar czvf /backups/${volume}.tar.gz ."
 done

And this to create them again on the new host:

 for backup in $(ls -1 backups); do
   volume="${backup%.tar.gz}"
   docker volume create "$volume"
   docker run --rm -v "$volume":/target -v "$(pwd)/backups":/backups alpine sh -c "cd /target && tar xzvf /backups/${backup}"
 done

Yeah, I’m actually using btrfs with a lot of snapshots to back up my data :slight_smile:
Thanks for the scripts! I’ll back everything up now and restore it when my new system arrives. Thanks again @meyay :heart:

I wrote the scripts off the top of my head; they are merely a demonstration of how the process looks. Still, they should work as written :slight_smile:


Hi, I need to copy a volume directly into S3 instead of using tar, because tar seems to take a lot of time since my volume is around 80 GB. Do you know of a solution?

It’s pretty much the same approach, except you would need to install a volume plugin (like this one: AWS S3 as Docker volumes - DEV Community; haven’t tested it myself though) to access S3, then create a volume using that driver backed by your target S3 bucket. Make sure to provide the required permissions in the IAM instance profile for your EC2 instance (or however the plugin needs it).

Then replace the bind mount -v ./backups:/backups in the command above with -v name-of-your-s3-volume:/backups.
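Purely as an illustration, assuming the plugin-backed volume ended up being named s3-backups (both volume names here are placeholders), a single backup command would then look like this:

  # same as before, but /backups now points at the S3-backed volume instead of a bind mount
  docker run --rm -v my-volume:/source -v s3-backups:/backups alpine sh -c 'cd /source && tar czf /backups/my-volume.tar.gz .'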

Thank you.

Do you think it is a good idea to copy all the data from the volume location on one host to another host directly (after stopping the container)?

Is this a good approach?

For example (backup):

 sudo docker container stop postgres; cd /var/lib/docker/volumes/postgresdocker_postgres-data; sudo aws s3 sync _data s3://XXX/dbdata/_data

and restore on another host like:

 sudo docker container stop postgres; sudo aws s3 sync s3://XXX /var/lib/docker/volumes/postgresdocker_postgres-data; sudo docker container start postgres

In my testing it works at a good speed, but I’m worried about any other issues that could come up in the future.

To be safe, I would create the named volume on the new host first, then copy the data from S3 to the new host. Here’s the step by step:

  1. In the old host, copy your data to S3 like you did above.
  2. On the new host, do docker volume create volumename
  3. Copy the data from S3 to the new volume created.

Why? Because you are using a named volume, not a bind mount. Although the path will be the same (/var/lib/docker/volumes/volumename), a named volume is meant to be handled by Docker.
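A rough sketch of those steps, reusing the volume name and bucket path from your example (adjust both as needed):

  # on the new host: let Docker create and register the named volume first
  docker volume create postgresdocker_postgres-data

  # then restore the data from S3 into the volume's _data folder
  sudo aws s3 sync s3://XXX/dbdata/_data /var/lib/docker/volumes/postgresdocker_postgres-data/_data

  # finally start the container that uses the volume (assuming it already exists on this host)
  sudo docker container start postgres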

Thanks @budimanjojo, I’ll do it the same way. :slight_smile:
