Migrating docker and containers to a _NOT_ dedicated BTRFS filesystem

Hi there,

I’m trying to migrate my Seafile-docker container (and the whole docker installation) to BTRFS. I learned the hard way that docker doesn’t support AUFS on top of BTRFS, and I perfectly understand that. What I don’t understand is why docker requires a dedicated BTRFS block storage device to work (this is mentioned multiple times in the BTRFS storage driver documentation).

Now I have saved all my container images (with docker save) and moved the docker workspace to /mnt/raid/docker/var-lib-docker following this guide. As specified in the guide, I’ve added the /etc/systemd/system/docker.service.d/docker.conf drop-in file for systemd, and it seems to work properly. In fact, I now have this mountpoint instead of the AUFS one (plus my BTRFS RAID 1):
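For reference, the drop-in looks something like this (a sketch based on the guide’s approach; recent docker versions use --data-root, older ones used -g, so the exact flag here is my assumption):

```ini
# /etc/systemd/system/docker.service.d/docker.conf
[Service]
# Clear the distro's ExecStart, then start dockerd with a relocated data root.
ExecStart=
ExecStart=/usr/bin/dockerd --data-root /mnt/raid/docker/var-lib-docker
```

After adding it, a `systemctl daemon-reload` followed by `systemctl restart docker` should pick up the new location.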

/dev/sdb1    on  /mnt/raid                              type  btrfs            (rw,relatime,compress=zlib,space_cache,subvolid=5,subvol=/)
/dev/sdb1    on  /mnt/raid/docker/var-lib-docker/btrfs  type  btrfs            (rw,relatime,compress=zlib,space_cache,subvolid=5,subvol=/docker/var-lib-docker/btrfs)

This configuration looks fine to me, and has the advantage that both the Seafile user data and the docker filesystems share the same (compressed) space (so I don’t have to make two partitions, figure out which one will grow more quickly, then repartition, and so on).

Is there something I’m missing about BTRFS, docker and subvolumes? Will I regret this setup when my Seafile installation is close to 2 TB and any change takes days?

Thanks for any suggestions.


I’ll try to answer my own question a month later. Everything seems to be working pretty well: Seafile runs fine, it reboots correctly whenever the power goes down, there is no noticeable data loss, and so on.

I’ve also set up a cron job that first takes an hourly snapshot of the whole /mnt/raid BTRFS filesystem and then does something like:

btrfs send -p parent_snap latest_snap | ssh user@remote_backup_host sudo btrfs receive remote_backup_dir

This also works well, and identical snapshots seem to consume very little space.
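To make the logic concrete, here is a dry-run sketch of what the hourly job does (it only echoes the commands instead of running them; the snapshot directory and names are my assumptions, not the actual paths):

```shell
#!/bin/sh
# Dry-run sketch of the hourly incremental backup: take a read-only snapshot,
# then send only the delta against the previous one. Paths/names are examples.
SNAP_DIR=/mnt/raid/.snapshots
REMOTE=user@remote_backup_host
REMOTE_DIR=remote_backup_dir

parent="$SNAP_DIR/2018-08-26-10"   # snapshot taken one hour ago (example name)
latest="$SNAP_DIR/2018-08-26-11"   # snapshot to take now (example name)

# 1. Create a new read-only snapshot (-r is required for btrfs send).
echo "btrfs subvolume snapshot -r /mnt/raid $latest"

# 2. Send only the difference between parent and latest to the backup host.
echo "btrfs send -p $parent $latest | ssh $REMOTE sudo btrfs receive $REMOTE_DIR"
```

The key point is the -p flag: with a parent snapshot present on both sides, only the changed extents travel over the wire.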

#> sudo btrfs fi sh
Label: 'btrfs-raid'  uuid: 6466c0be-3b3c-4d43-943a-c6257f1857c5
Total devices 2 FS bytes used 15.95GiB
devid    1 size 1.82TiB used 18.01GiB path /dev/sdb1
devid    2 size 1.82TiB used 18.01GiB path /dev/sdc1

#> btrfs filesystem df /mnt/raid
Data, RAID1: total=16.00GiB, used=15.18GiB
System, RAID1: total=8.00MiB, used=16.00KiB
Metadata, RAID1: total=2.00GiB, used=788.19MiB
GlobalReserve, single: total=272.00MiB, used=0.00B

Diffing one of the first captured logfiles against a recent one, it seems that the roughly 300 identical snapshots taken in August use about 400 MiB of metadata, but take this measurement with a grain of salt.
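The logfiles come from periodically dumping the usage counters; a crontab entry along these lines would produce them (the exact logging setup is my assumption):

```
# Log BTRFS space usage once a day for later diffing (sketch).
0 0 * * * btrfs filesystem df /mnt/raid >> /var/log/btrfs-df.log
```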

Overall, this approach looks quite efficient and I can say I’m proud of it; I hope I won’t be proved wrong in the future, when the usage of Seafile starts to increase seriously.

I am working on another plugin that allows for anything that supports the mount command: if you can “mount” it and the required packages are available in something like CentOS, it should work.

The concept of the plugin is to use a CentOS image preloaded with EPEL. On startup it will call yum install -y <package names>.

After that it will do the necessary mounting by invoking the mount command for the given type and options. Unlike Docker, which uses syscall.Mount, I am planning to use the actual mount command, so if it works on the command line you can expect it to work in the plugin as well.
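A minimal sketch of the idea: assemble a mount command from the volume’s type, options and source, and shell out to it instead of calling syscall.Mount (the filesystem type, options and paths below are placeholders, not the plugin’s actual API):

```shell
#!/bin/sh
# Build the mount command the plugin would execute for one volume.
# All values are illustrative placeholders.
fstype="cifs"
opts="username=alice,vers=3.0"
device="//fileserver/share"
mountpoint="/mnt/volumes/myvol"

cmd="mount -t $fstype -o $opts $device $mountpoint"
echo "$cmd"   # the plugin would run this via the shell instead of syscall.Mount
```

The upside of this design is debuggability: any mount failure can be reproduced by pasting the echoed command into a terminal.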

Although, tbh, I don’t have a need for anything like that myself, since my only needs were CIFS and Gluster; but I’m adding that capability to my project: Another CIFS and GlusterFS plugin