Docker on a LiveCD

Hello again, everyone! :sweat_smile:

You can successfully install and run Docker on a Debian Live ISO, with one problem.

The live system uses an overlay filesystem for writes on top of the read-only root filesystem of the ISO, which is where Docker's /var/lib/docker dir and images are stored.

This means that Docker images (layers) which are preloaded into the Live ISO are always copied to this overlay fs first, before the containers can be started, which takes time.

Possible solutions:

  • Using persistence and pointing Docker to that drive/USB via the data-root option for writes.
    (Tested and working; a minimal example config is below.)
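
For reference, a minimal /etc/docker/daemon.json for the persistence approach; the mount point is only an illustration and depends on where your persistence partition is mounted:

    {
      "data-root": "/mnt/persistence/docker"
    }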

Desired solution:

  • Get Docker to use the images stored on the read-only fs without copying everything to the overlay fs, and instead only perform the containers' new writes there, similar to qemu/kvm snapshots.

If someone can actually come up with a solution for this I’d be willing to pay, because this has been driving me insane lmao.

Can you help us to understand why you want to run Docker Desktop on a Live CD? I seriously can't see any use case for it. If you are just playing with Docker Desktop, figuring out what it can and can't do, that's great, but unless you can come up with a good reason that Docker Inc wants to support, I doubt that it will ever work.

I can imagine some very special cases, like a broken OS and using a LiveCD to run Linux and use the old Docker VM (if that is possible) to back up data.

Hello, thanks for checking in.

A few colleagues and I maintain a bunch of servers with different applications/containers. Most of these servers run in live mode with tmpfs/overlayfs. We update the ISOs daily and want the Docker images to be included with those builds for ease of access.

I’ve already figured out how to import docker images in the build without using docker load, since that’s not possible in chroot, which is how Debian live-build creates the ISOs.

I simply copy the entire /var/lib/docker structure to the ISO during the build.
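
Roughly what that copy looks like (paths will differ; this assumes live-build's includes.chroot mechanism and a build machine that already has the images pulled):

    # stop the daemon so the data root is consistent, then copy it into the build tree
    sudo systemctl stop docker
    sudo cp -a /var/lib/docker config/includes.chroot/var/lib/docker
    sudo systemctl start docker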

This works perfectly and “docker images” returns all the images which were included.

The only problem is that the root file system, where /var/lib/docker is located, will always be read only on a live ISO. All writes are automatically copied to the live system’s overlay fs.

So the instant I start a container, I can see all the image layers being copied to the overlay fs first, before it runs. Depending on the image size this can take a few minutes.

That's why I'm trying to figure out if it's possible for Docker to use the image in read-only mode, only writing the data that is actually being written, not the entire image layer structure. Similar to a snapshot on qemu virtual machines.

First of all, shame on me.
 Somehow I constantly read "Docker Desktop" even though nobody wrote about that.
 Please, if you notice I am writing something stupid, don't hesitate to let me know :slight_smile:

Thank you for the description. However, it still doesn't explain why you need this, so I am not sure it would catch the developers' attention, but at least I think I finally understand better what you want to achieve, so I will share what I know.

Since Docker strongly depends on the kernel and the backing filesystem (the filesystem on which the Docker data root is stored), at least some issues are not unexpected.

This is exactly how Docker works. This is the point of images. Those are never changed; the image layers are only used as read-only layers under the container filesystem. The problem is that the Docker data root contains everything, not just images. If it is not writable, then the container filesystems are not writable either, but it doesn't matter, because even just to create a container you need to write to the Docker data root.
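
On a normal install you can see that split for a running container; with the overlay2 driver, for example, something like this shows the read-only lower layers and the writable upper layer (the container name is just an example):

    docker inspect --format '{{json .GraphDriver}}' mycontainer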

I haven't used any Live ISO recently, but I thought the OS would be in memory where you can install software. So either I am wrong or it is a special case, but I don't think that Docker itself is responsible for that.

Hey man, yes you're right, there are basically two file systems. One is read-only and contains the root tree of the ISO (squashfs), and the other holds all the writes that are performed (overlay). When something is written to a file or directory on the read-only file system, the entire thing is copied to the overlay first. Depending on the size this can take time.
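
Conceptually, the live root is mounted roughly like this (paths simplified; the real mount is set up in the initramfs):

    mount -t overlay overlay \
        -o lowerdir=/run/live/rootfs/filesystem.squashfs,upperdir=/run/live/overlay/rw,workdir=/run/live/overlay/work \
        /root

Any write to a file that only exists in the lowerdir copies that whole file up into the upperdir first, which is the copying described above.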

I have figured out what's being copied to the overlay when the container is started: parts of /var/lib/docker/vfs and /var/lib/docker/image.
As I understand it, vfs holds the layers of an image and image holds all the metadata.

The images have about 13 layers combined, but apparently only a few need to be writable once the container is started, since only half are copied to /var/lib/docker/vfs on the overlay.

I am trying to understand why that is, and how I could get to the point where zero layers need to be writable at startup of the container.

Like you explained, I thought that the image and all of its layers could be read only, while only new container writes would have to be physically written. But this is not what I am experiencing in this scenario.

It seems that the upper layers of the image need to be writable for some reason, in order for the container to start.

Thank you for your time btw, I understand that this is completely unsupported.

Now that was the piece of information that I missed. As I mentioned, Docker, and in this case Docker's performance, depends on the backing filesystem, since different backing filesystems support different storage drivers. overlay2 is the default storage driver nowadays and works as I described, but it can't be used if the backing filesystem doesn't support it. It seems Docker on your Live environment uses vfs, which is not a copy-on-write driver and is intended to be used for testing.

Quote:

The vfs storage driver is intended for testing purposes, and for situations where no copy-on-write filesystem can be used. Performance of this storage driver is poor, and is not generally recommended for production use.

From the documentation of the vfs driver

The VFS storage driver is not a union filesystem; instead, each layer is a directory on disk, and there is no copy-on-write support. To create a new layer, a “deep copy” is done of the previous layer. This leads to lower performance and more space used on disk than other storage drivers.

Since a read-only Live CD needs a special filesystem, Docker chose vfs as the storage driver, because that works on every backing filesystem.
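
If you want to double-check which driver the daemon actually picked, something like this should show it:

    docker info --format '{{.Driver}}'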

You could assume that Docker decided not to support some backing filesystems, but if you read about the overlay filesystem in general, you will find this:

https://docs.kernel.org/filesystems/overlayfs.html

Quote:

A wide range of filesystems supported by Linux can be the lower filesystem, but not all filesystems that are mountable by Linux have the features needed for OverlayFS to work.

Thank you so much for this information!

I have read everything there is on storage drivers and experimented a bit.

Docker cannot use overlay2, btrfs or zfs on a live system with a squashfs lower dir / overlay upper dir.

The only storage driver I could use while having the docker daemon load successfully was devicemapper.

Copy-on-write is supported with this driver, but the docs say it is deprecated and will be removed soon.

I will try to run the container with this driver and report back.

Thanks again, man!

Are you sure devicemapper supports CoW? If I remember right, it allows using loopback devices as storage.

Docker can use fuse-overlayfs on kernels >= 4.18, though I couldn’t find any information regarding the backing filesystems it supports.

Update: I just checked devicemapper; it does support CoW.

Hey Metin, thanks for checking in! :grinning:

Yeah, thanks again for the info.

I have tested everything and here are my findings:

  • whenever /etc/docker/daemon.json is changed, or more specifically includes options like "data-root", "exec-root", or "storage-driver", Docker rewrites the entire /var/lib/docker dir (or the new data root, if data-root is used).

  • installing Docker on a live ISO results in Docker automatically choosing vfs as the default storage driver. The earliest point at which this can be changed is a boot hook, executed during the LiveCD's boot, which stops the daemon, adds {"storage-driver": "devicemapper"} to /etc/docker/daemon.json and starts the daemon again (a rough sketch of such a hook follows this list).

  • after the daemon reloads with the new storage-driver option, Docker automatically rewrites the entire /var/lib/docker dir for devicemapper. The problem is that this process does not carry over the image folders, namely devicemapper and image, which were manually included during the build. "docker images" unfortunately returns nothing.
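
For completeness, the boot hook from the second point is essentially this (where exactly it gets installed depends on your live-build/live-config setup):

    #!/bin/sh
    # stop the daemon, switch the storage driver, start it again
    systemctl stop docker
    mkdir -p /etc/docker
    printf '{\n  "storage-driver": "devicemapper"\n}\n' > /etc/docker/daemon.json
    systemctl start docker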

I believe the reason for this is that Docker recognizes a chroot environment during the build/installation and automatically switches to the vfs storage driver. Since the only opportunity to change this is after boot, when the daemon has already started once, it is impossible to prevent Docker from rewriting the entire /var/lib/docker dir and, in the process, losing the previously added images/folders.

The only solution would be the user’s ability to choose the storage driver from the very beginning, before or during docker’s package installation.

Isn’t there a way to preconfigure the daemon.json in your chroot?

Furthermore, the storage driver affects how images and container data are stored. If the storage driver is changed, existing images and containers become inaccessible, even though the files still exist. If the storage driver is reverted, the images and containers become accessible again.

You will need to run the Docker engine with devicemapper so that images are pulled and written in a way devicemapper can work with. You also want to add an /etc/docker/daemon.json to preconfigure the Docker engine, so that it starts with the correct storage driver right away.
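
Something along these lines, placed so that it already exists before the daemon's first start (with live-build that would be config/includes.chroot/etc/docker/daemon.json):

    {
      "storage-driver": "devicemapper"
    }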

Hey man, thanks for the reply.

Yeah, live-build has an "includes.chroot" folder where files can be added during the chroot stage. I tried to add the daemon.json with the devicemapper driver there, but the docker daemon does not run in the chroot. So the earliest time this changed daemon.json would be picked up is at boot, when the daemon starts. But the install process still chooses vfs before that.

I solved the devicemapper format issue by first changing the storage driver to devicemapper on a regular Docker install, then loading the images and copying the entire /var/lib/docker directory written for devicemapper, with all the image folders now called "devicemapper" and "image". This worked before with vfs too, so I'm hoping nothing changed there.

Yes, I have to figure out a way to start or even install docker with storage-driver devicemapper. I will go on a deep dive and hopefully find something in the docs or maybe github.

Thanks again for the help, you guys are awesome! :blush:

Is there really no way to place /etc/docker/daemon.json before the docker service is started?

Yeah, the modified daemon.json is added right after docker is installed during the chroot stage. So technically when the daemon first starts it reads the daemon.json with devicemapper. But the vfs folder is still present after the daemon starts (empty but present), which tells me that right when docker installs, which is when the initial /var/lib/docker directory is written, it chooses vfs automatically.

Do you know how that works for other storage drivers? What would happen if you installed Docker on a regular Ubuntu install? Would it choose overlay2 by default then? Somebody else suggested I have to find the precise line of code that decides which storage driver is used during installation, change it and recompile from source.

I’m gonna prepare for a couple of all nighters because I am absolutely determined to figure this out lol :smile:

It depends on the backing filesystem:
https://docs.docker.com/storage/storagedriver/select-storage-driver/#supported-backing-filesystems

Which I assume will be ext4 or xfs for many users, which results in overlay2 being used.
You should be able to see in the system journal the order in which the storage drivers are tested and which one is finally used.
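
Something like this should show it, assuming dockerd logs to the systemd journal (the exact wording of the log lines can differ between versions):

    journalctl -u docker.service | grep -i graphdriver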

Hey Metin, good news and bad news. :sweat_smile:

  • I managed to have devicemapper set as the default storage driver at the time of installation.
    For anyone else interested: instead of installing via apt, you need to use the .deb files from the Docker download page (a rough example follows this list).
    For some reason this fixes the daemon issue and it truly uses the driver specified in daemon.json, without any leftover vfs dirs.

  • the problem now is that devicemapper creates sparse files: data (virtual size 100 GB) and metadata (2 GB).
    Building and adding this /var/lib/docker dir to the squashfs works, but as soon as you boot it, Docker immediately tries to copy that data file to the overlay and the daemon fails to start. You can see the available space filling up until it's full, then it's deleted and the whole process starts anew.
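
For the first point, the install is roughly this (filenames are placeholders; the packages come from download.docker.com):

    # inside the live-build chroot, with the .deb files copied in beforehand
    dpkg -i ./containerd.io_<version>_amd64.deb \
            ./docker-ce-cli_<version>_amd64.deb \
            ./docker-ce_<version>_amd64.deb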

Thanks for all the help guys, I will do my best to find a solution for this.

Is there a way to shrink the 100 GB data file to its actual size for the loop-lvm variant of devicemapper (not direct-lvm), similar to the qemu-img convert command with qcow2 images?

No idea.

Last time I used the devicemapper was in 2018, and it was by accident.

Though you might want to consider giving fuse-overlayfs a chance. According to the docs, it is supported with every backing filesystem.

Yeah, I tried, but the daemon won't start with the fuse-overlayfs storage driver.
Apparently, the only options Docker offers for squashfs/overlay filesystems on Ubuntu/Debian are vfs and devicemapper, because those are the only drivers the daemon starts with.

There must be a way to shrink those sparse data files. I'll go on a deep dive to find something.

I’ll give an update if I make some progress on this.
