You can successfully install and run Docker on a Debian Live ISO, with one problem.
The live system uses an overlay filesystem for writes on top of the ISO's read-only root filesystem, which is where Docker's /var/lib/docker dir and the images are stored.
This means that the Docker images (layers) preloaded into the live ISO are always copied to this overlay fs first before the containers can be started, which takes time.
Possible solutions:
Using persistence and pointing Docker at that drive/USB via the data-root option for writes.
(Tested and working)
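For reference, the tested data-root workaround is just one key in daemon.json; a minimal sketch, assuming the persistent drive is mounted at /mnt/persistence (that mount point is just an example):

```bash
# Point Docker's writable state at the persistence partition.
# /mnt/persistence is an assumed example mount point.
sudo mkdir -p /mnt/persistence/docker

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/mnt/persistence/docker"
}
EOF

sudo systemctl restart docker
docker info --format '{{ .DockerRootDir }}'   # should print /mnt/persistence/docker
```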
Desired solution:
Get Docker to use the images stored on the read-only fs without copying everything to the overlay fs, and instead only perform the containers' new writes there, similar to qemu/kvm snapshots.
If someone can actually come up with a solution for this I'd be willing to pay, because this has been driving me insane lmao.
Can you help us to understand why you want to run Docker Desktop on a Live CD? I seriously can't see any use case for it. If you are just playing with Docker Desktop, figuring out what it can and can't do, that's great, but unless you can come up with a good reason that Docker Inc wants to support, I doubt that it will ever work.
I can imagine some very special cases, like a broken OS and using a Live CD to run Linux and use the old Docker VM (if that is possible) to back up data.
A few colleagues and I maintain a bunch of servers with different applications/containers. Most of these servers run in live mode with tmpfs/overlayfs. We update the ISOs daily and want the Docker images to be included in those builds for ease of access.
I've already figured out how to import Docker images in the build without using docker load, since that's not possible in a chroot, which is how Debian live-build creates the ISOs.
I simply copy the entire /var/lib/docker structure to the ISO during the build.
This works perfectly and "docker images" returns all the images which were included.
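Roughly, the preloading step during the build looks like this (a sketch; the image name and paths are only examples, and the copy happens on the build machine into live-build's includes.chroot tree):

```bash
# On the build machine: pull the images you want baked into the ISO
# (debian:bookworm is just an example).
docker pull debian:bookworm

# Stop the daemon so /var/lib/docker is in a consistent state, then copy it
# into live-build's includes.chroot tree so it ends up in the squashfs.
sudo systemctl stop docker
sudo mkdir -p config/includes.chroot/var/lib/docker
sudo cp -a /var/lib/docker/. config/includes.chroot/var/lib/docker/
sudo systemctl start docker
```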
The only problem is that the root filesystem, where /var/lib/docker is located, will always be read-only on a live ISO. All writes automatically go to the live system's overlay fs.
So the instant I start a container, I can see all the image layers being copied to the overlay fs first, before it runs. Depending on the image size this can take a few minutes.
That's why I'm trying to figure out if it's possible for Docker to use the image in read-only mode, writing only the data that is actually being written, not the entire image layer structure. Similar to a snapshot on qemu virtual machines.
First of all, shame on me… Somehow I constantly read "Docker Desktop" where nobody wrote about that… Please, if you notice I am writing something stupid, don't hesitate to let me know.
Thank you for the description. However, it still doesn't explain why you need this, so I am not sure it would catch the developers' attention, but at least I think I finally understand better what you want to achieve, so I will share what I know.
Since Docker strongly depends on the kernel and the backing filesystem (the filesystem on which the Docker data root is stored), at least some issues are not unexpected.
This is exactly how Docker works. This is the point of images. Those are never changed; the image layers are only used as read-only layers under the container filesystem. The problem is that the Docker data root contains everything, not just images. If it is not writable, then the container filesystems are not writable either, but it doesn't matter, because even just to create a container, you need to write to the Docker data root.
I haven't used any Live ISO recently, but I thought the OS would be in memory where you can install software. So either I am wrong or this is a special case, but I don't think that Docker itself is responsible for that.
Hey man, yes you're right, there are basically two filesystems. One is read-only and contains the root tree of the ISO (squashfs), and the other holds all the writes that are performed (overlay). When something is written to a file or directory on the read-only filesystem, the entire thing is copied to the overlay. Depending on the size this can take time.
I have figured out what's being copied to the overlay when the container is started: parts of /var/lib/docker/vfs and /var/lib/docker/image.
As I understand it, vfs holds the layers of an image and image holds all the metadata.
The images have about 13 layers combined, but apparently only a few need to be writable once the container is started, since only half are copied to /var/lib/docker/vfs on the overlay.
I am trying to understand why that is and how I could get to the point where zero layers need to be writable at container startup.
Like you explained, I thought that the image and all of its layers could be read only, while only new container writes would have to be physically written. But this is not what I am experiencing in this scenario.
It seems that the upper layers of the image need to be writable for some reason, in order for the container to start.
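In case it helps to see what I'm measuring, this is roughly how I check which layers end up being copied (IMAGE below is a placeholder for one of the preloaded images):

```bash
# Layer list of one of the preloaded images (IMAGE is a placeholder)
docker history IMAGE

# Per-layer disk usage under the vfs driver's layout
# (vfs keeps every layer as a full directory under vfs/dir/)
sudo du -sh /var/lib/docker/vfs/dir/* | sort -h
```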
Thank you for your time btw, I understand that this is completely unsupported.
Now that is the piece of information I was missing. As I mentioned, Docker, and in this case Docker's performance, depends on the backing filesystem, since different backing filesystems support different storage drivers. overlay2 has been the default storage driver for a while and works as I described, but it can't be used if the backing filesystem doesn't support it. It seems Docker on your live environment uses vfs, which is not a copy-on-write driver and is intended to be used for testing.
Quote:
The vfs storage driver is intended for testing purposes, and for situations where no copy-on-write filesystem can be used. Performance of this storage driver is poor, and is not generally recommended for production use.
From the documentation of the vfs driver
The VFS storage driver is not a union filesystem; instead, each layer is a directory on disk, and there is no copy-on-write support. To create a new layer, a "deep copy" is done of the previous layer. This leads to lower performance and more space used on disk than other storage drivers.
Since a read-only Live CD needs a special filesystem, Docker chose vfs as the storage driver because that works on every backing filesystem.
You could assume that Docker decided not to support some backing filesystems, but if you read about the overlay filesystem in general, you will find this:
A wide range of filesystems supported by Linux can be the lower filesystem, but not all filesystems that are mountable by Linux have the features needed for OverlayFS to work.
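An easy way to confirm which driver the daemon actually picked on your live system, and what it is sitting on, is something like:

```bash
# Storage driver the running daemon selected
docker info --format '{{ .Driver }}'

# Backing filesystem of the Docker data root (overlay/squashfs/ext4/...)
df -T /var/lib/docker
```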
I have tested everything and here are my findings:
Whenever /etc/docker/daemon.json is changed, or more specifically includes options like "data-root", "exec-root", or "storage-driver", Docker rewrites the entire /var/lib/docker dir (or the new main dir if data-root is used).
Installing Docker on a live ISO results in Docker automatically choosing vfs as the default storage driver. The earliest time this can be changed is with a boot hook, which is executed during the live CD's boot, stops the daemon, adds {"storage-driver": "devicemapper"} to /etc/docker/daemon.json and starts the daemon again (see the sketch after this list).
After the daemon reloads with the new storage-driver option, Docker automatically rewrites the entire /var/lib/docker dir for devicemapper. The problem is that this process does not include the image folders, namely devicemapper and image, which were manually included during the build. "docker images" unfortunately returns nothing.
I believe the reason for this is that Docker recognizes a chroot environment during the build/installation and automatically switches to the vfs storage driver. Since the only way and time to change this is after boot, when the daemon has already started once, it is impossible to prevent Docker from rewriting the entire /var/lib/docker dir and, in the process, losing the previously added images/folders.
The only solution would be the user's ability to choose the storage driver from the very beginning, before or during Docker's package installation.
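For completeness, the boot hook mentioned above is roughly this (a sketch; how it is wired in, e.g. via a live-config hook or a oneshot systemd unit, is up to you, and it still triggers the data-root rewrite described above):

```bash
#!/bin/sh
# Boot hook sketch: switch the storage driver after the live system is up.
# Caveat from above: the daemon then rewrites /var/lib/docker for the new
# driver and the preloaded image folders are lost in the process.
systemctl stop docker

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "devicemapper"
}
EOF

systemctl start docker
```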
Isn't there a way to preconfigure the daemon.json in your chroot?
Furthermore, the storage driver will affect how images and container data are stored. If the storage driver is changed, existing images and containers become inaccessible, even though the files still exist. If the storage driver is reverted, the images and containers become accessible again.
You will need to run the Docker engine with devicemapper so that the images you pull are written in a way devicemapper can work with. You also want to add an /etc/docker/daemon.json to preconfigure the Docker engine, so that it starts with the correct storage driver right away.
Yeah, live-build has an "includes.chroot" folder where files can be added during the chroot stage. I tried to add the daemon.json with the devicemapper driver there, but the Docker daemon does not run in the chroot. So the earliest time this changed daemon.json would be registered is at boot, when the daemon starts. But the install process still chooses vfs before that.
I solved the devicemapper format issue by first changing the storage driver to devicemapper on a regular install of Docker, then loading the images and copying the entire /var/lib/docker directory, now written for devicemapper with all the image folders, called "devicemapper" and "image". This worked before with vfs too, so I'm hoping nothing changed there.
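For reference, the includes.chroot part is as small as this (run from the live-build working directory; it only preconfigures the daemon and does not by itself fix the vfs-at-install problem):

```bash
# Ship a preconfigured daemon.json inside the ISO via live-build's includes.chroot
mkdir -p config/includes.chroot/etc/docker
cat > config/includes.chroot/etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "devicemapper"
}
EOF
```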
Yes, I have to figure out a way to start or even install Docker with the devicemapper storage driver. I will go on a deep dive and hopefully find something in the docs or maybe on GitHub.
Yeah, the modified daemon.json is added right after Docker is installed during the chroot stage. So technically, when the daemon first starts it reads the daemon.json with devicemapper. But the vfs folder is still present after the daemon starts (empty, but present), which tells me that right when Docker installs, which is when the initial /var/lib/docker directory is written, it chooses vfs automatically.
Do you know how that works for other storage drivers? What would happen if you installed Docker on a regular Ubuntu install? Will it choose overlay2 by default then? Somebody else suggested I have to find the precise line of code that decides which storage driver is used during installation, change it and recompile from source.
I'm gonna prepare for a couple of all-nighters because I am absolutely determined to figure this out lol
Which I assume will be ext4 or xfs for many users, which will result in overlay2 being used.
You should be able to see the order the storage drivers are tested and which one finally is used in the system journal.
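Something like this should show it (the exact log wording differs between Docker versions):

```bash
# The daemon logs its graphdriver probing/selection at startup
journalctl -u docker.service -b | grep -i graphdriver
```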
I managed to have devicemapper set as the default storage driver at the time of installation.
For anyone else interested, instead of installing via apt, you need to use the .deb files from the docker page.
For some reason this fixes the daemon issue, and it truly uses the driver specified in daemon.json, without any remaining vfs dirs.
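For anyone reproducing this, the manual install is roughly as follows (file names and versions are placeholders; grab the current packages for your release from download.docker.com):

```bash
# Download the .deb packages for your Debian release from
# https://download.docker.com/linux/debian/dists/<codename>/pool/stable/amd64/
# then install them together (file names below are placeholders).
sudo dpkg -i \
  ./containerd.io_<version>_amd64.deb \
  ./docker-ce-cli_<version>_amd64.deb \
  ./docker-ce_<version>_amd64.deb
```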
The problem now is that devicemapper creates sparse files: data (virtual size 100 GB) and metadata (2 GB).
Building and adding this /var/lib/docker dir to the squashfs works, but as soon as you boot it, Docker immediately tries to copy that data file to the overlay and the daemon fails to start. You can watch the entire space fill up until it's full, then it's deleted and the whole process starts anew.
Thanks for all the help guys, I will do my best to find a solution for this.
Is there a way to shrink the 100 GB data file to its actual size for the loop-lvm variant of devicemapper (not direct-lvm), similar to the qemu-img convert command with qcow2 images?
Yeah I tried, but the daemon won't start with storage-driver fuse-overlayfs.
Apparently, the only options Docker offers for squashfs/overlay filesystems on Ubuntu/Debian are vfs or devicemapper, because those are the only drivers the daemon starts with.
There must be a way to shrink those sparse data files; I'll go on a deep dive to find something.
I'll give an update if I make some progress on this.
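For reference, in loop-lvm mode the sizes of those sparse files can at least be capped up front through devicemapper storage options in daemon.json; a sketch with example sizes (they only take effect when the loop files are first created, so the existing data root has to be rebuilt):

```bash
# Cap the loopback file sizes devicemapper allocates in loop-lvm mode.
# 20G/1G are example values; loop-lvm remains discouraged for production.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.loopdatasize=20G",
    "dm.loopmetadatasize=1G"
  ]
}
EOF
```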
After a few tries, I would like to hear your opinion about the advantages/disadvantages of my approach:
I've managed to use overlay2 on top of a sparse file formatted as ext4.
I added the option "storage-driver": "overlay2" to /etc/docker/daemon.json during the chroot stage of the live CD build.
I mounted the sparse file at /var/lib/docker by also updating /etc/fstab in the chroot stage (a sketch of the whole setup is below).
– It can be allocated to whatever size you need without defining a persistent partition for the live CD.
– If the sparse file needs updating, all you need to do is rebuild the live CD in your environment.
– I think that because the sparse file cannot be resized, my containers are limited in how much data they can store… but for that case I'm satisfied with local mounts.
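To make it concrete, this is roughly what the whole setup boils down to (run inside the chroot during the live-build; the path and the 20G size are just examples, and actual disk usage only grows with what is written):

```bash
# Create and format a sparse ext4 image that will hold the Docker data root
truncate -s 20G /var/docker-data.img
mkfs.ext4 -F /var/docker-data.img

# Select overlay2 explicitly in the shipped daemon.json
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF

# Loop-mount the image over /var/lib/docker at boot
echo '/var/docker-data.img  /var/lib/docker  ext4  loop,defaults  0  0' >> /etc/fstab
```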