I checked the Moby source code and found that this error message is only shown when the containerd image store is used, which is enabled by default in the new version:
containerd image store is now the default for fresh installs.
Although it says "fresh installs", I got the new snapshotter after upgrading as well. I am not sure whether zfs is supposed to work with the new image store, but I disabled the containerd snapshotter in the Docker daemon config:
"features": {
  "containerd-snapshotter": false
}
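For reference, a complete /etc/docker/daemon.json with only that option set might look like this (a minimal sketch, assuming no other daemon options are configured):

```json
{
  "features": {
    "containerd-snapshotter": false
  }
}
```

After editing the file, the daemon needs a restart (e.g. `systemctl restart docker`) for the change to take effect.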
Then I got a different error message, which is understandable since I have no zfs installed on my test machine:
level=info msg="[graphdriver] trying configured driver: zfs"
level=debug msg="zfs command is not available: exec: \"zfs\": executable file not found in $>
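The missing zfs binary explains that fallback failure: with the classic graphdriver, dockerd shells out to the zfs userland tools, so they have to be on the daemon's PATH. A quick check, plus the install step on Ubuntu/Debian (the package name `zfsutils-linux` assumes an Ubuntu/Debian-based test machine):

```shell
# Check whether the zfs userland tools are on PATH
command -v zfs || echo "zfs binary not found"

# On Ubuntu/Debian, the tools come from the zfsutils-linux package
sudo apt-get install zfsutils-linux
```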
Here is the source code, by the way, if anyone is interested.
If I interpret it correctly, the storage driver is added to the checked list only when the containerd snapshotter is used.
There has just been an update in the Ubuntu noble repo where "containerd-snapshotter" is false by default again. Since I had already transitioned to it, I had to re-enable it in daemon.json.
I upgraded docker-ce to 5:29.0.1-1~ubuntu.24.04~noble yesterday.
After upgrading, it starts without any changes to /etc/docker/daemon.json, and I have not encountered any problems for half a day.
This morning docker-ce was upgraded to 5:29.0.2-1~ubuntu.24.04~noble.
I hope nothing goes wrong with this version either.
I have no zfs errors; the daemon just does not start if the data is on zfs. If I remove daemon.json it works fine, creating a fresh /var/lib/docker install folder, but that is on my boot drive (btrfs).
I tried with these settings, removing "graph" for the snapshotter:
{
  "data-root": "/ZFS3WAY24B/docker",
  "storage-driver": "zfs",
  "features": {
    "containerd-snapshotter": true
  }
  #"graph": "/ZFS3WAY24B/docker"
}
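One thing worth noting: daemon.json is strict JSON, which has no comment syntax, so the `#` line above makes the whole file unparseable and dockerd will refuse to start regardless of the storage settings. A cleaned-up version with that line simply removed (paths as in the post above) would be:

```json
{
  "data-root": "/ZFS3WAY24B/docker",
  "storage-driver": "zfs",
  "features": {
    "containerd-snapshotter": true
  }
}
```

It may be worth validating the file (e.g. with `python3 -m json.tool /etc/docker/daemon.json`) before restarting the daemon, since a parse error produces a generic startup failure like the one in the systemd status below.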
systemctl status docker.service
× docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─override.conf
Active: failed (Result: exit-code) since Sun 2025-11-30 20:52:32 AEST; 45s ago
Duration: 19.555s
Invocation: d084d1437de94bd1929d2aac78b47f3f
TriggeredBy: × docker.socket
Docs: https://docs.docker.com
Process: 1715092 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 1715092 (code=exited, status=1/FAILURE)
Nov 30 20:52:32 aio systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Nov 30 20:52:32 aio systemd[1]: docker.service: Start request repeated too quickly.
Nov 30 20:52:32 aio systemd[1]: docker.service: Failed with result 'exit-code'.
Nov 30 20:52:32 aio systemd[1]: Failed to start docker.service - Docker Application Container Engine.
journalctl -xeu docker.service
░░ The job identifier is 12415415 and the job result is failed.
Nov 30 20:52:32 aio systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
░░ Subject: Automatic restarting of a unit has been scheduled
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ Automatic restarting of the unit docker.service has been scheduled, as the result for
░░ the configured Restart= setting for the unit.
Nov 30 20:52:32 aio systemd[1]: docker.service: Start request repeated too quickly.
Nov 30 20:52:32 aio systemd[1]: docker.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ The unit docker.service has entered the 'failed' state with result 'exit-code'.
Nov 30 20:52:32 aio systemd[1]: Failed to start docker.service - Docker Application Container Engine.
░░ Subject: A start job for unit docker.service has failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ A start job for unit docker.service has finished with a failure.
░░
░░ The job identifier is 12416145 and the job result is failed.
Is it possible to get this working on v29.1.2? I am getting the following on start-up:
INFO[2025-12-10T00:00:39.909691465+13:00] Starting up
INFO[2025-12-10T00:00:39.910140054+13:00] OTEL tracing is not configured, using no-op tracer provider
INFO[2025-12-10T00:00:39.910201893+13:00] CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory dir=/etc/cdi
INFO[2025-12-10T00:00:39.910214393+13:00] CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory dir=/var/run/cdi
INFO[2025-12-10T00:00:39.919508674+13:00] Creating a containerd client address=/run/containerd/containerd.sock timeout=1m0s
INFO[2025-12-10T00:00:39.924593297+13:00] Loading containers: start.
INFO[2025-12-10T00:00:39.924640166+13:00] [graphdriver] trying configured driver: zfs
failed to start daemon: error initializing graphdriver: prerequisites for driver not satisfied (wrong filesystem?): zfs
When adding the following to /etc/docker/daemon.json:
"features": {
  "containerd-snapshotter": false
}
I just get a different error:
INFO[2025-12-10T00:01:47.867783275+13:00] Starting up
INFO[2025-12-10T00:01:47.868320874+13:00] OTEL tracing is not configured, using no-op tracer provider
INFO[2025-12-10T00:01:47.868402787+13:00] CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory dir=/etc/cdi
INFO[2025-12-10T00:01:47.868412927+13:00] CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory dir=/var/run/cdi
WARN[2025-12-10T00:01:47.876770206+13:00] "containerd-snapshotter" is now the default and no longer needed to be set
INFO[2025-12-10T00:01:47.877299268+13:00] Creating a containerd client address=/run/containerd/containerd.sock timeout=1m0s
INFO[2025-12-10T00:01:47.882555691+13:00] Loading containers: start.
INFO[2025-12-10T00:01:47.882592654+13:00] Starting daemon with containerd snapshotter integration enabled
WARN[2025-12-10T00:01:47.884579145+13:00] Preferred snapshotter not available in containerd message="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin"
failed to start daemon: configured driver "zfs" not available: unavailable
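The "Preferred snapshotter not available" warning suggests the failure has moved from dockerd to containerd: with the containerd image store, zfs support is a containerd snapshotter plugin, and containerd only loads it if /var/lib/containerd/io.containerd.snapshotter.v1.zfs exists and is itself a ZFS filesystem. A sketch of how that could be set up (the pool name `tank` is a placeholder, not from the posts above):

```shell
# Create a ZFS dataset mounted where containerd's zfs snapshotter expects it
# ("tank" is a placeholder for your actual pool name)
sudo zfs create -o mountpoint=/var/lib/containerd/io.containerd.snapshotter.v1.zfs \
    tank/containerd

sudo systemctl restart containerd

# The zfs snapshotter should now be listed as "ok" instead of "skip"
sudo ctr plugins ls | grep zfs
```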
I am running into issues even when I downgrade to v28 - do I need a clean reinstall at this point?