Unable to start container process: exec: "docker-entrypoint.sh": executable file not found in $PATH: unknown

I had this container running, but by mistake I deleted some of its ZFS datasets, so I had to delete the container entirely, together with the remaining datasets.

I am now trying to re-create the container with docker compose, but I get this error and the container is never deployed:

[+] Running 3/3
 ✔ Network jellystat        Created  0.0s
 ✔ Container jellystat.kv-db  Started  0.2s
 ✔ Container jellystat.kv     Created  0.2s
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "docker-entrypoint.sh": executable file not found in $PATH: unknown

That is strange. Even if I use different container names, the problem persists. I don’t have any issue when creating other containers with docker-compose.

Share your Dockerfile and docker-compose.yml.

There's no Dockerfile, and the docker-compose file doesn't matter either, because even a basic one produces this error:

services:
  jellystat:
    container_name: jellystat
    image: cyfershepard/jellystat:latest

Well, the Docker Hub page of the image links to a GitHub repo for support.

But this is a Docker issue, not an image issue.

The example docker-compose.yml works for me.

I guess that's because you haven't previously deleted this container's ZFS dataset outside of Docker.

I'm not sure how deleting only data can mess with the application. The example compose file only mounts a backup folder, which should not be required.

Do you use additional bind mounts that might be missing files?

What exactly did you delete? You can't just delete ZFS datasets manually. It is possible that you broke your Docker data filesystem, and you will have to reinstall Docker. If it can be fixed without that, I don't know how. Maybe try deleting the image from which you are creating the container. Also run docker system prune and restart the Docker daemon; if you still have the issue, I guess you will need to reinstall Docker.
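The steps above can be sketched as shell commands (container and image names are taken from the compose file earlier in this thread; the systemctl line assumes a systemd host):

```shell
# Remove the broken container and the image it was created from.
docker rm -f jellystat
docker image rm cyfershepard/jellystat:latest

# Clean up dangling images, stopped containers, and unused networks.
docker system prune

# Restart the Docker daemon (systemd hosts), then try re-creating the container.
sudo systemctl restart docker
docker compose up -d
```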

I am aware. I was running a bulk delete, and instead of just snapshots, it also deleted some datasets.
Everything else is working fine, except for jellystat. I am not even sure these two things are related, because I recreated the datasets, then deleted the container (which also deletes the datasets properly), and that worked for every other affected container.
I also did the steps you suggested (except for removing Docker completely), but still no luck.

If you deleted the Docker image and it didn't help, I have only one idea. When you delete an image, it only removes the tag if anything else still refers to its layers. And even though you ran docker system prune, that also removes only dangling images; if the filesystem is damaged, maybe those layers can't be deleted, and when you pull the same image again, the same damaged layers are reused because the hashes match. This is all just speculation. I have never experienced this myself, and solving something like it would require inspecting images, following references and hashes, identifying the damaged filesystem layer, and trying to restore the missing file or the layer itself. I'm not sure I could solve it in person, and it is even harder to guide you through it. Usually docker image inspect can show you the GraphDriver data of the image to find the merged layers, but I don't remember what that looks like with the ZFS storage driver.
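As a starting point, something like this might help (just a sketch; the --format string is standard docker inspect templating, and the dataset path is an assumption that depends on how your zpool is laid out):

```shell
# Show the image's storage-driver metadata (layer references).
docker image inspect cyfershepard/jellystat:latest --format '{{json .GraphDriver}}'

# With the zfs storage driver, each image layer is a ZFS dataset.
# List the datasets under whatever pool/dataset Docker uses on your host
# ("rpool/docker" here is only a placeholder):
zfs list -t filesystem -r rpool/docker
```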

You can try tools like dive, but I don't know whether it supports ZFS, and even if it does, how it would help you; still, maybe it can get you closer to some answers.
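If you want to try it, usage is a single command (assuming dive is installed):

```shell
# Interactively browse the image's layers and their file trees.
dive cyfershepard/jellystat:latest
```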

The documentation also mentions:

However, at this point in time it is not recommended to use the zfs Docker storage driver for production use unless you have substantial experience with ZFS on Linux.

If you have docker compose files and all persistent data is in bind-mounted folders, it would be easy to reinstall Docker and start everything again. If you use named volumes, or keep any data inside containers, you need to back everything up first.
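A common pattern for backing up a named volume is to mount it into a throwaway container and tar its contents ("mydata" here is a placeholder volume name):

```shell
# Archive the contents of the named volume "mydata" into the current directory.
docker run --rm -v mydata:/data -v "$PWD":/backup alpine \
  tar czf /backup/mydata.tar.gz -C /data .
```

Restoring after the reinstall is the reverse: mount the (new, empty) volume the same way and extract the tarball into /data.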