Entry point script not executable - permission denied

Hello all,

I’m new to this forum and I hope this is the correct section to post this.

I’m attempting to build my first container. I have installed Docker Desktop on my laptop following these instructions.

I created a Dockerfile and I’m building it with docker build . -t jsa1987/minidlna-yamaha-avr:local.
The image builds successfully; however, when I try to deploy the container I get the following error:

Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/app/bin/run-minidlna.sh": permission denied: unknown

Now I managed to run the container by overriding the entrypoint in the compose file with entrypoint: ["tail", "-f", "/dev/null"]. If I then open a console in the container, I find that the permissions of the /app/bin/run-minidlna.sh script are 644.
From the console I can run chmod 777 run-minidlna.sh to make the script executable; I’m then able to run the script, start minidlna, and from there everything works as expected.
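As an aside, 777 is more than the script needs; 755 (owner rwx, everyone else r-x) is enough to execute it. A quick local check, using a made-up demo file rather than the actual entrypoint script:

```shell
# run-demo.sh is a hypothetical stand-in for the real script
printf '#!/bin/sh\necho ok\n' > run-demo.sh
chmod 755 run-demo.sh      # owner rwx, group/other r-x
./run-demo.sh              # executes fine without 777
stat -c '%a' run-demo.sh   # shows the octal mode (GNU stat; macOS uses stat -f '%Lp')
```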

Now the permissions of the run-minidlna.sh file on my laptop, where I build the container, are 777, and I currently have the following in the Dockerfile:

COPY app/bin/run-minidlna.sh /app/bin/
WORKDIR /app/bin
RUN ["chmod", "+x", "/app/bin/run-minidlna.sh"]
ENTRYPOINT ["sh", "/app/bin/run-minidlna.sh"]

What do I need to do/change to have the run-minidlna.sh script executable when the container is built so that it can be deployed?

In case something is unclear, it’s a good habit to start with the official documentation:

Using the correct Dockerfile syntax, you can reduce your Dockerfile to this:

COPY --chmod=755 app/bin/run-minidlna.sh /app/bin/
WORKDIR /app/bin
ENTRYPOINT ["/app/bin/run-minidlna.sh"]

Note: you need to make sure the shebang in the first line of your shell script points to an existing absolute path of sh or bash or whatever shell you intend to use.
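You can see what a broken shebang looks like outside of Docker with a quick local experiment (file name and interpreter path here are made up for the demo):

```shell
# bad.sh is a hypothetical example, not the actual entrypoint script
printf '#!/bin/nosuchshell\necho hi\n' > bad.sh
chmod 755 bad.sh
head -1 bad.sh                         # inspect the shebang line
./bad.sh || echo "interpreter not found"   # fails: the shebang path does not exist
```

Inside a container the same problem surfaces as an exec error from the runtime, even though the script itself is executable.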

Thank you for your reply. I had already tried the chmod option with COPY, and I just tried it again.

This is my current Dockerfile:

COPY --chown=1000:1000 --chmod=755 app/bin/run-minidlna.sh /app/bin/
WORKDIR /app/bin
ENTRYPOINT ["sh", "/app/bin/run-minidlna.sh"]

I re-built and tried to deploy the container again. I still get the same error:

Pull complete
minidlna-server Pulled
Container minidlna-server Creating
Container minidlna-server Created
Container minidlna-server Starting
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/app/bin/run-minidlna.sh": permission denied: unknown.

Overriding the entrypoint in the compose file with entrypoint: ["tail", "-f", "/dev/null"], I can open a console; checking the /app/bin folder, this is what I find:

root@minidlna:/app/bin# ls -al
total 16
drwxr-xr-x 1 root root 4096 Apr 13 16:32 .
drwxr-xr-x 1 root root 4096 Apr 13 16:31 ..
-rw-r--r-- 1 root root 4899 Apr 13 16:29 run-minidlna.sh

Neither the chmod nor the chown options used with COPY in the Dockerfile had any effect.

The script has a shebang in the first line #!/bin/bash.

Does anybody have any suggestions to fix this?

There is a reason why my example didn’t have "sh": it is not required if done right, and wrapping the entrypoint in it is generally an antipattern.
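The difference is easy to demonstrate locally (demo.sh is a stand-in for the real entrypoint script): passing the file to sh only needs read permission, while executing it directly needs the execute bit plus a valid shebang, which is exactly what the exec-form ENTRYPOINT relies on:

```shell
# demo.sh is a hypothetical stand-in for the real entrypoint script
printf '#!/bin/sh\necho hello\n' > demo.sh
chmod 644 demo.sh

sh demo.sh                 # works: sh just reads the file, no execute bit needed
./demo.sh || echo denied   # fails: the execute bit is missing

chmod 755 demo.sh
./demo.sh                  # works: the kernel honors the shebang
```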

Did you check if /bin/bash actually exists inside the container? There is a reason why I wrote it like this:

Please share the compose file or docker run command you use to start a container based on the image.

Do you also mount a volume into /app/bin or /app in the container? That would explain why it doesn’t matter what permissions you set, as you always override them with the version on the volume.

Below is the compose file I’m using to start the container. The last line is there for troubleshooting, so that I can get the container started and access it via a console.

version: "3"
    external: true

services:
  minidlna-server:
    image: jsa1987/minidlna-yamaha-avr
    container_name: minidlna-server
    hostname: minidlna.jnj.freeddns.org
    environment:
      - MINIDLNA_DIR_A_1=/music/library1
      - MINIDLNA_FRIENDLY_NAME=minidlna-server
      - MINIDLNA_FORCE_SORT_CRITERIA=+upnp:class,-dc:date,+upnp:album,+upnp:originalTrackNumber,+dc:title
      - PUID=1000
      - PGID=1000
    volumes:
      - /mnt/plex/media/music:/music/library1
      - ./config/log:/log
      - ./config/db:/db
    restart: unless-stopped
    entrypoint: ["tail", "-f", "/dev/null"]

/bin/bash does exist in the container:

root@minidlna:/bin# ls -al bash
-rwxr-xr-x 1 root root 1265648 Apr 23  2023 bash

No, I’m not mounting any volumes at those paths…

So the path from the shebang is valid, the permissions are correct. I don’t see a reason for it to not work.

Is the image available on docker hub? So that we can try for ourselves?
Update: it is, but it seems to be an image from before --chmod was added as a parameter to the COPY instruction:

me@docker:~$ docker run --rm -ti --entrypoint bash jsa1987/minidlna-yamaha-avr
root@1bfaaa64ce80:/app/bin# ls -l
total 8
-rw-r--r-- 1 root root 4899 Apr 13 16:29 run-minidlna.sh

This is exactly my problem. Regardless of whether I add the chmod option to the COPY instruction, or even run chmod as a separate instruction in the Dockerfile, in the end the permissions of the file in the built image are always -rw-r--r--.

It seems the chmod-related changes I’m making to the Dockerfile are ignored when re-building the image.

We are at a point where it makes sense for you to share a link to a GitHub repo that allows us to build the image ourselves.

Note: GitHub - JSa1987/miniDLNA-YamahaAVR-docker: Docker container with miniDLNA for old Yamaha AV Receivers does not include the entrypoint script, and the Dockerfile does not include your specific COPY instruction to copy the entrypoint script. In its current state the repo does not allow building the image.

Never mind. I created a short example to illustrate it:

# create entrypoint
cat <<EOF > entrypoint.sh
#!/bin/sh
echo "it worked!"
sleep 1000
EOF

# change permission to something non executable
chmod 644 entrypoint.sh

# build the image
docker build . -f - -t entrypoint-test --no-cache --pull --force-rm <<EOF
FROM debian:stable-slim

COPY --chmod=755 entrypoint.sh /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
EOF

Then I run a container based on the image:

me@docker:~/test/e$ docker run --rm -ti entrypoint-test
it worked!

Now here is the interesting part: it works like a charm on version 26.0.x, but does not on 20.10.x.

Checking it with ls -l confirmed that observation:

docker run -ti --rm --entrypoint bash entrypoint-test -c 'ls -l'

Can you share the output of docker info?

Update: My bad. I missed that version 20.10.x didn’t use BuildKit as the default builder. The --chmod argument requires BuildKit, which is enabled by default on all newer Docker versions.

Of course, it works with 20.10.x if I explicitly use buildkit:

DOCKER_BUILDKIT=1 docker build . -f - -t entrypoint-test --no-cache --pull --force-rm <<EOF
FROM debian:stable-slim

COPY --chmod=755 entrypoint.sh /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
EOF
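For completeness: instead of exporting DOCKER_BUILDKIT per invocation, older engines such as 20.10.x can also enable BuildKit persistently via the daemon configuration (this is the documented /etc/docker/daemon.json option; the daemon must be restarted afterwards):

```json
{
  "features": {
    "buildkit": true
  }
}
```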

@meyay I have updated GitHub - JSa1987/miniDLNA-YamahaAVR-docker: Docker container with miniDLNA for old Yamaha AV Receivers with the current version of the files.

I will explain what I’m trying to achieve overall, as this is the first container I have ever tried to build and maybe the way I’m approaching this is incorrect.
I want to build a container that runs minidlna, but with a slight change to the code in upnpsoap.c. So my idea was to download the whole minidlna source via git, replace upnpsoap.c with the modified version, and re-compile everything, then run minidlna with the /app/bin/run-minidlna.sh entrypoint script.
Overall this works: if I log in with a console in the container after it is built and manually change the permissions of /app/bin/run-minidlna.sh, I can run the script and everything works as expected. But please feel free to advise if there is a better way of doing this.

Below is the output of docker info on the laptop where I’m building the container. However, this is then deployed on another machine.

josto@josto-laptop-debian:~/Documents/GitHub/miniDLNA-YamahaAVR-docker$ docker info
Client: Docker Engine - Community
 Version:    26.0.1
 Context:    desktop-linux
 Debug Mode: false
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.13.1-desktop.1
    Path:     /usr/lib/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.26.1-desktop.1
    Path:     /usr/lib/docker/cli-plugins/docker-compose
  debug: Get a shell into any image or container. (Docker Inc.)
    Version:  0.0.27
    Path:     /usr/lib/docker/cli-plugins/docker-debug
  dev: Docker Dev Environments (Docker Inc.)
    Version:  v0.1.2
    Path:     /usr/lib/docker/cli-plugins/docker-dev
  extension: Manages Docker extensions (Docker Inc.)
    Version:  v0.2.23
    Path:     /usr/lib/docker/cli-plugins/docker-extension
  feedback: Provide feedback, right in your terminal! (Docker Inc.)
    Version:  v1.0.4
    Path:     /usr/lib/docker/cli-plugins/docker-feedback
  init: Creates Docker-related starter files for your project (Docker Inc.)
    Version:  v1.1.0
    Path:     /usr/lib/docker/cli-plugins/docker-init
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
    Version:  0.6.0
    Path:     /usr/lib/docker/cli-plugins/docker-sbom
  scout: Docker Scout (Docker Inc.)
    Version:  v1.6.3
    Path:     /usr/lib/docker/cli-plugins/docker-scout

 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 2
 Server Version: 26.0.0
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
   Profile: unconfined
 Kernel Version: 6.6.22-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 1.833GiB
 Name: docker-desktop
 ID: f387d761-0daf-48e0-abd0-6ba9154150f1
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Experimental: false
 Insecure Registries:
 Live Restore Enabled: false

WARNING: daemon is not using the default seccomp profile

I have just updated my build.sh (see GitHub repository) with the following to force the use of BuildKit:

DOCKER_BUILDKIT=1 docker build . \
    --build-arg BASE_IMAGE=${expanded_base_image} \
    --build-arg USE_APT_PROXY=${proxy} \
    -t jsa1987/minidlna-yamaha-avr:$tag \
    --no-cache --pull --force-rm

I then pushed the newly built image to Docker Hub and tried to re-deploy it from there, but I still see the same behavior:

root@minidlna:/app/bin# ls -al
total 16
drwxr-xr-x 1 root root 4096 Apr 13 16:32 .
drwxr-xr-x 1 root root 4096 Apr 13 16:31 ..
-rw-r--r-- 1 root root 4899 Apr 13 16:29 run-minidlna.sh

Docker Desktop has used BuildKit by default for a long time, and enabling/disabling it with the DOCKER_BUILDKIT variable is deprecated and could be removed later. So unless you have an entrypoint or command that changes the permissions, it should work. I tried your repository and everything worked. Maybe the only difference is that I used macOS, not Linux.

@rimelek I just want to ensure I understand this correctly: you downloaded the repository from GitHub, were able to build and deploy the container, and ran it with run-minidlna.sh as the entrypoint?

If this is the case then the problem is most probably with how I build the image…

Yes, I could build the image and run it without any error. And yes, I cloned the GitHub repo and used the default base image.

Today I spent some time on this and finally got it to work. The COPY --chmod=755 instruction was indeed the solution. The reason it appeared not to be working in my case was that I was pushing the new image with the “local” tag but pulling it with the “latest” tag…
In short, I was not testing the latest build after I made the suggested change to use COPY --chmod=755.
