I tried with root PUID/PGID as well as PUID and PGID 1000, but also PUID=1050 (the user on my NAS) and PGID=100.
Neither worked.
Error from the running log:
In StreamHandler.php line 146:
The stream or file "/var/www/localhost/heimdall/storage/logs/laravel-2022-07-22.log" could not be opened in append mode: failed to open stream: Permission denied
The exception occurred while attempting to log: The stream or file "/var/www/localhost/heimdall/storage/logs/laravel-2022-07-22.log" could not be opened in append mode: failed to open stream: Permission denied
The exception occurred while attempting to log: SQLSTATE[HY000]: General error: 8 attempt to write a readonly database (SQL: create table "migrations" ("id" integer not null primary key autoincrement, "migration" varchar not null, "batch" integer not null))
Context: {"exception":{"errorInfo":["HY000",8,"attempt to write a readonly database"]}}
Context: {"exception":{}}
You need to make sure the credentials you use actually allow accessing the folders and files in the share. You mount the CIFS share locally with uid=1050 and gid=100, so your PUID/PGID environment variables should reflect those ids as well.
That said, try PUID=1050 and PGID=100.
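For reference, a minimal sketch of how those environment variables would look in the Compose service (the image name and the rest of the service definition are assumptions, not taken from your file):

```yaml
services:
  heimdall:
    image: lscr.io/linuxserver/heimdall:latest  # assumed image
    environment:
      # Match the uid/gid used to mount the CIFS share locally
      - PUID=1050
      - PGID=100
```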
To be honest, instead of using CIFS, I would enable NFSv4 on the NAS and use Docker named volumes backed by the remote NFSv4 share instead.
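A sketch of such an NFS-backed named volume declaration, using the NAS address and export path that appear in the error messages later in this thread (adjust both to your setup):

```yaml
volumes:
  heimdall_nfs:
    driver: local
    driver_opts:
      type: nfs
      # addr is the NAS ip; nfsvers=4 requires NFSv4 enabled on the NAS
      o: addr=10.0.0.3,nfsvers=4
      # note the leading colon before the export path
      device: ":/volume1/backup/mrkdock01/volumes/heimdall"
```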
Failure
failed to deploy a stack: Container 9085c08b18b1_heimdall Recreate Error response from daemon: failed to mount local volume: mount :/volume1/backup/mrkdock01/volumes/heimdall:/var/lib/docker/volumes/heimdall_nfs/_data, data: addr=10.0.0.3: no such file or directory
The hostname points to the subnet where my Docker nodes are. Instead of squashing all uid/gid to the admin, I leave them as they are and make sure my container uses the same uid/gid. Regarding security: do you really use Kerberos?
I declare the named volume in the compose file like this:
That looks great. Would you mind sharing an original compose file with at least two containers, so I can learn the settings?
I would really appreciate it.
Your last compose file already does it correctly: declare the named volume, then use the named volume in a service. The only notes I have are to change the version to "2.4" and, of course, to align the parameters of the NFS named volume with what I provided in my last post.
Forgive me if I don't share any of my compose files - there wouldn't be anything different from what you already use in your last compose file…
Deployment error
failed to deploy a stack: Network heimdall_default Creating Network heimdall_default Created Container heimdall Creating Error response from daemon: failed to mount local volume: mount :/volume1/backup/mrkdock01/volumes/heimdall:/var/lib/docker/volumes/heimdall_nfs/_data, data: addr=10.0.0.3: no such file or directory
I'm sorry to ask so persistently, but I'm a beginner and eager to learn. So thank you for your help.
Does the backup share really exist on /volume1, and does it really have the subfolders mrkdock01-volumes/heimdall in it? I am not sure if the dash in the device string is problematic; try quoting the value, and if that doesn't help, create a folder without a dash in it and try again.
I am also uncertain about the rw in your "o" line. I don't use it there, as you already configured rw at the share level on your Syno AND can control it in the volume section of your service by just appending :rw to the end.
Apart from that, the compose file looks good to me.
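As a sketch of what I mean: leave rw out of the "o" option string and control write access at the service mount instead (the /config container path is an assumption for a Heimdall container):

```yaml
volumes:
  heimdall_nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,nfsvers=4   # no rw here
      device: ":/volume1/backup/mrkdock01/volumes/heimdall"

services:
  heimdall:
    volumes:
      # write access controlled per mount by appending :rw
      - heimdall_nfs:/config:rw
```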
Removing the rw did the trick. Oddly enough, it didn't work at the beginning. Only after I had created a volume manually and added a container without Compose did it work, and after that it also worked with Compose.
So thank you again and have a great weekend. Greetings from Switzerland to Germany.
I meant to mention that Docker volumes are immutable. Once created, configuration changes to the volume declaration in the compose file won't be applied to the Docker volume. You need to manually remove the volume and let docker-compose (re-)create it using the new settings. With a remote-share-backed volume, the volume itself is simply a handle that stores the configuration - deleting it won't delete data on the remote share.
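A sketch of that manual removal, using the volume name heimdall_nfs that appears in the error messages above (stack and service names are assumptions):

```shell
# Stop the stack so the volume is no longer in use
docker-compose down
# Remove only the volume handle - the data on the NFS share is untouched
docker volume rm heimdall_nfs
# Bring the stack back up; docker-compose re-creates the volume
# from the (updated) declaration in the compose file
docker-compose up -d
```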
Wish you a charming weekend as well. Greetings back to Switzerland
I was able to get it to work, but now I have some permission issues.
E.g.:
Paperless-ngx docker container starting…
Creating directory /tmp/paperless
Adjusting permissions of paperless files. This may take a while.
Waiting for PostgreSQL to start…
Waiting for Redis: redis://broker:6379
Connected to Redis broker: redis://broker:6379
Apply database migrations…
SystemCheckError: System check identified some issues:
ERRORS:
?: PAPERLESS_CONSUMPTION_DIR is not writeable
HINT: Set the permissions of
drwxrwxrwx /usr/src/paperless/src/…/consume
to be writeable by the user running the Paperless services
?: PAPERLESS_MEDIA_ROOT is not writeable
HINT: Set the permissions of
drwxrwxrwx /usr/src/paperless/src/…/media
to be writeable by the user running the Paperless services
Do I need to change something on my NAS or on the compose config?
According to the paperless-ngx docs, the environment variables USERMAP_UID and USERMAP_GID are correct. Next we can check whether the owners of the folders match the same UID:GID.
Please execute the following commands on the NAS in an SSH terminal:
stat --format="%u:%g %a" /volume1/dockervol/paperless/data
stat --format="%u:%g %a" /volume1/dockervol/paperless/media
stat --format="%u:%g %a" /volume1/dockervol/paperless/export
stat --format="%u:%g %a" /volume1/dockervol/paperless/consume
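If the reported owner does not match the container's USERMAP_UID/USERMAP_GID values, a chown on the NAS would be the sketch of a fix (1000:1000 here is only an assumed example value, not taken from your setup):

```shell
# Align folder ownership with the USERMAP_UID:USERMAP_GID
# used by the paperless container (example values)
chown -R 1000:1000 /volume1/dockervol/paperless/consume
chown -R 1000:1000 /volume1/dockervol/paperless/media
```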
This should be fine. This is one of those containers that starts as root, chowns the folders, and then starts the application process as the restricted user.
I am not sure why the error message output suggested 777, but you could give it a try. On the other hand, the error message was raised for every data directory…
I am starting to feel this is a problem caused by ACLs. When setting permissions using the UI, Syno sets ACLs instead of Unix file permissions. Even though Docker only cares about the Unix file permissions, the Synology could be preventing the container from actually accessing the folder due to an ACL. I removed the ACLs on all folders that I use as NFS exports.
I have been trying to find notes on how to use "synoacltool" to remove ACLs from the CLI, but I couldn't find them yet.
Update:
You can check the ACLs for /volume1/dockervol/paperless with the following (must be executed as root!):
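A sketch of the check and removal, assuming the stock synoacltool that ships with DSM:

```shell
# Show the ACL entries on the folder
# (prints an "It's Linux mode" message when no ACLs are set)
synoacltool -get /volume1/dockervol/paperless
# Remove all ACL entries, falling back to plain Unix permissions
synoacltool -del /volume1/dockervol/paperless
```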
Adjusting permissions of paperless files. This may take a while.
Waiting for PostgreSQL to start…
Waiting for Redis: redis://broker:6379
Connected to Redis broker: redis://broker:6379
Apply database migrations…
SystemCheckError: System check identified some issues:
ERRORS:
?: PAPERLESS_CONSUMPTION_DIR is not writeable
HINT: Set the permissions of
drwxrwxrwx /usr/src/paperless/src/…/consume
to be writeable by the user running the Paperless services
?: PAPERLESS_MEDIA_ROOT is not writeable
HINT: Set the permissions of
drwxrwxrwx /usr/src/paperless/src/…/media
to be writeable by the user running the Paperless services
The error message is odd. Instead of (synoacltool.c, 588) Unknown error you should see (synoacltool.c, 359) It's Linux mode after deleting the ACLs and re-reading them.
The error really doesn't make sense, as the container's USERMAP_UID and USERMAP_GID already match the owner UID/GID of the directory. On top of that, the container starts with root permissions and tries to chown the folders to the UID/GID provided by the USERMAP variables.
This is a riddle. I will try to see if I can reproduce the problem later.