Docker volume via CIFS fstab on Synology NAS

It seems that I don’t have write access on the NAS since the containers are giving me write errors.

Setup:
Docker Host: Ubuntu 22.04 LTS
Mount to NAS: fstab
//10.0.0.3/docker-volumes /docker/volumes cifs credentials=/.smbcredentials,uid=1050,gid=100,vers=2.0

The mount works and I’m able to create files over SSH.

The user that makes the mount has full permissions on the NAS, but it seems that Docker also needs permissions somehow.

I tried with an easy one:

version: "2.1"
services:
  heimdall:
    image: lscr.io/linuxserver/heimdall
    container_name: heimdall
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Zurich
    volumes:
      - /docker/volumes/heimdall/config:/config
    ports:
      - 6080:80
      - 6443:443
    restart: unless-stopped

I tried with PUID=1000 and PGID=1000, but also with PUID=1050 (the user on my NAS) and PGID=100.
Neither worked.

Error from the running log:

In StreamHandler.php line 146:
The stream or file "/var/www/localhost/heimdall/storage/logs/laravel-2022-07-22.log" could not be opened in append mode: failed to open stream: Permission denied
The exception occurred while attempting to log: The stream or file "/var/www/localhost/heimdall/storage/logs/laravel-2022-07-22.log" could not be opened in append mode: failed to open stream: Permission denied
The exception occurred while attempting to log: SQLSTATE[HY000]: General error: 8 attempt to write a readonly database (SQL: create table "migrations" ("id" integer not null primary key autoincrement, "migration" varchar not null, "batch" integer not null))
Context: {"exception":{"errorInfo":["HY000",8,"attempt to write a readonly database"]}}
Context: {"exception":{}}

Thank you for all your help.

You need to make sure the credentials you use actually allow accessing folders and files in the share. You mount the CIFS share locally with uid 1050 and gid 100, so your PUID/PGID environment variables should reflect those ids as well.

That said, try PUID=1050 and PGID=100.
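Applied to your compose file, the environment section would look like this (a sketch — the values simply mirror the uid/gid options of your CIFS mount):

```yaml
environment:
  - PUID=1050   # matches uid=1050 from the fstab cifs options
  - PGID=100    # matches gid=100 from the fstab cifs options
  - TZ=Europe/Zurich
```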

To be honest, instead of using CIFS, I would enable NFSv4 on the NAS and use Docker named volumes backed by the remote NFSv4 share.

Thank you for your suggestion. I’d like to try it with NFSv4. How would I create the mount? Do I need to specify it in every container?

Now I receive the following:

Failure
failed to deploy a stack: Container 9085c08b18b1_heimdall Recreate Error response from daemon: failed to mount local volume: mount :/volume1/backup/mrkdock01/volumes/heimdall:/var/lib/docker/volumes/heimdall_nfs/_data, data: addr=10.0.0.3: no such file or directory

version: "2.1"
volumes:
  nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,rw
      device: :/volume1/backup/mrkdock01/volumes/heimdall
services:
  heimdall:
    image: lscr.io/linuxserver/heimdall
    container_name: heimdall
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Zurich
    volumes:
      - nfs:/config
    ports:
      - 6080:80
      - 6443:443
    restart: always


My syno shares use:

The hostname points to my subnet where my Docker nodes are. Instead of squashing all uid/gid to admin, I leave them as they are and make sure my container uses the same uid/gid. Regarding security: do you really use Kerberos?

I declare the named volume in the compose file like this:

volumes:
  myvolume:
    driver_opts:
      type: nfs 
      o: addr=192.168.200.19,nfsvers=4
      device: :/volume1/myshare/subfolder

It has worked like a charm this way for years.

That looks great. Would you mind sharing one of your original compose files with at least two containers, so I can learn from the settings?
I would really appreciate it.

Thank you!

Your last compose file already does it correctly: declare the named volume, then use the named volume in a service. The only note I have is to change the version to “2.4” and, of course, align the parameters of the NFS named volume with what I provided in my last post.

Forgive me if I don’t share any of my compose files - there wouldn’t be anything different from what you already use in your last compose file…


No problem, I’m only using it for testing, so I don’t mind sharing it.

I now used this compose file and got the error below.

version: "2.4"
volumes:
  nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,rw,nfsvers=4
      device: :/volume1/backup/mrkdock01-volumes/heimdall
services:
  heimdall:
    image: lscr.io/linuxserver/heimdall
    container_name: heimdall
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Zurich
    volumes:
      - nfs:/config
    ports:
      - 6080:80
      - 6443:443
    restart: always

Error:

Deployment error
failed to deploy a stack: Network heimdall_default Creating Network heimdall_default Created Container heimdall Creating Error response from daemon: failed to mount local volume: mount :/volume1/backup/mrkdock01/volumes/heimdall:/var/lib/docker/volumes/heimdall_nfs/_data, data: addr=10.0.0.3: no such file or directory


I’m sorry to ask so persistently, but I’m a beginner and eager to learn. :slight_smile: So thank you for your help.

Does the backup share really exist on /volume1, and does it really have the subfolders mrkdock01-volumes/heimdall in it? I am not sure if the dash in the device string is problematic; try quoting the value, and if that doesn’t help, create a folder without a dash in it and try again.

I am also uncertain about the rw in your “o” line. I don’t use it there, as you already configured rw at the share level on your Syno AND can control it in the volumes section of your service by just appending :rw to the end.
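For instance, the service-level form would look like this (a sketch, reusing the volume name from your compose file):

```yaml
volumes:
  - nfs:/config:rw   # access mode controlled here instead of in the o: line
```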

Apart from that, the compose file looks good to me.


Removing the rw did the trick. Oddly enough, it didn’t work at first. Only after I had created a volume manually and added a container without Compose did it work, and after that it also worked with Compose.

So thank you again and have a great weekend. Greetings from Switzerland to Germany.

Pleasure to be of help.

I meant to mention that Docker volumes are immutable. Once created, configuration changes to the volume declaration in the compose file won’t be applied to the Docker volume. You need to manually remove the volume and let docker-compose (re-)create it using the new settings. With a remote-share-backed volume, the volume itself is just a handle that stores the configuration - deleting it won’t delete data on the remote share.
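As a command sketch (the volume name heimdall_nfs is taken from your earlier error message; adjust it to your project name):

```
docker compose down              # stop and remove the stack's containers
docker volume rm heimdall_nfs    # remove only the volume handle;
                                 # data on the NFS share is untouched
docker compose up -d             # compose re-creates the volume with the new settings
```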

Wish you a charming weekend as well. Greetings back to Switzerland :slight_smile:


I was actually able to get it to work, but with some containers I have permission issues.
E.g.:

Paperless-ngx docker container starting…
Creating directory /tmp/paperless
Adjusting permissions of paperless files. This may take a while.
Waiting for PostgreSQL to start…
Waiting for Redis: redis://broker:6379
Connected to Redis broker: redis://broker:6379
Apply database migrations…
SystemCheckError: System check identified some issues:
ERRORS:
?: PAPERLESS_CONSUMPTION_DIR is not writeable
HINT: Set the permissions of
drwxrwxrwx /usr/src/paperless/src/…/consume
to be writeable by the user running the Paperless services
?: PAPERLESS_MEDIA_ROOT is not writeable
HINT: Set the permissions of
drwxrwxrwx /usr/src/paperless/src/…/media
to be writeable by the user running the Paperless services

Do I need to change something on my NAS or on the compose config?

Could be both; please always add the compose file as well.

Error messages of a container alone just indicate “there is a specific problem”, but are not really helpful for understanding the origin of the problem.

version: "3.4"
volumes:
  redis:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,nfsvers=4
      device: :/volume1/dockervol/paperless/redis
  postgresql:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,nfsvers=4
      device: :/volume1/dockervol/paperless/postgresql
  data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,nfsvers=4
      device: :/volume1/dockervol/paperless/data      
  media:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,nfsvers=4
      device: :/volume1/dockervol/paperless/media
  export:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,nfsvers=4
      device: :/volume1/dockervol/paperless/export
  consume:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.3,nfsvers=4
      device: :/volume1/dockervol/paperless/consume

services:
  broker:
    image: redis:6.0
    restart: unless-stopped
    volumes:
      - redis:/data

  db:
    image: postgres:13
    restart: unless-stopped
    volumes:
      - postgresql:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: paperless
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mySecret

  webserver:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    restart: unless-stopped
    depends_on:
      - db
      - broker
    ports:
      - 8066:8000
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - data:/usr/src/paperless/data
      - media:/usr/src/paperless/media
      - export:/usr/src/paperless/export
      - consume:/usr/src/paperless/consume
    environment:
      PAPERLESS_REDIS: redis://broker:6379
      PAPERLESS_DBHOST: db
      PAPERLESS_OCR_LANGUAGE: deu
      PAPERLESS_URL: https://mydomain.com
      PAPERLESS_SECRET_KEY: mySecret
      PAPERLESS_TIME_ZONE: Europe/Zurich
      USERMAP_UID: 1000
      USERMAP_GID: 100

According to the paperless-ngx docs, the environment variables USERMAP_UID and USERMAP_GID are correct. Next we can check whether the owners of the folders match the same UID:GID.

Please execute the following commands on the NAS in an SSH terminal:

stat --format="%u:%g %a" /volume1/dockervol/paperless/data
stat --format="%u:%g %a" /volume1/dockervol/paperless/media
stat --format="%u:%g %a" /volume1/dockervol/paperless/export
stat --format="%u:%g %a" /volume1/dockervol/paperless/consume

The output shows the UID:GID followed by the permissions.
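To illustrate the output format, here is the same call against a throwaway directory (/tmp/statdemo is a made-up path; on the NAS you would point the command at /volume1/dockervol/paperless/&lt;subfolder&gt; instead):

```shell
# Create a demo directory with known permissions and inspect it.
mkdir -p /tmp/statdemo
chmod 755 /tmp/statdemo
# Prints "<uid>:<gid> 755", e.g. the owning uid/gid and the octal mode.
stat --format="%u:%g %a" /tmp/statdemo
```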

UID:GID is 1000:100 for all of them.
At the end, data shows 755 and all the others 777.

This should be fine. This is one of the containers that starts as root, chowns the folder, and starts the application process as the restricted user.

I am not sure why the hint in the error message suggested 777, but you could give it a try. On the other hand, the error message was raised for everything but data.

I am starting to feel this is a problem caused by ACLs. When setting permissions using the UI, Syno sets ACLs instead of Unix file permissions. Even though Docker only cares about the Unix file permissions, it could be the Synology preventing the container from actually accessing the folder due to an ACL. I removed the ACLs on all folders that I use as NFS exports.

I am trying to find my notes on how to use the “synoacltool” to remove ACLs from the CLI, but I couldn’t find them yet.

Update:

You can check the ACLs for /volume1/dockervol/paperless with (must be executed as root!):

find /volume1/dockervol/paperless -exec echo "{}:" \; -exec synoacltool -get {} \;

And remove the ACLs with:

find /volume1/dockervol/paperless -exec echo "{}:" \; -exec synoacltool -del {} \;

I used /volume1/dockervol/paperless itself to make sure it does not carry ACLs that get inherited by the subfolders.

Still the same:

Paperless-ngx docker container starting…
Creating directory /tmp/paperless
Adjusting permissions of paperless files. This may take a while.
Waiting for PostgreSQL to start…
Waiting for Redis: redis://broker:6379
Connected to Redis broker: redis://broker:6379
Apply database migrations…
SystemCheckError: System check identified some issues:
ERRORS:
?: PAPERLESS_CONSUMPTION_DIR is not writeable
HINT: Set the permissions of
drwxrwxrwx /usr/src/paperless/src/…/consume
to be writeable by the user running the Paperless services
?: PAPERLESS_MEDIA_ROOT is not writeable
HINT: Set the permissions of
drwxrwxrwx /usr/src/paperless/src/…/media
to be writeable by the user running the Paperless services

The error message is odd. Instead of (synoacltool.c, 588)Unknown error you should see (synoacltool.c, 359)It's Linux mode after deleting the ACLs and re-reading them.

The error really doesn’t make sense, as the container’s USERMAP_UID and USERMAP_GID already match the owner UID/GID of the directory. On top of that, the container starts with root permissions and tries to chown the folders to the UID/GID provided by the USERMAP variables.

This is a riddle. I can try to reproduce the problem later.

No problem, I appreciate your help, but don’t put too much effort into it. I will try some other stacks later and see if they work.

Thank you!