Access Mounted Volumes On Ubuntu

Hello,

After a couple of days and so many searches with little luck, I've decided to see if someone might be able to help me.

I have a Synology NAS with 3 shared folders.
I enabled NFS on all 3 directories, the permissions are explicit to the Ubuntu server, and for the mapping I have tried No Mapping, Map root to Admin, and Map root to Guest (currently on No Mapping).

On the Ubuntu box I mounted the 3 folders under the /mnt/synology/ directory using the following command:

sudo mount -t nfs 192.168.1.216:/volume1/TV_Shows /mnt/synology/TV_Shows

Everything is available, but the permissions are root:root 777, which is not typically how I like to do things. Nothing too important lives in those directories, though, and the server is behind a reverse proxy and Google Auth. That said, if there is a better way to mount than the command I am using, I'm open to suggestions.
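For reference, the persistent version of that mount via /etc/fstab would presumably look something like this (same server and paths as above; the extra options are illustrative, not a recommendation):

# /etc/fstab
192.168.1.216:/volume1/TV_Shows  /mnt/synology/TV_Shows  nfs  vers=4,rw,hard,noatime  0  0
192.168.1.216:/volume1/download  /mnt/synology/download  nfs  vers=4,rw,hard,noatime  0  0

# apply without rebooting
sudo mount -a

As I understand it, the root:root 777 ownership I'm seeing generally comes from the server side (the export and squash settings), not from the mount command itself, which would explain why chown/chmod from the client has no effect.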

This is the file that is currently “working”:

services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest # Or your preferred tag
    container_name: sonarr
    security_opt:
      - no-new-privileges:true
    environment:
      - PUID=1026 # Replace with your user ID
      - PGID=101 # Replace with your group ID
      - TZ=America/Los_Angeles # Or your timezone
    volumes:
      - $DOCKERDIR/appdata/sonarr:/config
      - /mnt/synology/download:/download
      - /mnt/synology/TV_Shows:/TV_Shows
    ports:
      - 8989:8989
    restart: unless-stopped
    networks:
      - t3_proxy

The problem is with Sonarr. When I try accessing the drives, permission issues block me. I can easily work around this by setting the PUID & PGID to those of the user on the Synology box, but since the user on the Ubuntu box is the typical 1000/1000, I have no idea if that will cause me headaches or roadblocks down the road. (A quick way to compare the IDs is sketched below.)
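For context, a quick way to compare the IDs on each side looks something like this (assuming the linuxserver image's internal user abc):

# uid:gid that owns the NFS mount as seen from the Ubuntu host
stat -c '%u:%g' /mnt/synology/TV_Shows

# uid:gid of the local Ubuntu user (typically 1000:1000)
id

# uid:gid the Sonarr process runs as inside the container
# (linuxserver images map PUID/PGID onto their internal user "abc")
docker exec sonarr id abc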

I have tried this option for the sonarr.yml file:

services:
  # Sonarr - TV Shows Management
  # Set url_base in sonarr settings if using PathPrefix
  sonarr:
    image: lscr.io/linuxserver/sonarr:develop
    container_name: sonarr
    security_opt:
      - no-new-privileges:true
    restart: "no"
    #profiles: ["media", "arrs", "all"]
    networks:
      - t3_proxy
    ports:
      - "8989:8989"
    volumes:
      - $DOCKERDIR/appdata/sonarr:/config
      - nfs_downloads:/download
      - nfs_media:/TV_Shows
      - "/etc/localtime:/etc/localtime:ro"
    environment:
      TZ: $TZ
      PUID: $PUID
      PGID: $PGID

volumes:
  sonarr_config:
  nfs_downloads:
    driver: local
    driver_opts:
      type: nfs
      o: nfsvers=4,addr=192.168.1.216,rw
      device: ":/volume1/download" # Replace with your NFS share path
  nfs_media:
    driver: local
    driver_opts:
      type: nfs
      o: nfsvers=4,addr=192.168.1.216,rw
      device: ":/volume1/TV_Shows"

When using that, the logs show no errors and the container starts, but I just get a 404 error on the screen.

I keep reading about mounting the drives into Docker as volumes and then sharing them with all containers in the stack, but I can't really tell if that is a better way or exactly how to do it. What would my sonarr.yml need to look like, and what would my docker-compose.yml need in it? Would I need to mount the drives to a directory ahead of time, as with the mount command above? Or is there a better way to mount the drives with proper permissions? As it stands, not even root can chown or chmod those directories. (A rough sketch of the shared-volume layout is below.)
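For reference, this is roughly the shape I think people mean, with the second service and its paths purely as placeholders I made up:

services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - nfs_media:/TV_Shows          # named volume, declared once below
  some-other-app:                    # placeholder for any other container in the stack
    image: example/image
    volumes:
      - nfs_media:/media             # same volume, reused

# The volumes block has to sit at the root of the file, at the same
# indentation level as "services:". The local driver mounts the NFS
# export directly, so no prior host-side mount under /mnt is required.
volumes:
  nfs_media:
    driver_opts:
      type: nfs
      o: addr=192.168.1.216,nfsvers=4
      device: ":/volume1/TV_Shows"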

Any assistance would be appreciated.

Thanks!

Ubuntu 24.04.2 LTS / Synology DSM 7.6

nfs and binds share the same dilemma: you need to make sure the uid:gid that executes the process inside the container matches the uid:gid of the folder/remote share.

Your volume declarations look almost like mine, just with the difference that I didn't specify the driver, because local is the default driver anyway, and I didn't set the rw option because it's also the default setting.

The uid:gid of the Ubuntu server should be irrelevant. The setting on the share that works for me with nfsv4 shares on Synology is “squash” = “no mapping”. It has worked like a charm for more than 6 years.

Just out of curiosity: are you sure about DSM 7.6? The latest version these days should be 7.2.2-72806 Update 3.

Thank you for your response, you nailed it on the head with the dilemma (getting the PUID/PGID to match). I had no problem switching the containers to 0 for root, but I'm struggling to find any documentation on this.

So you say your file works? Would you mind pasting it, just a few of the sections, no personal stuff? I want to look at the indentation of the volumes; playing around, I have not managed to stop it flagging me on duplicates, and most of the time I try, it constantly warns me.

In the meantime I found that rclone fixes the issue using SMB instead. Why it works, I have no clue; I changed permissions on these shared directories before mounting, just as I did with the ones before, but in this instance it just works and allows the proper user to connect within Docker. (A rough sketch of such a setup is below.)
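In case it helps anyone searching later, a rough sketch of what an rclone SMB setup like that can look like. The remote name, share, and ids below are placeholders rather than my actual config (the pass line is written by rclone config, which obscures it):

# ~/.config/rclone/rclone.conf
[nas]
type = smb
host = 192.168.1.216
user = <nas-username>
pass = <obscured by rclone config>

# mount the share, presenting files as the local 1000:1000 user
# (--allow-other needs user_allow_other enabled in /etc/fuse.conf)
rclone mount nas:TV_Shows /mnt/synology/TV_Shows --uid 1000 --gid 1000 --allow-other --daemon

The --uid/--gid flags seem to be what sidesteps the ownership problem: FUSE presents every file as that user, regardless of what the NAS reports.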

Again thank you for your time and consideration.

I am not sure what you hope to see, but this is what my nfsv4 and cifs volumes look like:

volumes:

  # nfs example
  random-nfs-share:
    driver_opts:
      type: nfs 
      o: addr=192.168.200.x,nfsvers=4
      device: :/volume1/random-share

  # cifs example
  random-cifs-share:
    driver_opts:
      type: cifs 
      o: username=<username on nas>,password=<password of the nas user>,uid=<uid of the nas user>,gid=<gid of the nas user>,vers=3.0
      device: //192.168.200.x/random-share

I usually go with nfsv4, unless the application in the container requires file operations like move to be atomic; in that case I fall back to cifs. With cifs there is no way around having the credentials in plaintext in the options.
