Losing my mind over docker swarm NFS mounts

I know it might not be perfect, and I have Portainer and other services running without issue. This image requires lots of different subdirectories. Based on my admittedly limited knowledge, the best approach here was discrete volumes, and I cannot use subpath because Swarm is being used.

I have stripped the volumes back to simplicity (similar mounts are present from the same Docker hosts to the same containers, and I have validated each of them by mounting directly from the Docker host on each node via mount -t nfs …, with multiple mounted simultaneously). However, when trying to use them from the image, I always get the following (which volume fails seems to be random):

failed to populate volume: error while mounting volume '/var/lib/docker/volumes/plex_dmb_riven_mnt/_data': failed to mount local volume: mount :/var/nfs/shared/plex_data/dmb/riven/mnt:/var/lib/docker/volumes/plex_dmb_riven_mnt/_data, flags: 0x400, data: addr=192.168.110.5,rsize=8192,wsize=8192,tcp,timeo=14: invalid argument

Volume definitions

It may be important: the root, i.e. the exported mount root, is /var/nfs/shared/media_data

docker volume create --driver local \
  --opt type=nfs \
  --opt o=" addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/config" plex_dmb_config

docker volume create --driver local \
  --opt type=nfs \
  --opt o=" addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/log" plex_dmb_log

docker volume create --driver local \
  --opt type=nfs \
  --opt o=" addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/zurg/rd" plex_dmb_zurg_rd

docker volume create --driver local \
  --opt type=nfs \
  --opt o=" addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/zurg/mnt" plex_dmb_zurg_mnt

docker volume create --driver local \
  --opt type=nfs \
  --opt o=" addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/riven/data" plex_dmb_riven_data

docker volume create --driver local \
  --opt type=nfs \
  --opt o=" addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/riven/mnt" plex_dmb_riven_mnt

docker volume create --driver local \
  --opt type=nfs \
  --opt o=" addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/postgresql" plex_dmb_postgres_data

docker volume create --driver local \
  --opt type=nfs \
  --opt o=" addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/pgadmin4" plex_dmb_pgadmin4_data

docker volume create --driver local \
  --opt type=nfs \
  --opt o=" addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/zilean" plex_dmb_zilean_data

docker volume create --driver local \
  --opt type=nfs \
  --opt o=" addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/plex_debrid" plex_dmb_plexdebrid

Compose

services:
  media-bridge:
    image: iampuid0/dmb:latest 
    volumes:
      - plex_dmb_config:/config                     ## Location of configuration files. If a Zurg config.yml and/or Zurg app is placed here, it will be used to override the default configuration and/or app used at startup.
      - plex_dmb_log:/log                           ## Location for logs
      - plex_dmb_zurg_rd:/zurg/RD                   ## Location for Zurg RealDebrid active configuration
      - plex_dmb_zurg_mnt:/zurg_data                     ## Location for rclone mount to host
      - plex_dmb_riven_data:/riven/backend/data     ## Location for Riven backend data
      - plex_dmb_riven_mnt:/riven_mnt                     ## Location for Riven symlinks
      - plex_dmb_postgres_data:/postgres_data       ## Location for PostgreSQL database
      - plex_dmb_pgadmin4_data:/pgadmin/data        ## Location for pgAdmin 4 data
      - plex_dmb_zilean_data:/zilean/app/data       ## Location for Zilean data
      - plex_dmb_plexdebrid:/plex_debrid/config    ## Location for plex_debrid data
    ports:
      - "3005:3005"                                                 ## DMB Frontend
      - "3000:3000"                                                 ## Riven Frontend
      - "5050:5050"                                                 ## pgAdmin 4
    devices:
      - /dev/fuse:/dev/fuse:rwm
    cap_add:
      - SYS_ADMIN
    security_opt:
      - apparmor:unconfined
      - no-new-privileges

volumes:
  plex_dmb_config:
    external: true
  plex_dmb_log:
    external: true
  plex_dmb_zurg_rd:
    external: true
  plex_dmb_zurg_mnt:
    external: true
  plex_dmb_riven_data:
    external: true
  plex_dmb_riven_mnt:
    external: true
  plex_dmb_postgres_data:
    external: true
  plex_dmb_pgadmin4_data:
    external: true
  plex_dmb_zilean_data:
    external: true
  plex_dmb_plexdebrid:
    external: true

Remove the volume and try this:

docker volume create --driver local \
  --opt type=nfs \
  --opt o="addr=192.168.110.5,nfsvers=4" \
  --opt device=":/var/nfs/shared/media_data/dmb/plex_debrid" \
   plex_dmb_plexdebrid

I added nfsvers=4 as nowadays it's the default, and removed rw, as NFS volumes seem to have problems with it. They are mounted in read-write mode anyway.

No luck unfortunately, same issue persists. Any other ideas?

Attempted run on node2

Contents of the volume on that node (specifically the one marked as failed in the logs)

Volume definition

root@docker0102:/var/lib/docker/volumes/acme/_data# docker volume inspect plex_dmb_zurg_rd
[
    {
        "CreatedAt": "2025-04-26T11:31:04Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/plex_dmb_zurg_rd/_data",
        "Name": "plex_dmb_zurg_rd",
        "Options": {
            "device": ":/var/nfs/shared/media_data/dmb/zurg/rd",
            "o": " addr=192.168.110.5",
            "type": "nfs"
        },
        "Scope": "local"
    }
]

Associated runtime error

{
    AppArmorProfile:"",
    Args:["-c", ". /venv/bin/activate && python /main.py"],
    Config:{
        AttachStderr:false,
        AttachStdin:false,
        AttachStdout:false,
        Cmd:null,
        Domainname:"",
        Entrypoint:["/bin/bash", "-c", ". /venv/bin/activate && python /main.py"],
        Env:[
            "PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "DEBIAN_FRONTEND=noninteractive",
            "XDG_CONFIG_HOME=/config",
            "TERM=xterm"
        ],
        Healthcheck:{
            Interval:60000000000,
            Test:["CMD", "/bin/bash", "-c", ". /venv/bin/activate && python /healthcheck.py"],
            Timeout:10000000000
        },
        Hostname:"eea0375ee033",
        Image:"iampuid0/dmb:latest@sha256:ea1b0fb1504d7d2a8032a6dd58b6f6e325635dfa1674e2fdc0aa2d39adedb3e8",
        Labels:{
            com.docker.stack.namespace:"plex",
            com.docker.swarm.node.id:"y8sgb5u6oapxclhqcaqpew4v1",
            com.docker.swarm.service.id:"vmt93nsyjq94dhin3rmbohlq4",
            com.docker.swarm.service.name:"plex_media-bridge",
            com.docker.swarm.task:"",
            com.docker.swarm.task.id:"02naczopudiouznfvhseumehv",
            com.docker.swarm.task.name:"plex_media-bridge.1.02naczopudiouznfvhseumehv",
            description:"Debrid Media Bridge",
            maintainer:"I-am-PUID-0",
            name:"DMB",
            org.opencontainers.image.ref.name:"ubuntu",
            org.opencontainers.image.version:"24.04",
            url:"https://github.com/I-am-PUID-0/DMB"
        },
        OnBuild:null,
        OpenStdin:false,
        StdinOnce:false,
        Tty:false,
        User:"",
        Volumes:null,
        WorkingDir:"/"
    },
    Created:"2025-04-26T11:33:13.265092461Z",
    Driver:"overlay2",
    ExecIDs:null,
    GraphDriver:{
        Data:{
            ID:"eea0375ee0335bdefb97976126ea045e7c04432ca4190b94c4197974e13819af",
            LowerDir:"/var/lib/docker/overlay2/8dc17ea5ffa2e34aa3e65163b2ac117272d3406482871461abe44333c8a64ff7-init/diff:/var/lib/docker/overlay2/b242bedc5927a8e87fcca758e2c4caa7c080c84a51dbbd5ca09b1fa3e5f4b4b9/diff:/var/lib/docker/overlay2/c9df78ceff42e276113cacd57bc3d662afc335840794dea9506be372b0fe8633/diff:/var/lib/docker/overlay2/5066df0822ec5a282a80e7ddd07dab73fc0c91a80805886cfbb49855584ecf82/diff:/var/lib/docker/overlay2/cc9aed8b2ae1058e3e6f34ef82ca7ba3295560dce3f82eece0d0365e2e5b845f/diff:/var/lib/docker/overlay2/867ccbb672242241ee32445a852ada07177fdc291379494d9cbaf351a3ed0fe4/diff:/var/lib/docker/overlay2/c601e5f10df4d34333a1d36f50964ac3de4c80da3fb7ef463c3276b3a07c858f/diff:/var/lib/docker/overlay2/663777beefe2175755ab4d69d1039102e511e04fabe4603ca34d42db28ed9db6/diff:/var/lib/docker/overlay2/6e63f4eaa0238a8be60fb6d34110e7125067b8e6604be5f7bb68480404131ce7/diff:/var/lib/docker/overlay2/da2ec6308b2d23387b68319df55d98389266c4d1b119116b8e1908d5dafb259d/diff:/var/lib/docker/overlay2/82810dcefb10ce421e5c7ce3ec3122edec6baeb75f51842c70663c7328f2727f/diff:/var/lib/docker/overlay2/ded37d721ab943999cd372d245c77cc8a17e5c0f6f976bdc7d05d5279ce017a9/diff:/var/lib/docker/overlay2/e1436c4d7d1ae6e973c3078a4c1c578ae0b4e4795bc76b15c3921e9aeda245d5/diff:/var/lib/docker/overlay2/d83870b45c167cd6d7325a7a285659866de1c2078bdf9f91d504fe3c1ea18c20/diff:/var/lib/docker/overlay2/1a51498e3aa1918c519ce419d0295e5a65ecff149d6ec425d3bd9de73a1cba65/diff:/var/lib/docker/overlay2/2576dba3731efd0493783791470ec6c8be3d823d921e1cfca0070106e2b269cc/diff:/var/lib/docker/overlay2/5e5991bc2fcc5323a92cfc1c12a3026f6215b4d0b11145a9aebc884cab26d233/diff:/var/lib/docker/overlay2/60672931b08ffb53b777c1ded2164d1cad5b4be4d81dc50a6231e6998472deb9/diff:/var/lib/docker/overlay2/b7f4251a06869d4ab743fa42f70440f646bd42c3a472df0d96078800fd83bd32/diff",
            MergedDir:"/var/lib/docker/overlay2/8dc17ea5ffa2e34aa3e65163b2ac117272d3406482871461abe44333c8a64ff7/merged",
            UpperDir:"/var/lib/docker/overlay2/8dc17ea5ffa2e34aa3e65163b2ac117272d3406482871461abe44333c8a64ff7/diff",
            WorkDir:"/var/lib/docker/overlay2/8dc17ea5ffa2e34aa3e65163b2ac117272d3406482871461abe44333c8a64ff7/work"
        },
        Name:"overlay2"
    },
    HostConfig:{
        AutoRemove:false,
        Binds:null,
        BlkioDeviceReadBps:null,
        BlkioDeviceReadIOps:null,
        BlkioDeviceWriteBps:null,
        BlkioDeviceWriteIOps:null,
        BlkioWeight:0,
        BlkioWeightDevice:null,
        CapAdd:null,
        CapDrop:null,
        Cgroup:"",
        CgroupParent:"",
        CgroupnsMode:"private",
        ConsoleSize:[0, 0],
        ContainerIDFile:"",
        CpuCount:0,
        CpuPercent:0,
        CpuPeriod:0,
        CpuQuota:0,
        CpuRealtimePeriod:0,
        CpuRealtimeRuntime:0,
        CpuShares:0,
        CpusetCpus:"",
        CpusetMems:"",
        DeviceCgroupRules:null,
        DeviceRequests:null,
        Devices:null,
        Dns:null,
        DnsOptions:null,
        DnsSearch:null,
        ExtraHosts:null,
        GroupAdd:null,
        IOMaximumBandwidth:0,
        IOMaximumIOps:0,
        IpcMode:"private",
        Isolation:"default",
        Links:null,
        LogConfig:{Config:{}, Type:"json-file"},
        MaskedPaths:["/proc/asound", "/proc/acpi", "/proc/interrupts", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap"],
        Memory:0,
        MemoryReservation:0,
        MemorySwap:0,
        MemorySwappiness:null,
        Mounts:[
            {Source:"plex_dmb_config", Target:"/config", Type:"volume", VolumeOptions:{}},
            {Source:"plex_dmb_log", Target:"/log", Type:"volume", VolumeOptions:{}},
            {Source:"plex_dmb_zurg_rd", Target:"/zurg/RD", Type:"volume", VolumeOptions:{}},
            {Source:"plex_dmb_zurg_mnt", Target:"/zurg_data", Type:"volume", VolumeOptions:{}},
            {Source:"plex_dmb_riven_data", Target:"/riven/backend/data", Type:"volume", VolumeOptions:{}},
            {Source:"plex_dmb_riven_mnt", Target:"/riven_mnt", Type:"volume", VolumeOptions:{}},
            {Source:"plex_dmb_postgres_data", Target:"/postgres_data", Type:"volume", VolumeOptions:{}},
            {Source:"plex_dmb_pgadmin4_data", Target:"/pgadmin/data", Type:"volume", VolumeOptions:{}},
            {Source:"plex_dmb_zilean_data", Target:"/zilean/app/data", Type:"volume", VolumeOptions:{}},
            {Source:"plex_dmb_plexdebrid", Target:"/plex_debrid/config", Type:"volume", VolumeOptions:{}}
        ],
        NanoCpus:0,
        NetworkMode:"bridge",
        OomKillDisable:null,
        OomScoreAdj:0,
        PidMode:"",
        PidsLimit:null,
        PortBindings:{},
        Privileged:false,
        PublishAllPorts:false,
        ReadonlyPaths:["/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger"],
        ReadonlyRootfs:false,
        RestartPolicy:{MaximumRetryCount:0, Name:"no"},
        Runtime:"runc",
        SecurityOpt:null,
        ShmSize:67108864,
        UTSMode:"",
        Ulimits:[],
        UsernsMode:"",
        VolumeDriver:"",
        VolumesFrom:null
    },
    HostnamePath:"/var/lib/docker/containers/eea0375ee0335bdefb97976126ea045e7c04432ca4190b94c4197974e13819af/hostname",
    HostsPath:"/var/lib/docker/containers/eea0375ee0335bdefb97976126ea045e7c04432ca4190b94c4197974e13819af/hosts",
    Id:"eea0375ee0335bdefb97976126ea045e7c04432ca4190b94c4197974e13819af",
    Image:"sha256:7394bb68f5e32e43cdf22fac6afb374074568b113140d52b35766739bf0d7a5e",
    LogPath:"",
    MountLabel:"",
    Mounts:[
        {Destination:"/log", Driver:"local", Mode:"z", Name:"plex_dmb_log", Propagation:"", RW:true, Source:"/var/lib/docker/volumes/plex_dmb_log/_data", Type:"volume"},
        {Destination:"/zurg/RD", Driver:"local", Mode:"z", Name:"plex_dmb_zurg_rd", Propagation:"", RW:true, Source:"/var/lib/docker/volumes/plex_dmb_zurg_rd/_data", Type:"volume"},
        {Destination:"/zurg_data", Driver:"local", Mode:"z", Name:"plex_dmb_zurg_mnt", Propagation:"", RW:true, Source:"/var/lib/docker/volumes/plex_dmb_zurg_mnt/_data", Type:"volume"},
        {Destination:"/riven/backend/data", Driver:"local", Mode:"z", Name:"plex_dmb_riven_data", Propagation:"", RW:true, Source:"/var/lib/docker/volumes/plex_dmb_riven_data/_data", Type:"volume"},
        {Destination:"/zilean/app/data", Driver:"local", Mode:"z", Name:"plex_dmb_zilean_data", Propagation:"", RW:true, Source:"/var/lib/docker/volumes/plex_dmb_zilean_data/_data", Type:"volume"},
        {Destination:"/plex_debrid/config", Driver:"local", Mode:"z", Name:"plex_dmb_plexdebrid", Propagation:"", RW:true, Source:"/var/lib/docker/volumes/plex_dmb_plexdebrid/_data", Type:"volume"},
        {Destination:"/config", Driver:"local", Mode:"z", Name:"plex_dmb_config", Propagation:"", RW:true, Source:"/var/lib/docker/volumes/plex_dmb_config/_data", Type:"volume"},
        {Destination:"/riven_mnt", Driver:"local", Mode:"z", Name:"plex_dmb_riven_mnt", Propagation:"", RW:true, Source:"/var/lib/docker/volumes/plex_dmb_riven_mnt/_data", Type:"volume"},
        {Destination:"/postgres_data", Driver:"local", Mode:"z", Name:"plex_dmb_postgres_data", Propagation:"", RW:true, Source:"/var/lib/docker/volumes/plex_dmb_postgres_data/_data", Type:"volume"},
        {Destination:"/pgadmin/data", Driver:"local", Mode:"z", Name:"plex_dmb_pgadmin4_data", Propagation:"", RW:true, Source:"/var/lib/docker/volumes/plex_dmb_pgadmin4_data/_data", Type:"volume"}
    ],
    Name:"/plex_media-bridge.1.02naczopudiouznfvhseumehv",
    NetworkSettings:{
        Bridge:"",
        EndpointID:"",
        Gateway:"",
        GlobalIPv6Address:"",
        GlobalIPv6PrefixLen:0,
        HairpinMode:false,
        IPAddress:"",
        IPPrefixLen:0,
        IPv6Gateway:"",
        LinkLocalIPv6Address:"",
        LinkLocalIPv6PrefixLen:0,
        MacAddress:"",
        Networks:{
            ingress:{
                Aliases:null, DNSNames:null, DriverOpts:null, EndpointID:"", Gateway:"",
                GlobalIPv6Address:"", GlobalIPv6PrefixLen:0, GwPriority:0,
                IPAMConfig:{IPv4Address:"10.0.0.10"},
                IPAddress:"", IPPrefixLen:0, IPv6Gateway:"", Links:null, MacAddress:"",
                NetworkID:"rr4a9wzle6kjpj71c1s2n3c31"
            },
            plex_default:{
                Aliases:null, DNSNames:null, DriverOpts:null, EndpointID:"", Gateway:"",
                GlobalIPv6Address:"", GlobalIPv6PrefixLen:0, GwPriority:0,
                IPAMConfig:{IPv4Address:"10.0.6.7"},
                IPAddress:"", IPPrefixLen:0, IPv6Gateway:"", Links:null, MacAddress:"",
                NetworkID:"pfz5ep5b5dturc2ml4cpn8fgc"
            }
        },
        Ports:{},
        SandboxID:"",
        SandboxKey:"",
        SecondaryIPAddresses:null,
        SecondaryIPv6Addresses:null
    },
    Path:"/bin/bash",
    Platform:"linux",
    Portainer:{
        ResourceControl:{
            Id:16, ResourceId:"1_plex", SubResourceIds:[], Type:6,
            UserAccesses:[], TeamAccesses:[], Public:false,
            AdministratorsOnly:true, System:false
        }
    },
    ProcessLabel:"",
    ResolvConfPath:"/var/lib/docker/containers/eea0375ee0335bdefb97976126ea045e7c04432ca4190b94c4197974e13819af/resolv.conf",
    RestartCount:0,
    State:{
        Dead:false,
        Error:"error while mounting volume '/var/lib/docker/volumes/plex_dmb_zurg_rd/_data': failed to mount local volume: mount :/var/nfs/shared/media_data/dmb/zurg/rd:/var/lib/docker/volumes/plex_dmb_zurg_rd/_data, data: addr=192.168.110.5: invalid argument",
        ExitCode:128,
        FinishedAt:"0001-01-01T00:00:00Z",
        OOMKilled:false,
        Paused:false,
        Pid:0,
        Restarting:false,
        Running:false,
        StartedAt:"0001-01-01T00:00:00Z",
        Status:"created"
    }
}

Specific error extracted
Error:"error while mounting volume '/var/lib/docker/volumes/plex_dmb_zurg_rd/_data': failed to mount local volume: mount :/var/nfs/shared/media_data/dmb/zurg/rd:/var/lib/docker/volumes/plex_dmb_zurg_rd/_data, data: addr=192.168.110.5: invalid argument",

Confirmation that it can be mounted

root@docker0102:/# mount -t nfs 192.168.110.5://var/nfs/shared/media_data/dmb/zurg/rd /mnt/test
Created symlink /run/systemd/system/remote-fs.target.wants/rpc-statd.service → /lib/systemd/system/rpc-statd.service.
root@docker0102:/# ls -al /mnt/test
total 4
drwxrwx--- 1  977  988    0 Apr 24 10:54 .
drwxr-xr-x 3 root root 4096 Apr 24 06:54 ..
root@docker0102:/# mount | grep test
192.168.110.5://var/nfs/shared/media_data/dmb/zurg/rd on /mnt/test type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.110.5,mountvers=3,mountport=51195,mountproto=udp,local_lock=none,addr=192.168.110.5)
root@docker0102:/#

Also open to different solutions if I've taken a silly path here.

My NFS-backed volumes look pretty much identical, except I have no leading space before addr=.
Seems like that's what's causing the problem.
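For comparison, a definition with the space removed would look like this (a sketch using your plex_dmb_zurg_rd volume as the example, since that's the one in the inspect output):

```shell
# The local driver passes the o= string through as mount options, so a
# leading space appears to make the kernel reject the mount with
# "invalid argument". Note o="addr=..." with no space after the quote.
docker volume create --driver local \
  --opt type=nfs \
  --opt o="addr=192.168.110.5,rw" \
  --opt device=":/var/nfs/shared/media_data/dmb/zurg/rd" \
  plex_dmb_zurg_rd
```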

Furthermore, please share the output of docker info, so we can learn more about the installed Docker version and your runtime environment. It can also be relevant if you run your Docker instance in an LXC container.

Update: looks like you didn’t try my variation of your command from the previous post.

The only thing I couldn't do was NFS version 4, as the UniFi UNAS Pro currently only supports v3. Everything else is the same?

I'll try removing the space in the morning.

Your mount grep shows mountvers=3. Maybe try that, or vers=3 or nfsvers=3, in the Docker volume options as well. I honestly don't know what the difference is; I think I knew once, but I haven't had to deal with NFS parameters for a while. If the default is not v3 but v4, for example, that would explain why adding nfsvers=4 didn't change anything.
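Pinning v3 explicitly would look something like this (a sketch only; nfs(5) lists nfsvers= as a synonym of vers=, and the leading space before addr= would still need to go):

```shell
# Same volume as before, but with the NFS protocol version pinned to 3
# to match the mountvers=3 shown by the manual mount.
docker volume create --driver local \
  --opt type=nfs \
  --opt o="addr=192.168.110.5,nfsvers=3" \
  --opt device=":/var/nfs/shared/media_data/dmb/zurg/rd" \
  plex_dmb_zurg_rd
```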

yep, just skip ,nfsvers=4.

If you mean removing it: then yes!

Just to be sure: docker volumes are immutable, and need to be removed and re-created with the new arguments.
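Since all ten volumes carry the same stray space, re-creating them can be scripted. A minimal sketch (the name-to-path mapping is taken from the definitions earlier in the thread; note that local volumes are node-scoped, so this has to run on every Swarm node that might schedule the task):

```shell
# Re-create each NFS-backed volume without the leading space in o=
for pair in \
  plex_dmb_config=dmb/config \
  plex_dmb_log=dmb/log \
  plex_dmb_zurg_rd=dmb/zurg/rd \
  plex_dmb_zurg_mnt=dmb/zurg/mnt \
  plex_dmb_riven_data=dmb/riven/data \
  plex_dmb_riven_mnt=dmb/riven/mnt \
  plex_dmb_postgres_data=dmb/postgresql \
  plex_dmb_pgadmin4_data=dmb/pgadmin4 \
  plex_dmb_zilean_data=dmb/zilean \
  plex_dmb_plexdebrid=dmb/plex_debrid
do
  name="${pair%%=*}"   # volume name (left of the first '=')
  path="${pair#*=}"    # path under the export root (right of the first '=')
  docker volume rm "$name" 2>/dev/null || true
  docker volume create --driver local \
    --opt type=nfs \
    --opt o="addr=192.168.110.5,rw" \
    --opt device=":/var/nfs/shared/media_data/${path}" \
    "$name"
done
```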

so, the space was the issue :smiley:

I have no idea how it got there or why I couldn’t see it. +1 for peer review.

Thank you sir!
