Using SMB shares as bind volumes

Hello everybody,

I’m trying to install multiple Docker containers via Docker Compose, using an SMB share as the bind volume. For example, I’m trying it with Pi-Hole (other containers have had the same problem as well). I used this compose file:

services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "4002:443/tcp"
    environment:
      TZ: 'Europe/Berlin'
      FTLCONF_webserver_api_password: 'test1234'
      FTLCONF_dns_listeningMode: 'all'
    volumes:
      - './config/data:/etc/pihole'
    cap_add:
      - SYS_TIME
      - SYS_NICE
    restart: unless-stopped

The folder config/data is an SMB share I mount automatically via fstab:

# Pi-Hole Docker Share
//IP/Databases/Pi-Hole /opt/Pi-Hole/config/data cifs credentials=/etc/samba/admin_credentials,noperm 0 0

When I start the container, it comes up, but some files are broken: for example, gravity.db has 0 bytes. The share itself is writable, since other files get written just fine, and the credentials for my SMB share have full access rights. When I bind the volume to a local path, everything works fine, so it’s not the config. Could this be a permission or ownership problem?

Can anyone tell me how to correctly use SMB shares as bind volumes?

The same would happen if you ran Pi-Hole on bare metal but stored the data on a remote share.

Some applications require the filesystem to handle file locks properly, which is typical when persisting database files like the gravity.db (an SQLite database) in your situation. SMB shares will not propagate file locks from a client to the server, or report them back from the server to a client. From what I remember, this should work with NFSv4 or newer. Years ago I had a project where we stored PostgreSQL data on an NFSv4.1 remote share without any issues (even though PostgreSQL does not recommend it).
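If you want to probe this yourself, here is a minimal sketch in Python that tries to take an exclusive POSIX record lock (the kind of advisory lock SQLite relies on for its database files). The path is a placeholder; point it at a scratch file on your CIFS mount. Note the caveat in the comments: a successful result only shows the client accepted the lock, not that it is enforced end to end on the server.

```python
import fcntl

def posix_lock_works(path: str) -> bool:
    """Try to take and release an exclusive, non-blocking POSIX
    record lock on the given file, the way SQLite locks its
    database files (e.g. gravity.db)."""
    with open(path, "w") as f:
        try:
            fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            fcntl.lockf(f, fcntl.LOCK_UN)
            return True
        except OSError:
            # The filesystem (or the CIFS client) rejected the lock.
            return False

# Example path on the mount from the fstab line above (adjust to your setup):
# print(posix_lock_works("/opt/Pi-Hole/config/data/locktest"))
```

If this prints False on the share, locking is clearly broken there; if it prints True, the lock may still only exist client-side, which is exactly the propagation problem described above.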

Or the application might depend on file watches to detect changes in the filesystem. These are handled by mechanisms in the host kernel (inotify on Linux), which work on a local filesystem but cannot report changes that arrive through a remote share.

Running this kind of payload in a container will not make the behavior disappear.
You could try whether switching to an NFSv4 remote share solves the issue (I am not 100% certain it will), or bind a path on a local filesystem instead. Of course, starting more than a single Pi-Hole container that points to the same host directory or remote share will likely still corrupt the database file.
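If you try the NFS route, Docker can mount the share for you through a named volume, so you would no longer need the fstab entry. A minimal sketch, reusing the IP and export path from your fstab line (the export path and NFS version are assumptions, adjust them to your server):

```yaml
services:
  pihole:
    # ... rest of your pihole service unchanged ...
    volumes:
      - pihole-data:/etc/pihole

volumes:
  pihole-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=IP,nfsvers=4.1,rw
      device: ":/Databases/Pi-Hole"
```

This way the mount lifecycle is tied to the container, and Docker mounts the export with the NFS client of the host kernel, so the locking behavior is the same as with an fstab NFS mount.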

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.