Can only create one loop mount inside a container

I want to create a bunch of “mock” block devices to mimic disks inside a container, but I've hit an odd situation where only one new loop device can be mounted per container start. On the first start, one new mount succeeds and the rest fail. If I stop and start the container, the first one plus one more new one mount and the rest fail, and so on until enough restarts have happened for my script to mount all of the needed devices.

In my image I have:

RUN mkdir -p /mnt/loops \
  && mkdir -p /mnt/snapraid/disk{00..03} \
  && mkdir -p /mnt/snapraid/parity{00..01} \
  && dd if=/dev/zero of=/mnt/loops/disk00.img bs=1 count=0 seek=200M \
  && dd if=/dev/zero of=/mnt/loops/disk01.img bs=1 count=0 seek=200M \
  && dd if=/dev/zero of=/mnt/loops/disk02.img bs=1 count=0 seek=200M \
  && dd if=/dev/zero of=/mnt/loops/disk03.img bs=1 count=0 seek=200M \
  && dd if=/dev/zero of=/mnt/loops/parity00.img bs=1 count=0 seek=250M \
  && dd if=/dev/zero of=/mnt/loops/parity01.img bs=1 count=0 seek=250M \
  && mkfs.ext4 -q /mnt/loops/disk00.img \
  && mkfs.ext4 -q /mnt/loops/disk01.img \
  && mkfs.ext4 -q /mnt/loops/disk02.img \
  && mkfs.ext4 -q /mnt/loops/disk03.img \
  && mkfs.ext4 -q /mnt/loops/parity00.img \
  && mkfs.ext4 -q /mnt/loops/parity01.img
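For what it's worth, those dd lines just create sparse files (count=0 writes nothing; seek=200M sets the size), so each one could equally be written with coreutils truncate:

# equivalent sparse-file creation for one of the images above
truncate -s 200M /mnt/loops/disk00.img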

At the top of my entry.sh I have:

sudo mount /mnt/loops/disk00.img /mnt/snapraid/disk00
sudo mount /mnt/loops/disk01.img /mnt/snapraid/disk01
sudo mount /mnt/loops/disk02.img /mnt/snapraid/disk02
sudo mount /mnt/loops/disk03.img /mnt/snapraid/disk03
sudo mount /mnt/loops/parity00.img /mnt/snapraid/parity00
sudo mount /mnt/loops/parity01.img /mnt/snapraid/parity01
sudo chown -R deno:deno /mnt/snapraid/disk*
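(For context: when the source is a regular file, mount sets up a loop device behind the scenes. A rough sketch of the explicit equivalent, using disk00 as the example:)

# explicit version of what “mount <file> <dir>” does for a regular file
dev=$(sudo losetup -f --show /mnt/loops/disk00.img)  # attach to the first free loop device, print its name
sudo mount "$dev" /mnt/snapraid/disk00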

So on the first container run you can see that disk00 mounts fine:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop24     178M  216K  164M   1% /mnt/snapraid/disk00

and the rest fail:

Creating devcontainer_app_1 … done
Attaching to devcontainer_app_1
app_1 | mount: /mnt/snapraid/disk01: failed to setup loop device for /mnt/loops/disk01.img.
app_1 | mount: /mnt/snapraid/disk02: failed to setup loop device for /mnt/loops/disk02.img.
app_1 | mount: /mnt/snapraid/disk03: failed to setup loop device for /mnt/loops/disk03.img.
app_1 | mount: /mnt/snapraid/parity00: failed to setup loop device for /mnt/loops/parity00.img.
app_1 | mount: /mnt/snapraid/parity01: failed to setup loop device for /mnt/loops/parity01.img.

Then if I stop and restart the container I get one more device to properly mount, so now disk00 and disk01:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop24     178M  216K  164M   1% /mnt/snapraid/disk00
/dev/loop25     178M  216K  164M   1% /mnt/snapraid/disk01

and the rest still failing:

Creating devcontainer_app_1 … done
Attaching to devcontainer_app_1
app_1 | mount: /mnt/snapraid/disk02: failed to setup loop device for /mnt/loops/disk02.img.
app_1 | mount: /mnt/snapraid/disk03: failed to setup loop device for /mnt/loops/disk03.img.
app_1 | mount: /mnt/snapraid/parity00: failed to setup loop device for /mnt/loops/parity00.img.
app_1 | mount: /mnt/snapraid/parity01: failed to setup loop device for /mnt/loops/parity01.img.

As long as I run the container the same number of times as there are requested mounts, I'll end up with all the required mounts inside the container, and from then on I can run the container with everything mounting properly every time. Once I reboot my PC, I have to start the process all over again.
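A rough way to watch this from the host between runs (each run seems to leave the host with one more /dev/loopN node, which would explain why the next start gets one more working mount):

# run these on the host between container runs
ls /dev/loop*     # the list of loop device nodes appears to grow by one per run
sudo losetup -a   # show which loop devices are attached to which files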

Just for fun, what happens if you put in the same commands separated with ;?

You mean at the ends of the mount commands in entry.sh?

sudo mount /mnt/loops/disk00.img /mnt/snapraid/disk00;
sudo mount /mnt/loops/disk01.img /mnt/snapraid/disk01;
sudo mount /mnt/loops/disk02.img /mnt/snapraid/disk02;
sudo mount /mnt/loops/disk03.img /mnt/snapraid/disk03;
sudo mount /mnt/loops/parity00.img /mnt/snapraid/parity00;
sudo mount /mnt/loops/parity01.img /mnt/snapraid/parity01;

No, but I just realized it would be the same as running them one by one.

My thought would be: mount /mnt/loops/disk00.img /mnt/snapraid/disk00; mount …
If you are already root, maybe try to skip the sudos.

Ya, I'm not root, but the image user has sudo. I get the same thing when running as root.

I found this bit: permissions - How can I give access to loopback devices which are created dynamically to a Docker container? - Server Fault

Turns out if I do:

$ sudo losetup -f                  # report the first unused loop device
/dev/loop28
$ sudo mknod /dev/loop28 b 7 28    # create the missing node: block device, major 7 (loop), minor 28
$ sudo mount /mnt/loops/parity01.img /mnt/snapraid/parity01

things mount fine.
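So it looks like the container only sees the loop device nodes that existed in /dev when it started, and mknod fills in the missing node so mount can use it. A minimal sketch generalizing the same workaround for all six images (assuming nothing else grabs loop devices in between, since mount will take the first free one):

# for each image: find the next free loop device, create its node inside
# the container if it is missing, then mount as before
for name in disk00 disk01 disk02 disk03 parity00 parity01; do
  dev=$(sudo losetup -f)          # e.g. /dev/loop28
  minor=${dev#/dev/loop}          # loop minors match the device number
  [ -b "$dev" ] || sudo mknod "$dev" b 7 "$minor"
  sudo mount "/mnt/loops/${name}.img" "/mnt/snapraid/${name}"
done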


If I run with these:

volumes:
  - /dev:/dev
privileged: true
device_cgroup_rules:
  - b 7:* rmw

# cap_add:
#   - SYS_ADMIN
#   - MKNOD

I can run the standard/abstracted mount by itself:
sudo mount /mnt/loops/disk00.img /mnt/snapraid/disk00

If someone knows whether I can run with specific capabilities, I'd appreciate it. Using only SYS_ADMIN and MKNOD doesn't work; I need privileged for some reason.
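In case anyone experiments further, one untested guess: the default AppArmor profile blocks mount(2) even when CAP_SYS_ADMIN is granted, so unconfining it may be the part that privileged was providing. As a docker run sketch (myimage is a placeholder for the actual image):

# untested guess: unconfine AppArmor instead of going fully privileged
docker run --rm -it \
  --cap-add SYS_ADMIN \
  --cap-add MKNOD \
  --security-opt apparmor=unconfined \
  --device-cgroup-rule 'b 7:* rmw' \
  -v /dev:/dev \
  myimage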

help from: