"No space left on device" but inodes and diskspace is available

I have problems copying ~4 million files into a named volume.

  • Host system is Ubuntu 16.04 with docker-CE 17.09.0~ce-0~ubuntu installed.
  • Wrote a Dockerfile based on the ubuntu:16.04 image for running a Python 3 application.
  • Using OverlayFS (overlay2) as the storage driver.
  • Also tried AUFS with a similar result.
  • The Python script copies 3.9M thumbnails (95 GB in total) via a tar pipe.
  • The source folder /dataset is a bind-mounted host folder; the destination folder /assets is a named volume.

Code snippet that is running:

import subprocess

tar = subprocess.Popen(['bash', '-c', '(cd /dataset/imgs/; tar cf - . ) | (cd /assets/; tar xf - )'], stdout=subprocess.PIPE)
stdout, stderr = tar.communicate()  # stderr is None here because it is not piped

After a while it stops writing files and repeatedly prints the following error:

tar: ./imgs/img21342130.jpg_ORIG_resize_300x300: Cannot open: No space left on device
tar: ./imgs/img79179403.jpg_ORIG_resize_300x300: Cannot open: No space left on device
.
.
.
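
Since the snippet above only pipes stdout, the stderr variable ends up as None and these tar messages are only visible on the container's console. For debugging, here is a minimal sketch of the same pipe that also captures stderr and a meaningful exit status (same paths as above; everything else is just illustrative):

import subprocess

# Same tar pipe, but with stderr captured so lines like
# "tar: ...: Cannot open: No space left on device" can be read from Python.
# "set -o pipefail" makes the exit status non-zero if either tar fails.
cmd = ('set -o pipefail; '
       '(cd /dataset/imgs/ && tar cf - .) | (cd /assets/ && tar xf -)')
tar = subprocess.Popen(['bash', '-c', cmd],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = tar.communicate()
if tar.returncode != 0:
    print(stderr.decode(errors='replace'))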

When it gets stuck, the named volume has the following content:

The volume dataset_assets-storage has a size of 63G.

Contains:
2595307 	files
5 		folders

When I enter the Docker container (docker exec -it HASHID bash), I get the following view from inside:

Disk space:

root@87ec20fc4159:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         917G  494G  377G  57% /
tmpfs            64M     0   64M   0% /dev
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/nvme0n1p1  917G  494G  377G  57% /assets
shm              64M  4.0K   64M   1% /dev/shm
tmpfs           7.9G     0  7.9G   0% /sys/firmware

Inode count:

root@87ec20fc4159:/# df -i
Filesystem       Inodes   IUsed    IFree IUse% Mounted on
overlay        61054976 9892624 51162352   17% /
tmpfs           2044850      16  2044834    1% /dev
tmpfs           2044850      16  2044834    1% /sys/fs/cgroup
/dev/nvme0n1p1 61054976 9892624 51162352   17% /assets
shm             2044850       2  2044848    1% /dev/shm
tmpfs           2044850       1  2044849    1% /sys/firmware

From the host system I get:

root@gpuslave:~# df -i /
Filesystem       Inodes    IUsed    IFree IUse% Mounted on
/dev/nvme0n1p1 61054976  9892785 51162191   17% /

root@gpuslave:~# df -h /
Filesystem      Size    Used Avail Use% Mounted on
/dev/nvme0n1p1  917G    509G  362G   59% /

All other posts I found with a similar error came down to either a lack of inodes or a lack of disk space. That is not the case here. What else can cause this? Do OverlayFS or AUFS have inode limits I can set? Since the host system has both inodes and space available, I suspect the union filesystem is the cause, but I used both storage drivers with similar results. What else can I try? How can I debug this problem further?
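
One way to debug further, assuming the copy is driven from the same Python script (the helper below and its call site are made up for illustration), is to query free space and free inodes of /assets programmatically the moment tar starts failing, which is essentially df -h and df -i done from inside the process:

import os

# Hypothetical helper: report free space and free inodes for a path,
# roughly what "df -h" and "df -i" show inside the container.
def fs_stats(path):
    st = os.statvfs(path)
    return {'free_gib': st.f_frsize * st.f_bavail / 1024 ** 3,
            'free_inodes': st.f_ffree}

print(fs_stats('/assets'))  # e.g. call this right after tar reports the error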

I reproduced the issue again, and while the tar copy was running and repeating the error, I was still able to create files in /assets with the touch command from a console (I entered the running container with docker exec -it ID bash).

Then I replaced the tar copy command with:

subprocess.Popen(['cp', '-R', '/dataset/imgs/', outdir], stdout=subprocess.PIPE)
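
(In case it helps, here is a sketch of this replacement with basic error checking added; outdir is assumed to be the /assets volume from earlier, and subprocess.run needs Python 3.5+.)

import subprocess

outdir = '/assets'  # assumption: the named volume used as the destination above

# check=True raises CalledProcessError if cp exits non-zero,
# so a partial copy cannot fail silently.
subprocess.run(['cp', '-R', '/dataset/imgs/', outdir],
               stderr=subprocess.PIPE, check=True)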

The cp copy works fine, so this issue is solved for me. But I'm still curious what exactly went wrong with the tar pipe copy. Does anyone have an idea or intuition about what might be wrong with it?