Docker Community Forums


Shared memory written to disk?


#1

I have an application running in a Docker container that creates a rather large shared-memory array (created in Python using multiprocessing's RawArray). It looks like Docker is writing this array to disk: the container's overlay filesystem fills up as the array is created. Why is that, and how can I avoid it?
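
The array is allocated with RawArray roughly like this (the element type and count here are only illustrative, not the application's actual values):

from multiprocessing.sharedctypes import RawArray

# Illustrative allocation: multiprocessing reserves the backing storage
# through its internal heap (an Arena), not the normal process heap.
n = 3500000000            # roughly 3.5e9 doubles, about 28 GB
data = RawArray('d', n)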

When the container is started:

root@0ac9e43d796d:/workspace# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          45G   15G   31G  33% /
tmpfs            64M     0   64M   0% /dev
tmpfs            36G     0   36G   0% /sys/fs/cgroup
/dev/sda1        45G   15G   31G  33% /rl
shm              70G     0   70G   0% /dev/shm
tmpfs            36G   12K   36G   1% /proc/driver/nvidia
tmpfs            36G  4.0K   36G   1% /etc/nvidia/nvidia-application-profiles-rc.d
tmpfs           7.1G   18M  7.1G   1% /run/nvidia-persistenced/socket
udev             36G     0   36G   0% /dev/nvidia0
tmpfs            36G     0   36G   0% /proc/acpi
tmpfs            36G     0   36G   0% /proc/scsi
tmpfs            36G     0   36G   0% /sys/firmware

After the shared array has been created:

root@0ac9e43d796d:/workspace# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          45G   42G  3.8G  92% /
tmpfs            64M     0   64M   0% /dev
tmpfs            36G     0   36G   0% /sys/fs/cgroup
/dev/sda1        45G   42G  3.8G  92% /rl
shm              70G  636K   70G   1% /dev/shm
tmpfs            36G   12K   36G   1% /proc/driver/nvidia
tmpfs            36G  4.0K   36G   1% /etc/nvidia/nvidia-application-profiles-rc.d
tmpfs           7.1G   18M  7.1G   1% /run/nvidia-persistenced/socket
udev             36G     0   36G   0% /dev/nvidia0
tmpfs            36G     0   36G   0% /proc/acpi
tmpfs            36G     0   36G   0% /proc/scsi
tmpfs            36G     0   36G   0% /sys/firmware


#2

Answering my own question: it seems the problem is not Docker but Python. Since Python 3.5, multiprocessing on Linux backs shared arrays with temporary files mapped into memory, and those files are created under /tmp by default, i.e. on the container's writable overlay layer (see https://bugs.python.org/issue30919).
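
A quick way to see where those backing files go is to ask multiprocessing for its temp directory. Note that get_temp_dir is an internal helper, so this is only a diagnostic, not a stable API:

from multiprocessing import util

# Defaults to a pymp-* directory under /tmp, which sits on the container's
# overlay filesystem; that is why / filled up instead of /dev/shm.
print(util.get_temp_dir())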
To force Python to put the backing files on shared memory instead, a workaround is to execute these two lines of code before creating the arrays:

from multiprocessing.process import current_process
current_process()._config['tempdir'] = '/dev/shm'
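
Putting it together: the override has to run before any shared arrays are allocated, since arenas created earlier keep their original backing files. A minimal sketch (the array size here is just an example):

from multiprocessing.process import current_process
from multiprocessing.sharedctypes import RawArray

# Redirect multiprocessing's heap to tmpfs before allocating anything.
current_process()._config['tempdir'] = '/dev/shm'

# This allocation is now backed by a file under /dev/shm (RAM-backed tmpfs),
# so it shows up on the shm line of df -h rather than in overlay usage.
data = RawArray('d', 100 * 1000 * 1000)   # illustrative: 1e8 doubles, ~800 MB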