Docker --mount type=tmpfs memory usage


I am writing because I wonder whether I am misinterpreting the Docker online documentation, and I'd like to understand. Here is a docker run call:

docker run -it --rm --memory=5000m --memory-swap=5000m --mount type=tmpfs,destination=/app,tmpfs-size=25000m ubuntu:22.04 /bin/bash

Inside the container, I can see the 25000m allocated to the /app mount point; this is the relevant part of the df -h output:

tmpfs                           25G     0   25G   0% /app

This is consistent with the documentation: tmpfs mounts | Docker Docs

But I am not able to write more than 5000m to /app (I tested this with dd by writing multiple files). This appears inconsistent with the documentation: neither the previous link nor Runtime options with Memory, CPUs, and GPUs | Docker Docs mentions any interaction between --memory, --memory-swap, and --mount type=tmpfs.

It seems that Docker does not distinguish between the RAM allocated to the container (5000m) and the tmpfs: when tmpfs usage reaches the --memory / --memory-swap limit, the container is killed. But is this the expected behaviour?
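If tmpfs pages are indeed charged to the container's memory cgroup, that should be observable from inside the container. A minimal sketch, assuming a cgroup-v2 host (the paths and the probe file name are my assumptions, not from the original post):

```shell
# Inside the container: write 1 GiB to the tmpfs mount, then inspect the
# cgroup's memory accounting. On cgroup v2, tmpfs pages are accounted as
# "shmem" in memory.stat and count toward the memory.max limit.
dd if=/dev/zero of=/app/probe.img bs=1M count=1024
grep shmem /sys/fs/cgroup/memory.stat   # should have grown by roughly 1 GiB
cat /sys/fs/cgroup/memory.max           # the --memory limit, in bytes
```

Note that Docker's "m" suffix is binary (MiB), so --memory=5000m corresponds to 5000 * 1024 * 1024 = 5242880000 bytes in memory.max.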

Docker version 20.10.14, build a224086

Wrapping up… I was expecting to be able to write up to 25000m to the /app mount point with a container instantiated in this way:

docker run -it --rm --memory=5000m --memory-swap=5000m --mount type=tmpfs,destination=/app,tmpfs-size=25000m ubuntu:22.04 /bin/bash

But I was not able to go beyond the limit imposed by --memory=5000m --memory-swap=5000m (even though this is not clear from the online documentation).

Specifically, I am unable to execute more than (1) dd if=/dev/zero of=/app/zero_1.img bs=1G count=2 followed by (2) dd if=/dev/zero of=/app/zero_2.img bs=1G count=2, i.e. I cannot write more than about 3.9G.
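If tmpfs usage is charged against the container's memory limit, one workaround (my assumption, not something stated in the docs) would be to size --memory so it covers the full tmpfs capacity plus headroom for the processes, e.g.:

```shell
# Sketch: raise the memory limit above the tmpfs size (25000m) so tmpfs
# writes are no longer cut off at the 5000m limit. The 26000m figure is an
# illustrative choice, leaving ~1000m of headroom for process memory.
docker run -it --rm \
  --memory=26000m --memory-swap=26000m \
  --mount type=tmpfs,destination=/app,tmpfs-size=25000m \
  ubuntu:22.04 /bin/bash
```

Of course this defeats the purpose of a tight memory limit, which is why the interaction deserves explicit documentation.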

For transparency, the same question is also posted here: docker --mount type=tmpfs memory usage - Stack Overflow
I'll update / close the thread if I get any meaningful solution.


Well, I will be honest: I'm not sure whether these should be handled as two separate limits, as I have never configured tmpfs and a memory limit together. But tmpfs also lives in memory, so if the container can't access more memory, it seems logical that it can't use that memory as tmpfs either.

If it is not the reason, then I don’t know what.

Thank you very much for your answer! That is indeed the empirical evidence.

My question is whether that is the expected behaviour.

  • If it is, then a warning for the user and more explicit documentation would be very much appreciated (i.e. some text that alerts the user to this interaction), since I am not able to infer it from the online documentation. Moreover, I hit the same issue when mounting a tmpfs created outside the docker run command.
  • Whereas if it is not the expected behaviour, then there is a problem with the way the Docker daemon accounts for memory usage.

thank you again!

I agree, but we can’t change the documentation. If it turns out to be a problem with the docs, you can ask for a modification here:

Docker CE issues can be reported here:

You could actually open an issue and let the developers decide whether it is a bug, or whether it should be handled as a feature request with a warning added.

Dear @rimelek, your feedback is indeed very much appreciated.
I think I will open an issue first and then, if it turns out to be expected behaviour, I will propose an update to the docs (at least to warn future users).
I'll post the results of this process here.

Issue reported here: Docker --mount type=tmpfs memory usage · Issue #47967 · moby/moby · GitHub