Docker resource usage & overhead

One of our Docker machines is currently running 183 containers.

The dockerd process on this machine (by itself) has a virtual size of 5.3GiB. It’s tempting to say “Oh, that’s just virtual address space, it isn’t really used.” But it is. The system memory reflects it and once that’s exhausted, it digs into swap. At this point, all but 42MiB of dockerd has been swapped out. (And yes, this is real usage, not unused allocations; the pages are almost all dirty and enough swap space is consumed to fully back this allocation.)
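For reference, that split can be read straight out of the kernel’s per-process accounting; a rough sketch, assuming a single dockerd process so that pidof returns exactly one PID:

# Virtual size, resident set, and swapped-out portion of dockerd, as the kernel sees them
grep -E 'VmSize|VmRSS|VmSwap' /proc/$(pidof dockerd)/status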

A docker-containerd process has a further 3.6GiB of virtual size, of which 4MiB is resident, and the rest is (again) swap-backed.

The docker-containerd process referenced here has the command line:

docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc

This appears to be some sort of Docker-internal “system” process; it isn’t one of our containers, and all of the rest of the docker processes are children of it.

All told, this system has 12GiB of RAM and 12GiB of swap. Of that 24GiB, about 18GiB is used (i.e. the machine is about 6.5GiB into swap). Pretty much that whole 6.5GiB appears attributable to these two processes.

By some back-of-the-envelope calculations, about 57% of that 18GiB, or about 10GiB, is attributable to processes with “docker” in the name.
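That back-of-the-envelope figure is basically a sum of resident set sizes over everything with “docker” in the process name, roughly like this (a sketch: it ignores shared pages, and it only covers the resident portion; the swapped-out part has to come from VmSwap in /proc, more on that below):

# Add up RSS (reported in KiB) for every process whose command name contains "docker"
ps -eo rss,comm | awk '/docker/ { sum += $1 } END { printf "%.1f GiB resident\n", sum / 1024 / 1024 }'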

For 183 containers, this seems excessive. The RAM usage does appear to be roughly linear in the number of containers; it shows up immediately after the containers start and stays roughly static after that, so it doesn’t appear to be a gradual leak.

This is Docker 1.12.1 on Ubuntu 16.04 LTS, but this behavior has been observed on each of the handful of Docker servers we set up over the course of many versions.

This isn’t the “Lightweight, uses less RAM!” story that docker.com tells, yet it doesn’t seem to be a common complaint either.

So, is this level of overhead expected, just something we have to plan for? (Mainly by adding tons of extra swap space.) Or is it (more likely) that we’re doing something dumb to cause this?

Thanks for any advice!

This type of resource usage seems normal (although I’m not saying it’s ideal, just not unexpected). Part of it is that the Go runtime (the language Docker is written in) reserves quite a lot of virtual memory by default, largely for pre-allocated stack space (Understanding Golang Memory Usage – Defer Panic), and Docker tends to spin off a lot of goroutines.

183 containers definitely seems like a lot for one machine, though it’s not unheard of – what are you running in them?

Just because I’m curious, could you explain the steps / commands you used to divine this information?

OK, if that’s to be expected, we’ll just expect it from now on and crank up the swap space accordingly.

The containers themselves are all very lightweight. Combined, they use about 10.5GiB of the 12GiB of RAM and a couple of CPU hours per day.
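If it’s useful, that per-container figure is easy to cross-check against Docker’s own accounting with a one-shot stats snapshot (a sketch; --no-stream just prints a single sample instead of refreshing, and passing the IDs explicitly avoids relying on the no-argument behaviour):

# One-shot memory/CPU snapshot for every running container
docker stats --no-stream $(docker ps -q)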

Measurement is just a matter of keeping very straight what the various types of memory usage are and what they mean. ps, for example, has something like 5-6 different option values that all show different “memory” stats: virtual, resident, shared, writeable. Plus, awesome stuff like vsz and vsize are synonyms, but sz and size are not. :stuck_out_tongue:

“ps -C dockerd -o vsz,size,sz,rss” gives about half of the information in one command, although some of it is very back-of-the-envelope, best-guess quality.

“top” can also give information about how much swap space a process is using, although as with ps it isn’t shown by default. (And I think as with ps much of it is “best guess.”)
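The per-process swap figure I trust most comes from /proc rather than from ps or top; a rough sketch (pgrep -f docker will also pick up docker-containerd, the shims, and docker-runc, which is what we want here):

# Sum the kernel's per-process VmSwap (KiB) across everything docker-related
pgrep -f docker | while read pid; do
  grep VmSwap /proc/"$pid"/status 2>/dev/null
done | awk '{ sum += $2 } END { printf "%.1f GiB swapped out\n", sum / 1024 / 1024 }'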

But there’s really no substitute for doing a kill -9 and measuring /proc/meminfo before and after. Not a good move on a production system though; I seem to recall we had to reboot to recover that. :slight_smile:
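Concretely, the before/after measurement was along these lines (a sketch; MemFree/SwapFree are crude measures, and killing dockerd by itself doesn’t take down docker-containerd or the shims, which is presumably part of why we ended up rebooting):

# Snapshot free RAM and swap, kill the daemon, snapshot again, and compare
grep -E 'MemFree|SwapFree' /proc/meminfo > before.txt
kill -9 $(pidof dockerd)
sleep 5    # give the kernel a moment to reclaim the pages
grep -E 'MemFree|SwapFree' /proc/meminfo > after.txt
diff before.txt after.txt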