I am running Docker 1.10, and from what I can tell this has been reported for over a year: Docker seems to run out of memory very quickly when running Java. In my case I am running a Docker container with Java (a Java REST API using Spring/Jetty) in a Droplet on DigitalOcean. I haven't tried this locally yet. I only have 1GB RAM on the Droplet, but I specify 512MB for the JVM. Just sitting idle, it crashes with OOM within minutes.
There were some posts about a possible fix by exporting MALLOC_ARENA_MAX=4, but that doesn't seem to solve it for me. Apparently this wasn't an issue in older versions, but the earliest version I can find it reported against is Docker 1.7.1. I personally don't want to go back to 1.6.x if I can avoid it.
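For concreteness, the suggested workaround amounts to something like the following; the image name, jar path, and memory figures are placeholders from my setup, not recommendations:

```shell
# Sketch of the suggested workaround (image name, jar path, and sizes
# are placeholders). MALLOC_ARENA_MAX caps glibc's per-thread malloc
# arenas; -m caps the container so it gets OOM-killed on its own
# instead of starving the whole host.
docker run -d \
  -e MALLOC_ARENA_MAX=4 \
  -m 768m \
  my-java-api \
  java -Xmx512m -jar /app.jar
```

In my case this still ended in OOM, so it may only help the arena-bloat variant of the problem.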
Would love to know if others have experienced this, solved it or if the Docker team is aware of a potential memory leak.
Oh, and I have to reboot my Droplet: even stopping/removing the Java Docker container does not return the memory to the Droplet. I run another Docker container as well, and that one crashes soon after with OOM because there is no memory left to do anything on the system.
While it's true that the Docker daemon can be memory intensive when running a lot of containers at once, Java processes are in a whole class of their own when it comes to memory usage. The bottom memory limit recommended for running Docker effectively is usually cited at around 2GB, and Java can eat up a lot of memory on top of that (multiple GBs). See how a 2GB server treats you.
I'm sure we'd be quite keen to learn more about the memory usage of the daemon via pprof or otherwise; it seems likely there's plenty of room for improvement, since I'm not aware of any ongoing memory-usage optimization efforts.
I am able to run the JVM in 512MB RAM just fine locally, and was able to run Jetty with Java in a 1GB Docker image some time back with no problem. We didn't load large objects into memory, and what we had ran perfectly well.
That said, there is a known issue regarding the JVM and Docker right now that seems to crash the Docker instance. From my understanding it has something to do with the JVM GC releasing heap but Docker not releasing the memory back to the underlying host; instead, usage grows until it crashes. There are several forum posts on the topic, with one solution supposedly being the MALLOC_ARENA_MAX setting I mentioned above... although that didn't work for me.
As a sole developer trying to get a service into the cloud, I am but a poor fellow who can't afford 2GB to 4GB nodes; they are a bit above my budget right now. I don't think the answer is to throw bigger hardware at the issue, though. I know very well that my service runs just fine with < 512MB RAM in the JVM, and a 1GB Ubuntu 14.04 machine with Java 8 on it should be able to handle the very tiny amount of processing my service does (currently).
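For what it's worth, a 512MB heap is not the JVM's whole footprint; some rough back-of-the-envelope arithmetic (the non-heap figures below are ballpark assumptions, not measurements from my box) shows why a 1GB host is tight even when the service itself is tiny:

```shell
# Rough JVM footprint estimate. Only the heap figure comes from my
# -Xmx setting; the other numbers are ballpark assumptions.
heap=512            # -Xmx, in MB
metaspace=64        # class metadata, MB (assumed)
code_cache=48       # JIT-compiled code, MB (assumed)
stacks=50           # ~50 threads x 1MB default Linux stack, MB (assumed)
total=$((heap + metaspace + code_cache + stacks))
echo "Estimated JVM footprint: ${total} MB"   # prints 674 MB
```

So the process alone can plausibly sit around ~670MB, leaving the OS, Docker daemon, and my second container to fight over what's left of 1GB.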
Interesting… I'd suggest filing an issue at https://github.com/docker/docker/issues/new with a minimally reproducible example if possible. The maintainers might be able to collect pprof information, etc., to help debug why Docker and Java together use so much memory.
I have reported an OOM and garbage collector issue in
Docker Linux + Java: Issues on Garbage Collecting cause Memory Leak and OOM