Docker Community Forums


Odd memory problem with Java on Ubuntu container

(Lmeadors) #1

I’ve got a Java app that launches another Java app. Most of the time this works great, but occasionally (<1% of the time) the second app OOMs hard: HotSpot crashes and writes a crash log, rather than just throwing an OutOfMemoryError.

Re-running the app with the same data on ECS will continue to fail consistently.

The log file contains the following (along with a stack trace saying “mmap failed for CEN and END part of zip file”), and it appears to be related to java.util.logging.Logger:

Java HotSpot(TM) 64-Bit Server VM warning: Attempt to deallocate stack guard pages failed.
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f890618b000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory.
# An error report file with more information is saved as:
# path here...

I’ve tried all manner of things, including increasing (and decreasing) the Xmx value. The combined values for the JVMs and the container are now less than half of the total memory available on the EC2 instance, so I’m thinking this is less about the amount of memory needed than about the method used to allocate it.
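One way to test the “allocation method, not amount” theory is to compare the kernel’s mmap ceiling against what a JVM actually uses. A rough sketch (Linux-only; `$JVM_PID` is a placeholder for the failing JVM’s pid, not something from the crash log):

```shell
# vm.max_map_count caps how many distinct mmap regions one process may
# hold; mmap-heavy JVM behavior (zip/jar mapping, thread stacks, GC
# regions) consumes entries even when plenty of RAM is free.
limit=$(cat /proc/sys/vm/max_map_count 2>/dev/null || echo unavailable)
echo "vm.max_map_count = $limit"

# For a live JVM, count its current mappings (substitute the real pid
# for $JVM_PID):
#   wc -l < /proc/$JVM_PID/maps
```

If the mapping count is anywhere near the limit, the os::commit_memory failure can occur regardless of how Xmx is set.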

If I run the same docker container on my desktop (OSX) with the same memory settings, it works 100% of the time.

I’ve been tinkering with this for a couple of weeks and have tried everything I can think of. :expressionless:

Anyone have any ideas?

(Tk6022) #2

Did you ever resolve this? I’m seeing something similar… The JVM itself is running out of memory rather than the heap, but the instance has plenty of memory left as far as I can tell…

(Lmeadors) #3

Yes - for me the problem was related to the configuration of the host machine.

I added these settings to its user data script and the issue was resolved:

sudo sysctl -w fs.file-max=2097152
sudo sysctl -w vm.max_map_count=67108864

The default values on the instance were very conservative; these were much better suited to my use case.

Don’t ask how I got to those - I spent days searching and trying stuff from SO and here before finally finding something that helped. :slight_smile:
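For anyone else landing here: `sysctl -w` only lasts until reboot, so it helps to persist those two values in a sysctl drop-in as well. A sketch, written to a local file so it runs unprivileged (on the host the file would go under /etc/sysctl.d/; the 90-jvm-mmap.conf name is just an example):

```shell
# Generate a sysctl drop-in with the limits from the fix above.
# On the host this would live at /etc/sysctl.d/90-jvm-mmap.conf.
cat > 90-jvm-mmap.conf <<'EOF'
# Raised limits for mmap-heavy JVM workloads
fs.file-max = 2097152
vm.max_map_count = 67108864
EOF

cat 90-jvm-mmap.conf
# Apply on the host with: sudo sysctl --system
```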

(mani) #4

Can we apply:

sudo sysctl -w fs.file-max=2097152
sudo sysctl -w vm.max_map_count=67108864

(with or without sudo)

inside the container instead? Would that have the same effect, or does it have to be done on the host, outside the container?
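Not an authoritative answer, but worth noting: both of those sysctls are kernel-wide rather than per-namespace, so writing them from an ordinary container will generally fail (/proc/sys is mounted read-only unless the container runs with elevated privileges). Reading them, though, shows the host’s effective values, which is a quick way to check what the container is actually running under:

```shell
# From inside the container: these sysctls are kernel-wide, so the
# values read here are the host's. Writing them from an unprivileged
# container normally fails because /proc/sys is mounted read-only.
cat /proc/sys/vm/max_map_count 2>/dev/null || echo unavailable
cat /proc/sys/fs/file-max 2>/dev/null || echo unavailable
```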