Out of Memory Error in openjdk Container

I built a Docker container for my JBoss application. The container runs in AWS EKS Fargate. I found that an OutOfMemory error occurs after the application has been running for several minutes. However, everything is fine if the root user is used rather than the user created in the Dockerfile.

My Dockerfile:

FROM openjdk:8u342-oraclelinux8

ENV WILDFLY_VERSION 11.0.0.Final
ENV JBOSS_HOME /usr/local/jboss_api
ENV HOME /usr/local

RUN groupadd -r jboss && useradd -ms /bin/bash -l -r -g  jboss jboss \
    && chown jboss /usr/local

USER jboss

WORKDIR ${HOME}
COPY ./wildfly .
RUN tar -xvf wildfly-$WILDFLY_VERSION.tar.gz
RUN rm ./wildfly-$WILDFLY_VERSION.tar.gz
RUN mv ./wildfly-$WILDFLY_VERSION ./jboss_api \
    && mkdir ${JBOSS_HOME}/standalone/configuration/properties

WORKDIR ${JBOSS_HOME}
COPY ./script ./bin
COPY ./properties ./standalone/configuration
COPY ./configurations/standalone.conf ./bin/standalone.conf

COPY ./ROOT.war ./standalone/deployments/ROOT.war

WORKDIR ${JBOSS_HOME}
CMD ./bin/server.sh start

I figured out there is a memory issue with the JVM, so -Xmx2500m is set in the Java config. -XX:+UnlockExperimentalVMOptions and -XX:+UseCGroupMemoryLimitForHeap were also added after reading some reference articles, but the problem still cannot be solved.
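
For reference, this is roughly how the flags end up in my bin/standalone.conf (the values shown here are illustrative rather than my exact production config):

# JVM options picked up when the server starts
JAVA_OPTS="-Xmx2500m -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"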

Anyone have an idea about this?

Thank you.

Check “Docker support in Java 8 — finally!” by Grzegorz Kocur on the SoftwareMill Tech Blog and make sure to use the -XX:InitialRAMPercentage and -XX:MaxRAMPercentage flags as well.

Thank you @meyay. For -XX:InitialRAMPercentage and -XX:MaxRAMPercentage, may I ask how to determine the values I should set for my application? I cannot find many suggestions in the article you shared.

I am not so sure anymore whether we used InitialRAMPercentage or not - it’s been a couple of years now since I have seen a non-Spring-Boot Java application container. Spring Boot provides the Maven build target spring-boot:build-image to create an optimized image that uses the Cloud Foundry memory calculator to determine the optimal memory usage.
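
If it helps as a pointer, the target is invoked roughly like this (the image name is just an example):

mvn spring-boot:build-image -Dspring-boot.build-image.imageName=myorg/myapp:latest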

If I remember right, with Wildfly we used values between 75 and 85 (which, as far as I recall, is for heap only!) for the *RAMPercentage flags, depending on how many threads and how much off-heap memory the application required. This is highly application specific and required us to perform a lot of load testing to find the sweet spot. You might want to google for blog posts that discuss the Java memory model.
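
To illustrate, the flags went into something like standalone.conf, roughly as below; 75.0 is only a starting point, your sweet spot will differ:

# heap is sized as a percentage of the container memory limit instead of fixed -Xms/-Xmx values
JAVA_OPTS="-XX:InitialRAMPercentage=75.0 -XX:MaxRAMPercentage=75.0"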

Thank you again. Actually, I am currently trying to containerize a Wildfly application that is already in use. It worked perfectly with settings like “-Xms2500m -Xmx2500m -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=256m” before, but it causes trouble in Docker after adding a new user instead of using the root user.

Moreover, I have read the article “JVM Memory Settings in a Container Environment” and tried to upgrade my Java from JDK 8 to OpenJDK 11. However, the same ERROR still exists…

What is “the same ERROR”? You need to share more details to create a picture of your situation in my head. I have no idea what you are using right now, no idea about your resource constraints, and no idea about the parameters your Wildfly container ultimately uses.

But then again, probably I am too far away from the topic, as it has been 3 years since I dealt with Wildfly on EKS Fargate. This is very likely not a use case I will encounter in my projects anymore, as almost everything has shifted to Spring Boot, which already comes with a Maven build target that creates images that don’t suffer from this problem.

Thank you for your time. I have come up with 2 possible ways to solve this problem and just sent a case to AWS support. This is how I described my case; I hope it can help.


I am currently using EKS Fargate to deploy our JBoss application (Wildfly 11, JRE 11 and Ubuntu 20.04 in use). The error occurred after the application had been running for several minutes. Using the ‘kubectl describe pod <pod-name>’ command, it showed a warning message from the kubelet:

“Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec “7d9cc9f42efb61b816128409474406721961826b3d5c062efdaaac7f6f362758”: OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: resource temporarily unavailable: unknown”.

Besides, the server log from the JBoss application showed “java.lang.OutOfMemoryError: Unable to create new native thread”.

After investigation, we have come up with 2 possible ways to solve the problem.

First, with reference to the article “How to solve java.lang.OutOfMemoryError: unable to create new native thread” (link: http://www.mastertheboss.com/jbossas/monitoring/how-to-solve-javalangoutofmemoryerror-unable-to-create-new-native-thread/), we figured out that our application shows similar behavior (i.e. an insufficient number of processes for the user). We have tried to increase the max user processes by using the “ulimit -u 4096” command, or by adding a config file to increase the soft limit of nproc to 4096. However, the same error still exists.
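
For completeness, this is roughly what we tried; the 4096 value and the file name under /etc/security/limits.d are simply what we picked for testing:

# attempt 1: raise the per-user process limit in the startup script before the JVM is launched
ulimit -u 4096

# attempt 2: raise the nproc soft limit for the jboss user via a limits.d drop-in
# /etc/security/limits.d/20-jboss.conf
jboss soft nproc 4096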

Based on the user guide from EKS, the default nofile and nproc soft limit is 1024 and the hard limit is 65535 for Fargate pods. May I ask the correct way to change the soft limit of nproc?

Second, we suspected that the number of process IDs is not sufficient for the application. Running cat /proc/sys/kernel/pid_max returned ‘32768’, which is the max PID from the kernel. Running cat /sys/fs/cgroup/pids/pids.max returned ‘max’. Therefore, we would like to know what ‘max’ exactly means, and how to change this parameter.
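
For reference, these are the checks we ran from inside the container (the ulimit values may differ between the root user and the jboss user):

# per-user limits as seen by the current shell
ulimit -u
ulimit -a

# kernel-wide and cgroup-level PID limits
cat /proc/sys/kernel/pid_max
cat /sys/fs/cgroup/pids/pids.max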

As I would need to google to answer those questions, I will leave the googling part to you. It seems like this is something that should be done in an init container.

Since you already raised a support ticket, I would suggest waiting for the AWS support response. Usually their responses cover the relevant topics.