Docker container runs differently on my local than it does in AWS

Expected behavior

Send email

Actual behavior

Throws:

    Exception in thread "Thread-2" java.lang.NoClassDefFoundError: jakarta/activation/DataContentHandler at …

Additional Information

Running the container in Docker Desktop on Windows 11 sends email.

Steps to reproduce the behavior

1. Start the container as a Fargate service
2. Connect to the running service
3. Attempt to send email
4. NoClassDefFoundError thrown by the application

So, after reading that, do you think I can just reproduce the error?

What does your Dockerfile/Compose file look like? What do you use to run the container?

Absolutely not… LOL. My hope was to identify some high-level options for where to start my hunt for this issue, because I expect a Docker container to run the same in Docker Desktop on Windows 11 as it does in an AWS ECS container using launch type FARGATE.

I create the baseline image ("FROM amazoncorretto:11-al2-jdk") using the official Tomcat Dockerfile: tomcat/9.0/jdk11/corretto-al2/Dockerfile

I build the image I plan to run from my baseline image and copy the application WAR files and config files to their respective locations.
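A minimal sketch of what that second stage might look like, assuming Tomcat's default layout (the baseline tag, WAR name, and config paths here are placeholders, not the actual files):

    # Hypothetical application image built on the Corretto/Tomcat baseline.
    # Image tag, WAR name, and config paths are illustrative only.
    FROM my-registry/tomcat9-corretto11-al2:baseline

    # Deploy the application into Tomcat's webapps directory
    COPY target/myapp.war /usr/local/tomcat/webapps/

    # Drop application configuration alongside Tomcat's own
    COPY conf/myapp.properties /usr/local/tomcat/conf/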

I run the container in Docker Desktop on my local Windows 11 PC and the software works as expected.
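Locally that is just a plain docker run, along these lines (container name, port mapping, and tag are assumptions for illustration):

    # Run the image locally, exposing Tomcat's default port
    docker run -d --name myapp -p 8080:8080 myapp:latest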

I tag the container for my AWS registry and push it to a private container repository.
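For a private ECR repository that sequence looks roughly like this (account ID, region, and repo name are placeholders):

    # Authenticate the local Docker CLI against the private ECR registry
    aws ecr get-login-password --region us-east-1 | \
        docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

    # Tag the image for the registry and push it
    docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest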

I build out the ECS service to execute the container, perform the same test case, and receive this application error.
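The service build-out is roughly the following (cluster, subnet, and security-group identifiers are hypothetical):

    # Register the task definition, then run it as a Fargate service
    aws ecs register-task-definition --cli-input-json file://taskdef.json
    aws ecs create-service \
        --cluster my-cluster \
        --service-name myapp \
        --task-definition myapp \
        --desired-count 1 \
        --launch-type FARGATE \
        --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc],securityGroups=[sg-0abc],assignPublicIp=ENABLED}"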

At this point I suspect an application dependency issue, as I have found two different versions of DataContentHandler.class in the WAR package, but I'm really not sure why I can't replicate the issue on my local.
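One way to confirm which jars bundle the class is to unpack the WAR and scan its libraries (the WAR name is a placeholder):

    # List every jar inside the WAR that contains DataContentHandler.class
    unzip -o myapp.war -d /tmp/myapp >/dev/null
    for j in /tmp/myapp/WEB-INF/lib/*.jar; do
        unzip -l "$j" | grep -q 'DataContentHandler.class' && echo "$j"
    done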

How exactly do you deploy each instance? Perhaps there's a slight difference in configuration or environment that may be the cause.

On my local I just use the docker command to run the container. In Amazon I use a task definition that points to the image in my repository. Here is a snippet from the JSON description of my task:

    "compatibilities": [
        "EC2",
        "FARGATE"
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "512",
    "memory": "1024",
    "runtimePlatform": {
        "cpuArchitecture": "X86_64",
        "operatingSystemFamily": "LINUX"
    },

I have no doubt the environments are different: one is a local Windows PC and the other is AWS serverless compute infrastructure.

You tag the image, not the container, so your container could work locally and another container could still fail from the same image, because some files are not mounted, for example. That is why it is important to see what you are doing exactly. Without that, people can spend hours guessing, which is not something everyone volunteers to do.

Maybe you are right and you do everything the exact same way, and there is some kernel or CPU dependency that differs between the machines. So the statement you often read, that “containers run the same way everywhere”, is not true; it just makes failure less likely. I had a strange issue once where one machine could build the image and run the container, while the other could neither build it nor run a container from an already built image. Fixing it required changing the application: we added a dependency related to CPU optimization, but I don't remember what it was, and frankly I don't even remember what programming language we used.

I am not even sure if ECS still uses Docker or has already moved over to containerd as the runtime.

But ECS services are not vanilla Docker. It is neither Docker Compose nor Docker Swarm, but none of the differences should result in missing class definitions.

It sounds like there is a library collision, as if some classes are taken from one library and some from another. Make sure your dependency tree is free of conflicts, and build an image with a new tag before testing, to eliminate re-using old images from the local image cache.
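Building under a fresh, unique tag (the tag below is just an example) makes it obvious whether a stale cached image is being re-run:

    # Force a clean rebuild and give the result a tag that has never been deployed
    docker build --no-cache -t myapp:2024-06-01 .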

My thoughts exactly, so I spent the weekend cleaning up this legacy web app by moving from a custom Ant build to Maven. All app dependencies are now mapped in the POM file, and I removed possible collision jars that were installed on the server. Got the app running and tested on my local development environment. All looked good, so I built the image with this new WAR file and it ran as expected on my local Docker engine. Pushed my image to the AWS registry and redeployed. Got the same class-not-found error when trying to send an email.

Looks like I will have to open a support case with Amazon to see if it is something I am doing wrong or if it's their Docker engine. At this point I'm not confident in migrating legacy apps to ECS. FYI, all of our newer Spring Boot apps run without issue.

Well, I had some residual jars left over that I missed, and once I removed those, all is right with the world!!! Thanks for your thoughts and ideas, everyone. Still kind of crazy that the container runs differently on a Windows Docker engine versus a Linux engine, but I have to say this wouldn't be the first time this has happened to me in my career.

Are you sure it's legacy vs. Spring Boot? Or might the difference be that the Spring Boot images are built using pipelines with untainted build environments, such as job containers that get discarded after the pipeline job finishes, so the next execution of the job starts with a clean state?

In my experience, most things that run on Docker also run fine on ECS. It is true, though, that some things are more cumbersome with ECS, like service discovery, enabling exec for the service tasks, or mounting EFS volumes.

This usually helps to get an idea which transitive dependencies lead to a collision:

    mvn dependency:tree

Then you start to exclude offending versions from dependencies, so you hopefully end up with a collision-free dependency graph, as in the sketch below. Combined with untainted build environments, it should work hassle-free.
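A hedged example of such an exclusion in the POM (the outer dependency is hypothetical; match the coordinates to what dependency:tree actually reports):

    <dependency>
        <groupId>com.example</groupId>
        <artifactId>legacy-mail-lib</artifactId>
        <version>1.2.3</version>
        <exclusions>
            <!-- Drop the transitive activation jar that collides
                 with the one the app already ships -->
            <exclusion>
                <groupId>com.sun.activation</groupId>
                <artifactId>jakarta.activation</artifactId>
            </exclusion>
        </exclusions>
    </dependency>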

This was an old Ant build script, but I spent this weekend porting everything over to Maven, and it made it easier to identify the redundant jars for sure.
