
Connection to docker container gets lost if concurrent requests are made

I have a MongoDB container that can be created using the following command:

docker run --ulimit nofile=64000:64000 -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=password -p 27017:27017 --name test-mongo mongo

I’m able to connect to MongoDB without problems, but every time I issue many concurrent requests I get a “connection refused” error. While it is still possible to connect to the container using “docker exec”, the port is no longer visible from the host using netstat.
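For reference, the kind of check being described looks roughly like this on the host (assuming the default port mapping from the command above; output details vary):

    # the published port should be listed while the container is reachable
    sudo netstat -tlnp | grep 27017
    # equivalent check with the newer ss tool
    ss -ltn 'sport = :27017'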

I have tested the setup on two more computers running Ubuntu 18.04 and the issue is not present on those. To complicate matters further, when the mongo container is created together with mongo-express using a docker-compose file, mongo-express can still access MongoDB even after the port disappears from the host machine.

I have a feeling the problem is with my Linux host, but I can’t figure out what it is.

So, it wasn’t a problem with either Docker or Elastic. Just to recap: the same script throwing PUT requests at a local Elasticsearch setup worked, but when throwing them at a containerized Elasticsearch it failed after a few thousand documents (20k). Note that the overall number of documents was roughly 800k.

So, what happened? When you set something up running on localhost and make a request to it (in this case a PUT request), that request goes through the loopback interface. In practice this means the whole exchange stays inside the kernel’s loopback path, which makes it a lot faster.

When the Docker container was set up, its ports were bound to the host. Although the script still makes requests to localhost on the desired port, a TCP connection now gets created between the host and the Docker container through the docker0 interface. This comes at the expense of two things:

- the time to set up a TCP connection
- the TIME_WAIT state (a quick check for this is shown below)
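A quick way to watch the second cost build up while the script runs is to count sockets stuck in TIME_WAIT (assuming a Linux host with the ss tool available):

    # the count climbs quickly while connections are being churned
    ss -tn state time-wait | wc -l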
This is actually a more realistic scenario. We set up Elasticsearch on another machine, ran the exact same test, and got, as expected, the same result.

The problem was that we were sending requests and creating a new connection for each of them. Due to the way TCP works, connections cannot be closed immediately. That meant we kept consuming connections until there were none left to use, because the rate of creation was higher than the actual close rate.
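Concretely, what runs out on the requesting side is the ephemeral port range: on Linux a closed connection sits in TIME_WAIT for 60 seconds, and outgoing connections can only use the local ports reported by this standard sysctl (defaults vary by distro):

    sysctl net.ipv4.ip_local_port_range

With the common default of roughly 28k ports, a high enough request rate parks them all in TIME_WAIT faster than they are released, and new connections start to fail.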

Three suggestions to fix this:

- Pause requests every once in a while. For example, sleep after every X requests so the TIME_WAIT period can pass and connections actually close.
- Send the Connection: close header: an option for the sender to signal that the connection will be closed after completion of the response.
- Reuse connection(s), as sketched below.
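
As an illustration of the third suggestion, here is a minimal sketch of the indexing loop with connection reuse. Python and the requests library are assumptions (the original script’s language isn’t stated), and the endpoint, index, and document shape are made up:

    import requests

    ES_URL = "http://localhost:9200"  # hypothetical Elasticsearch endpoint

    # A Session keeps a pool of TCP connections and reuses them across
    # requests, so we no longer open (and abandon in TIME_WAIT) a fresh
    # connection for every document.
    session = requests.Session()

    for i in range(800_000):
        resp = session.put(f"{ES_URL}/my-index/_doc/{i}", json={"value": i})
        resp.raise_for_status()

The second suggestion is a one-line change instead: passing headers={"Connection": "close"} on each request typically makes the server close the connection first, moving the TIME_WAIT accumulation to the server side rather than the requesting host, at the cost of still paying the connection setup for every document.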

Reinstalling Docker solved the problem.