[Solved] Containers can't connect to each other with 32+ containers up

Hi everyone,

I am running Docker on Windows for development. Everything is fine until I start up more than 32 containers.

A couple of projects connect to each other (project > API > microservice), which works fine until I start my 33rd container. At that point the containers can’t connect to each other anymore.

An important note here:
I can’t connect to other services through their hostnames. A container can connect to services within its own project, but not outside the “default” project network. Hard to describe, I know.

My environment

  • Windows 10 pro
  • Docker CE 17.12.0-ce-win47 (15139)
  • Compose 1.18.0
  • 2 CPU assigned
  • 10GB memory assigned

I understand that I could be running into limits, but assigning more CPUs and memory makes no difference. It also doesn’t matter which projects I start; 32 containers is the limit on my machine.

Does anyone have an idea where I can look to solve this?

Many thanks in advance.

Kind regards,
Bert Oost

A docker-compose project creates a private network by default. Services in the project are containers on that private network. They cannot reach services or containers on other networks, which includes the networks of other projects. This is true regardless of the number of containers running.
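As a minimal sketch of that behavior (the project and service names here are hypothetical, not from the thread):

```yaml
# docker-compose.yml for a hypothetical "project-a"
# Compose creates a network named after the project directory
# (e.g. "projecta_default") and attaches both services to it.
version: "3"
services:
  web:
    image: nginx:alpine
  api:
    image: nginx:alpine
# "web" can reach "api" by service name on projecta_default,
# but containers in another project's default network cannot
# reach either service.
```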

So, I don’t get what is working fine for up to 32 containers. If you can clarify that part, I may be able to help you.

I tried spinning up 45 containers on my laptop, with 8GB RAM assigned. Networking works as expected.

Thanks @rajchaudhuri

My projects are exposing a hostname via Traefik (as a container). Let’s say project A has “project-a.dev” as hostname, and project B has “project-b.dev” as hostname.

I can access both projects via my browser, but A cannot connect to B (via cURL or Guzzle) once I have more than 32 containers up and running, even though I can still reach both from my browser.

via hostname?

how about via ip address?


While I can access them via my browser.

what URL are you using for the browser access? localhost? via the mapped port?

No I am using hostnames, via Traefik, which maps hostnames to the containers.

The hostnames don’t matter for this case; the issue is that they can’t find each other when I bring up more than 32 containers. When I have 32 or fewer containers, they can.

thanks… don’t know what that is…

i was just trying to get clarifying info…

but for the containers, is A to B also using hostnames, or IP addresses?
if hostnames, did you try IP addresses?

so we can tell if it’s a network thing, or a name thing?

Traefik manages to map hostnames to the right container IPs (frontends vs backends).

The weird thing is, when I can’t access the names internally from A to B, I can still access them individually via my browser. So the names are still mapped to the right containers; otherwise I couldn’t connect via the browser either.

That’s because when you access the names via your browser, the request goes to the port exposed by Traefik, and Traefik maps the names to the container addresses. When a container tries to reach another container, the request does not reach Traefik at all.

The correct way for a container to reach another container is to use a container name or alias, set up at creation time. When you define a service in docker-compose, this is what happens: you can reach a service’s container from another container by using the service name or alias.
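For example (a sketch with hypothetical service and image names), within one compose project Docker’s embedded DNS resolves service names to container IPs:

```yaml
version: "3"
services:
  api:
    image: my-api        # hypothetical image
  worker:
    image: my-worker     # hypothetical image
    # From inside this container, "curl http://api" resolves
    # "api" via Docker's embedded DNS to the api container's IP
    # on the project's default network.
```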

Going back to your example, can A connect to B via curl or guzzle if there are <=32 containers?

Yes. Then it works perfectly.

I understand connecting on container name or IP always works.

Unfortunately, my colleague has this issue with 8 containers instead of 32.

I investigated the option of connecting by container name or IP, but that’s very hard, since we don’t assign names to our containers so that we’re able to scale.
Therefore, in non-development environments we use Kubernetes to manage our deployments; we don’t know the end node, IP, or internal hostname in advance.

I found the problem…
After bringing up a couple of projects, each of which creates a (default) network, Docker created a new network with a wrong IP range. That range is also used by our local network infrastructure, which probably caused an IP conflict.

See this screenshot:

When I take that project down, the problem goes away.

In the above image, the problem is a network with a conflicting IP range.
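A quick way to spot such a conflict is to list every Docker network’s subnet and compare them against the local LAN ranges. This is a sketch of that check (it requires a running Docker daemon; the output format depends on your networks):

```shell
# Print each Docker network's name and its IPAM subnet(s),
# so a range colliding with the local LAN stands out.
docker network ls --format '{{.Name}}' | while read -r net; do
  printf '%s\t' "$net"
  docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}' "$net"
  echo
done
```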
We solved it by removing the default networks from every individual project and assigning everything to one shared default (development) network.
This works because there is now only one network for all our development projects, which is much cleaner too.
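The fix described above can be sketched like this (the network name “development” is an example, not from the thread):

```yaml
# One-time setup on the host (example network name):
#   docker network create development
#
# Then, in each project's docker-compose.yml, point the default
# network at that pre-existing external network instead of letting
# compose create a new one per project:
version: "3"
services:
  api:
    image: my-api   # hypothetical image
networks:
  default:
    external:
      name: development
```

With this in every project, all services join the single "development" network, so only one subnet is ever allocated and cross-project service names resolve directly.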

Thanks for updating this. I tried to reproduce the problem many times, but it never happened for me.