Networking after changing IP addresses

I have two Docker containers that can communicate with each other on the default IP range 172.17.0.0/16. One of them publishes port 80 to port 8081. That subnet overlaps with the subnet the host is on, so I tried to change the IP range the containers use. I set “bip” in “daemon.json” to “192.168.0.1/16”, and it changed the IP addresses of the two containers, but the port forwarding no longer works. Port 8081 is still open on the host, and port 80 is open inside the container, but traffic doesn’t seem to pass between them. The browser just hangs indefinitely, and so does wget.

Port forwarding resumes when “bip” is changed back to 172.17.0.1/16 (simply removing it didn’t change the addresses back, but that’s probably normal). I’ve tried restarting the containers, the Docker daemon, and the host machine. Is this normal? Do I need to recreate the containers to have the ports forwarded to the new subnet?
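For reference, this is roughly what the config looks like; on Docker Desktop the same JSON is edited in the Docker Engine pane of the settings rather than in a file on disk:

```json
{
  "bip": "192.168.0.1/16"
}
```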

Thank you.

wget responded:

```
HTTP request sent, awaiting response… Read error (Connection reset by peer) in headers.
Retrying.
```

and Safari reported:

Safari can’t open the page because the server unexpectedly dropped the connection. This sometimes occurs when the server is busy. Wait for a few minutes, and then try again.

The Docker Desktop virtual machine has its own subnet, somewhere in 192.168.x.x, so you can’t let the container network overlap with that one either.
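If you want to confirm which subnet the bridge actually ended up with after editing “bip”, you can ask the daemon directly; the output below is just what the default looks like:

```
$ docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
172.17.0.0/16
```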

That actually shouldn’t matter unless you want to send requests directly to the host’s IP address. I configured my Docker Desktop on Mac to use 192.168.100.0/24, which was my LAN subnet. I did it to test whether I could break something. I couldn’t, and I forgot to change it back to the default, which was 192.168.65.0/24. I didn’t even notice.

Thank you for responding. I’m sorry I disappeared; with the weekend and other things, I wasn’t available.

The experience you describe is interesting. I wonder how that worked. What I believe is happening to me is:

  1. Inside the container, the name company.enterprise.server is resolved to 172.17.120.xxx.
  2. The bridge network had an IP range from 172.17.0.0 to 172.17.255.255 (or something like that).
  3. The routing table inside the container decides that 172.17.120.xxx must be on the local subnet, so the traffic is not sent to the gateway at 172.17.0.1 but “directly” to a server that doesn’t exist at 172.17.120.xxx (see the sketch after this list).
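A minimal way to check that theory, assuming the image ships iproute2 and the container is named web (name hypothetical; the output is what a default-bridge container typically shows):

```
$ docker exec web ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
```

The second route covers all of 172.17.0.0/16, so an address like 172.17.120.xxx is treated as directly reachable on the bridge and never handed to the default gateway.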

It’s hard to see how anything could distinguish between a server on the company’s enterprise private subnet and the Docker-host-only private subnet when the two overlap.

Docker Desktop has a “Copy docker run” option, so I recreated the container, and the new one worked. I did a diff on the two inspection reports, and it looks like the port binding on the container that didn’t work listed the HostIp as “0.0.0.0”, while on the container that did work it was “”. It turns out I can connect to the container if I navigate to my laptop’s hostname:8081, but not to localhost:8081, because the port forwarding is only bound to the external interface.

I don’t know what I did that caused the change in the bind setting, and since I’m on a Mac and can’t get to the configuration files easily, I’m not going to edit the container. I can live with this, I suppose.
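For anyone who wants to compare the same field without diffing the whole report (the container name web is hypothetical; the output shows my non-working binding):

```
$ docker inspect --format '{{json .HostConfig.PortBindings}}' web
{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"8081"}]}
```

If I ever recreate it again, I’d publish with an explicit bind address (e.g. “-p 127.0.0.1:8081:80” for localhost-only) so the intent is at least unambiguous.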

Thank you for your help. It definitely pointed me in the right direction.