Docker Community Forums

172.18.x/24 conflict changed to 172.19.x ERROR: no available IPv4 pool

ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

I set up a Docker host on a new CentOS VM.

My network uses the subnet 172.18.0.0/24, but that range is not configured on this VM itself, so Docker had no way of knowing not to use that IP space.

After the Docker install, I noticed that the docker0 bridge was down because it had been set up with an IP in that conflicting range.

I edited the subnet:
vi /etc/docker/daemon.json

{
  "default-address-pools": [
    { "base": "172.19.0.0/24", "size": 24 }
  ]
}

Now I can run ip link set docker0 up and everything works.

Spun up two containers fine.

The issue is that when I try to spin up something with multiple containers, such as Rocket.Chat, I get this error:

ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

I followed the default instructions from:
https://docs.rocket.chat/installation/docker-containers
although I'm running on a CentOS host, not Ubuntu.

The error appears when I run:
curl -L https://raw.githubusercontent.com/RocketChat/Rocket.Chat/develop/docker-compose.yml -o docker-compose.yml
docker-compose up -d

Everything is a default install except for that one change from 172.18 to 172.19.

Do I need to add a Gateway?

I am a bit of a docker noob.
Any help you can provide is greatly appreciated.

Here is some additional info:

docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
7762ab0209a6   bridge    bridge    local
32ca5b0e588d   host      host      local
283be9dac8b8   none      null      local

docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "7762ab0209a67b7fba9061f388ba810410c71e5932182f79b39335171c483560",
        "Created": "2020-10-14T12:56:08.751473739-07:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/24"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "27c991d9d443f754719a199a3ca8257aeb14ea63a4cd771d4f50cf54ead6f6cf": {
                "Name": "determined_knuth",
                "EndpointID": "ee041910ddc75049a5fda32a5c67d95136b815372ea563328bc6b1b224659d11",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:1a:4a:d4:4a:7c brd ff:ff:ff:ff:ff:ff
inet 10.130.1.56/24 brd 10.130.1.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::2f07:9388:d539:8144/64 scope link noprefixroute
valid_lft forever preferred_lft forever
44: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:df:58:d0:ff brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/24 brd 172.19.0.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:dfff:fe58:d0ff/64 scope link
valid_lft forever preferred_lft forever
48: veth2707adc@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 02:d4:75:ac:65:40 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::d4:75ff:feac:6540/64 scope link
valid_lft forever preferred_lft forever

docker-compose --version

docker-compose version 1.24.0, build 0aa59064

When docker networks are created (e.g. using docker network create or indirectly through docker-compose) without explicitly specifying a subnet range, dockerd allocates a new /16 network, starting from 172.N.0.0/16, where N is a number that is incremented (e.g. N=17, N=18, N=19, N=20, …). A given N is skipped if a docker network (a custom one, or the default docker bridge) already exists in the range.

You can explicitly specify a safe IP range (i.e. one that excludes the host IPs in your network) when creating a docker bridge on the CLI. But usually bridge networks are created automatically by docker-compose with default blocks, and excluding these IPs reliably would require modifying every docker-compose.yml file you encounter. It's bad practice to include host-specific things inside a compose file.

Instead, you can play with the networks that docker considers allocated, to force dockerd to "skip" subnets. I'm outlining four methods below:

Method #0 – configure the pool of IPs in the daemon config

If your docker version is recent enough (TODO check minimum version), and you have permission to configure the docker daemon's command-line arguments, you can try passing a --default-address-pool option to the dockerd command. For example:

# allocate /24 subnets with the given CIDR prefix only
# note that this prefix excludes 172.17.*
--default-address-pool base=172.24.0.0/13,size=24
You can add this setting in /etc/default/docker or /etc/sysconfig/docker, depending on your distribution. There is also a way to set this parameter in daemon.json (see syntax).
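For reference, the daemon.json equivalent of the flag above uses the same example values (a 172.24.0.0/13 base carved into /24 subnets); restart the daemon after changing it:

```json
{
  "default-address-pools": [
    { "base": "172.24.0.0/13", "size": 24 }
  ]
}
```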

Method #1 – create a dummy placeholder network

You can prevent the entire 172.17.0.0/16 from being used by dockerd (in future bridge networks) by creating a very small docker network anywhere inside 172.17.0.0/16.

Find 4 consecutive IPs in 172.17.* that you know are not in use in your host network, and sacrifice them in a “tombstone” docker bridge. Below, I’m assuming the ips 172.17.253.0, 172.17.253.1, 172.17.253.2, 172.17.253.3 (i.e. 172.17.253.0/30) are unused in your host network.

docker network create --driver=bridge --subnet 172.17.253.0/30 tombstone

created: c48327b0443dc67d1b727da3385e433fdfd8710ce1cc3afd44ed820d3ae009f5

Note the /30 suffix here, which defines a block of 4 IPs. In theory the smallest subnet is a /31, which consists of just 2 IPs and is usable only for point-to-point links; Docker asks for a /30 minimum, probably so that after the network and broadcast addresses are taken out, there is still room for a gateway and one container. I picked .253.0 arbitrarily; you should pick something that's not in use in your environment. Also note that the name tombstone is nothing special, you can rename it to anything that will help you remember why it's there when you find it again several months later.
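As a quick sanity check on those sizes, Python's standard ipaddress module (nothing Docker-specific here) confirms the arithmetic:

```python
import ipaddress

# A /30 spans exactly 4 addresses. With the classic network/broadcast
# convention, that leaves 2 usable hosts: one for the bridge gateway
# and one for a container, which is why Docker wants a /30 minimum.
net = ipaddress.ip_network("172.17.253.0/30")
print(net.num_addresses)               # 4
print([str(h) for h in net.hosts()])   # ['172.17.253.1', '172.17.253.2']
```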

Docker will modify your routing table to send traffic for these 4 IPs to go through that new bridge instead of the host network:

Output of route -n:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.5.1     0.0.0.0         UG    0      0        0 eth1
172.17.253.0    0.0.0.0         255.255.255.252 U     0      0        0 br-c48327b0443d
172.20.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.5.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
Note: Traffic for 172.17.253.{0,1,2,3} goes through the tombstone docker bridge just created (br-c4832…). Traffic for any other IP in the 172.17.* would go through the default route (host network). My docker bridge (docker0) is on 172.20.0.1, which may appear unusual – I’ve modified bip in /etc/docker/daemon.json to do that. See this page for more details.

The twist: if there exists a bridge occupying even a subportion of a /16, new bridges created will skip that range. If we create new docker networks, we can see that the rest of 172.17.0.0/16 is skipped, because the range is not entirely available.

docker network create foo_test

c9e1b01f70032b1eff08e48bac1d5e2039fdc009635bfe8ef1fd4ca60a6af143

docker network create bar_test

7ad5611bfa07bda462740c1dd00c5007a934b7fc77414b529d0ec2613924cc57

The resulting routing table:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.5.1     0.0.0.0         UG    0      0        0 eth1
172.17.253.0    0.0.0.0         255.255.255.252 U     0      0        0 br-c48327b0443d
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-c9e1b01f7003
172.19.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-7ad5611bfa07
172.20.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.5.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
Notice that the rest of the IPs in 172.17.0.0/16 have not been used. The new networks reserved 172.18.0.0/16 and 172.19.0.0/16. Sending traffic to any of your conflicting IPs outside that tombstone network would still go via your host network.

You would have to keep that tombstone network around in docker, but not use it in your containers. It’s a dummy placeholder network.
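The skipping behaviour can be sketched in a few lines of Python. This is a deliberately simplified model of the allocator (the real dockerd also consults default-address-pools), but it shows why a lone /30 poisons its whole /16:

```python
import ipaddress

# Simplified model (assumption, not dockerd's actual code): walk the
# 172.N.0.0/16 candidates and hand out the first one that does not
# overlap anything already taken. Even a tiny /30 blocks its /16.
taken = [
    ipaddress.ip_network("172.17.253.0/30"),  # the tombstone bridge
    ipaddress.ip_network("172.20.0.0/16"),    # docker0 (custom bip)
]

def next_free_16():
    for n in range(17, 32):
        cand = ipaddress.ip_network(f"172.{n}.0.0/16")
        if not any(cand.overlaps(t) for t in taken):
            taken.append(cand)
            return cand

print(next_free_16())  # 172.18.0.0/16 (172.17.0.0/16 skipped: tombstone)
print(next_free_16())  # 172.19.0.0/16
```

This matches the routing table above: 172.17.0.0/16 is never handed out, and the two new compose networks land on 172.18.0.0/16 and 172.19.0.0/16.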

Method #2 – bring down the conflicting bridge network

If you wish to temporarily avoid the IP conflict, you can bring the conflicting docker bridge down using ip: ip link set dev br-xxxxxxx down (where xxxxxx represents the name of the bridge network from route -n or ip link show). This will have the effect of removing the corresponding bridge routing entry in the routing table, without modifying any of the docker metadata.

This is arguably not as good as the method above, because you’d have to bring down the interface possibly every time dockerd starts, and it would interfere with your container networking if there was any container using that bridge.

If method 1 stops working in the future (e.g. because docker tries to be smarter and reuse unused parts of an ip block), you could combine both approaches: e.g. create a large tombstone network with the entire /16, not use it in any container, and then bring its corresponding br-x device down.

Method #3 – reconfigure your docker bridge to occupy a subportion of the conflicting /16

As a slight variation of the above, you could make the default docker bridge overlap with a region of 172.17.0.0/16 that is not used in your host network. You can change the default docker bridge subnet by changing the bridge IP (i.e. the bip key) in /etc/docker/daemon.json (see this page for more details). Just make it a subregion of your /16, e.g. a /24 or smaller.

I’ve not tested this, but I presume any new docker network would skip the remainder of 172.17.0.0/16 and allocate an entirely different /16 for each new bridge.
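To make that concrete, a minimal daemon.json sketch for this method; the 172.17.253.0/24 slice is only an example, substitute a range that is actually unused on your network:

```json
{
  "bip": "172.17.253.1/24"
}
```

Note that bip takes the bridge's own address in CIDR notation, not the network address, hence .253.1 rather than .253.0.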