Cannot create network (br-....) conflicts with (br-...): networks have overlapping IPv4

I have a networking problem with docker-compose. My docker-compose project creates two networks: the default network and a second one for a subset of the containers. One of the containers on the non-default network needs to be reachable at a specific IP address, which I handle by defining a bridge network with a particular subnet (the full docker-compose files are at the end of this post).

This all worked until this afternoon, and I'm no longer able to launch the project - the bridge networks persist across docker daemon restarts and are recreated after a system reboot. I've tried deleting the networks via docker (they are not found), purging them from NetworkManager, and rewriting the iptables rules - roughly the steps sketched below. Even when the networks appear to be deleted, docker recreates them on startup, and I'm still unable to launch the docker-compose application.
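
For reference, the cleanup I attempted looked roughly like this (the br-* interface names are the ones from the ip a output further down; the exact commands varied a bit between attempts):

$ docker network prune -f
$ sudo ip link set br-f2ae1ce631d7 down
$ sudo ip link delete br-f2ae1ce631d7
$ sudo ip link delete br-01792762f10b
$ nmcli device status        # checked whether NetworkManager had picked up the bridges
$ sudo iptables -t nat -F && sudo iptables -F
$ sudo systemctl restart docker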

As you can see in the output below, docker doesn't list any networks besides the default local ones, yet the bridge interfaces created by docker-compose (specifically the one with the fixed subnet) are still around and cannot be cleaned up.

Any help would be appreciated here.

$ docker version
Client:
 Version:           20.10.3_ce
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        46229ca1d815
 Built:             Sun Feb 14 00:00:00 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.3_ce
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       46229ca1d815
  Built:            Sun Feb 14 00:00:00 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.1.5_catatonit
  GitCommit:        
$ docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.3_ce
 Storage Driver: btrfs
  Build Version: Btrfs v5.9 
  Library Version: 102
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux oci runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: 
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.11.2-1-default
 Operating System: openSUSE Tumbleweed
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 15.29GiB
 Name: x1carbon
 ID: 4PR7:K6DU:4HX2:QZ3L:CFNI:UK6B:VOO6:BBWA:Y4UG:JLVI:A67X:YXGB
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Default Address Pools:
   Base: 169.254.170.0/24, Size: 24
   Base: 172.81.0.0/16, Size: 24
$ docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          0         0         0B        0B
Containers      0         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     0         0         0B        0B
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
73e701c79222   bridge    bridge    local
0dad5075b794   host      host      local
30942a07e36b   none      null      local
$ ip a
<snip>
8: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:18:50:e5:3e brd ff:ff:ff:ff:ff:ff
    inet 172.80.0.1/24 brd 172.80.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:18ff:fe50:e53e/64 scope link 
       valid_lft forever preferred_lft forever
9: br-01792762f10b: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:81:51:a8:80 brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.1/16 brd 172.21.255.255 scope global br-01792762f10b
       valid_lft forever preferred_lft forever
11: br-f2ae1ce631d7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:35:93:1a:20 brd ff:ff:ff:ff:ff:ff
    inet 169.254.170.1/24 brd 169.254.170.255 scope global br-f2ae1ce631d7
       valid_lft forever preferred_lft forever
$ docker-compose up
WARNING: The AWS_ACCESS_KEY_ID variable is not set. Defaulting to a blank string.
WARNING: The AWS_SECRET_ACCESS_KEY variable is not set. Defaulting to a blank string.
WARNING: The AWS_SESSION_TOKEN variable is not set. Defaulting to a blank string.
Creating network "rdbms-ingest-spark_credentials_network" with driver "bridge"
ERROR: cannot create network f0ee4af724c0bd15f87e440ffaa2a0cd15cff45b04a17eb0d0e88569e1541e18 (br-f0ee4af724c0): conflicts with network f2ae1ce631d716b756ee10accfb4cd8d19018d0243afb354f8896c37092b98a9 (br-f2ae1ce631d7): networks have overlapping IPv4

As you can see in the docker-compose*.yml files below, I'm trying to create a network with subnet 169.254.170.0/24 and a single container with the 169.254.170.2 IPv4 address.
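
For reference, that network on its own is equivalent to something like the following manual command (the network name here is just illustrative) - I would expect it to hit the same overlapping-IPv4 error as long as the stale br-f2ae1ce631d7 bridge exists:

$ docker network create --driver bridge --subnet 169.254.170.0/24 --gateway 169.254.170.1 credentials_network_test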

# docker-compose.yml
version: '2'

volumes:
  mariadb:
  spark:

services:
  spark-master:
    #build: .
    image: docker.io/bitnami/spark:3-debian-10
    user: root
    volumes:
      - spark:/data
    environment:
      - SPARK_MODE=master
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    ports:
      - '8080:8080'
      - '4040:4040'

  spark-worker:
    image: docker.io/bitnami/spark:3-debian-10
    user: root
    volumes:
      - spark:/data
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark-master:7077
      - SPARK_WORKER_MEMORY=1G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no

  mariadb:
    image: mariadb
    ports:
      - 3306:3306
    volumes:
      - mariadb:/var/lib/mysql
      - ./anon_sync_20210310.sql:/docker-entrypoint-initdb.d/01-anon_sync_20210310.sql
    environment:
      MYSQL_ROOT_PASSWORD: maria
      MYSQL_DATABASE: test
#docker-compose.override.yml

version: "2"

networks:
  # This special network is configured so that the local metadata
  # service can bind to the specific IP address that ECS uses
  # in production
  credentials_network:
    driver: bridge
    ipam:
      config:
        - subnet: "169.254.170.0/24"
          gateway: 169.254.170.1

services:
  # This container vends credentials to your containers
  ecs-local-endpoints:
    # The Amazon ECS Local Container Endpoints Docker Image
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      # Mount /var/run so we can access docker.sock and talk to Docker
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # Share local AWS credentials with ECS service
      AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
      AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"
      AWS_SESSION_TOKEN: "${AWS_SESSION_TOKEN}"
      AWS_DEFAULT_REGION: "eu-west-1"
    networks:
      credentials_network:
        # This special IP address is recognized by the AWS SDKs and AWS CLI
        ipv4_address: "169.254.170.2"

  # Here we reference the application container(s) that we are testing
  # You can test multiple containers at a time, simply duplicate this section
  # and customize it for each container, and give it a unique IP in 'credentials_network'.
  spark-master:
    depends_on:
      - ecs-local-endpoints
    networks:
      - credentials_network
      - default
    environment:
      AWS_DEFAULT_REGION: "eu-west-1"
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
      #ECS_CONTAINER_METADATA_URI: "http://ecs-local-endpoints/v3"

  spark-worker:
    depends_on:
      - ecs-local-endpoints
    networks:
      - credentials_network
      - default
    environment:
      AWS_DEFAULT_REGION: "eu-west-1"
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
      #ECS_CONTAINER_METADATA_URI: "http://ecs-local-endpoints/v3"