Using network_mode: host and also being in another docker network?

I am using Docker to run my homebridge instance (homebridge-docker-ui-x); my working docker-compose.yml is this:

version: '2'
services:
  homebridge:
    container_name: homebridge
    image: homebridge/homebridge:latest
    restart: always
    network_mode: host
    environment:
      - PGID=1000
      - PUID=1000
      - HOMEBRIDGE_CONFIG_UI=1
      - HOMEBRIDGE_CONFIG_UI_PORT=8087
    volumes:
      - ./volumes/homebridge:/homebridge
    logging:
      driver: json-file
      options:
        max-size: "10mb"
        max-file: "1"

After deploying this, I can reach the web UI at myserverip:8087.

I am also using gatus to display the health status of my services in a dashboard:

version: "3.8"
services:
  gatus:
    container_name: gatus
    image: twinproduction/gatus:latest
    restart: always
    volumes:
      - ./config:/config
    networks:
      - proxy

networks:
  proxy:
    name: proxy_proxy_network
    external: true

As you can see, gatus is in a proxy network. If I run a docker exec [...] command, I can ping all services that are members of this network. For example, the proxy can ping gatus and gatus can ping the proxy. With this setup I can, for example, display the health status of the proxy in gatus.
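For context, the gatus side of such a check is roughly this in its config.yaml (a sketch; the endpoint name, URL and condition are examples, not my actual config):

```yaml
# gatus config.yaml (sketch) - endpoint name/URL/condition are examples
endpoints:
  - name: nginx-proxy-manager
    url: "http://nginx_proxy_manager:81"   # resolvable by container name on the shared network
    interval: 60s
    conditions:
      - "[STATUS] == 200"
```

This works because containers on the same user-defined network can resolve each other by container name.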

Now I want to do the same with homebridge: add homebridge to the proxy (or any other) network so gatus can see the homebridge service in order to determine its health status. But homebridge is already running with network_mode: host, and I think this is necessary: if I just replace network_mode: host with a networks: - proxy section, my homebridge instance keeps running (checked with docker ps -a) but the web UI is not reachable.

So how can I use network_mode: host and also attach homebridge to another Docker network, so homebridge can see other services?

As far as I know, the same container cannot be connected to both the host network and a bridge network. A container can be part of multiple bridge networks, but not part of host and bridge at the same time.

@capriciousduck is right. The reason is that the host network is not a special network. It is just the lack of network isolation. So you have a process in the container seeing the IP addresses of the host. All of them. You can’t have a container IP, because there is no container network namespace. That is what the network mode is for: choosing the network namespace. It could be another container’s network namespace if you use “network_mode: service:service_name”.

Could you please show an example of how to properly use network_mode: service:service_name? I am not quite sure which container needs this and which service it should point to.

# docker-compose.yml for homebridge
version: '2'
services:
  homebridge:
    container_name: homebridge
    image: homebridge/homebridge:latest
    restart: always
    network_mode: host
    environment:
      - PGID=1000
      - PUID=1000
      - HOMEBRIDGE_CONFIG_UI=1
      - HOMEBRIDGE_CONFIG_UI_PORT=8087
    volumes:
      - ./volumes/homebridge:/homebridge
    logging:
      driver: json-file
      options:
        max-size: "10mb"
        max-file: "1"

# docker-compose.yml for gatus
version: "3.8"
services:
  gatus:
    container_name: gatus
    image: twinproduction/gatus:latest
    restart: always
    volumes:
      - ./config:/config
    networks:
      - proxy

networks:
  proxy:
    name: proxy_proxy_network
    external: true

# docker-compose.yml for nginx-proxy-manager
services:
  app:
    container_name: nginx_proxy_manager
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - proxy_network

networks:
  proxy_network:

So here nginx_proxy_manager and gatus can see each other. Those are, by the way, 3 different docker-compose.yml files and not just one file, in case that is important.

That was the example. Of course “service_name” must be replaced with the actual compose service name.
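To illustrate the pattern, here is a minimal sketch (the service and image names are just examples, not part of your setup): the sidecar joins the app service’s network namespace instead of getting its own.

```yaml
services:
  app:
    image: nginx:alpine            # this service owns the network namespace
  sidecar:
    image: alpine:latest
    network_mode: service:app      # joins app's namespace; no own IP, no "networks:" allowed here
    command: ["sleep", "infinity"]
```

If you docker exec into sidecar, wget -qO- http://localhost:80 reaches nginx, since both containers share one network stack. Note this only ties two containers together; it does not combine host and bridge networking.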

If you want to use the network in another compose project, why wouldn’t you define the service right in that compose project?

If you used the host network because you thought that was the way to access a container’s port, that was wrong. Use port mappings:

CLI: https://docs.docker.com/network/#published-ports

Compose: https://docs.docker.com/compose/networking/#configure-the-default-network
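As a sketch, publishing a container port instead of using the host network looks like this (the image, network and port numbers are examples):

```yaml
services:
  web:
    image: nginx:alpine
    networks:
      - proxy            # normal bridge network: other containers reach it as "web"
    ports:
      - "8087:80"        # host port 8087 -> container port 80, for access from outside

networks:
  proxy:
    external: true
    name: proxy_proxy_network
```

The service stays on a normal bridge network (so other containers can reach it by name), while the host and the LAN reach it on the published port.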

That was the example. Of course “service_name” must be replaced with the actual compose service name.

So I put network_mode: service:proxy in the homebridge docker-compose file and remove network_mode: host? I am asking because I’m confused whether this section is for homebridge, gatus or the proxy.

If you want to use the network in another compose project, why wouldn’t you define the service right in that compose project?

To be more flexible. Each Docker container is a service doing its own thing. They don’t have anything in common except the fact that I want to see their health status with gatus - that’s it. I don’t want to put all services in one single docker-compose file, because when I simply want to deploy it with docker compose up -d, I would deploy all services inside this file. I want everything separate to keep the overview.

If you used the host network because you thought that was the way to access a container’s port, that was wrong. Use port mappings

I am using the host network in order to use the homebridge Docker container. If I remove this line and put it in a Docker network of my choice, I can deploy the container and also access the web UI, but I can’t connect HomeKit to homebridge.

I think if you put homebridge in network_mode: host, then the process gets direct access to the host network interfaces. It will not be available on any other Docker network.

If you have a fixed node IP, you could try to monitor via host-IP:port; your monitor container should have access to it. (This does not work with localhost/127.0.0.1.)

Otherwise you could try to expose all the necessary ports from homebridge manually using ports:. This should enable HomeKit and also make homebridge available within a Docker network.
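A sketch of the first suggestion on the gatus side, assuming the host has the fixed IP 192.168.178.34 and homebridge’s UI port is 8087 (both values taken from earlier in this thread; adjust as needed):

```yaml
# gatus config.yaml fragment (sketch): monitor homebridge via the host IP
endpoints:
  - name: homebridge
    url: "http://192.168.178.34:8087"   # host IP, not localhost/127.0.0.1
    interval: 60s
    conditions:
      - "[STATUS] == 200"
```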

I won’t quote your message this time, because it will be easier to reply to everything.

So sharing network namespaces will not help you. I mentioned it only as an explanation of why you can’t have the host network and a container network at the same time. I didn’t understand your issue at first, but in your case, it looks like the host network is indeed required.

I’m not sure, as I have never had to configure something that needs to automatically discover services running in Docker containers, but that could require the host network.

If you try to specifically search for “home kit connect to docker container”, you can find issues like this:

It is a long and old issue, but at the end of it there seems to be a solution: they run an mDNS repeater in a Docker container. I didn’t read everything, but I think the suggestions they describe could help you too, so you could use the proxy, add homebridge to the proxy network, access the UI through the proxy (if you want to allow external access), and optionally have a gatus network if you want to allow health checking through a separate network.

The other solution could be what @bluepuma77 suggested with the host IP. If you want to expose the health check endpoint on localhost, you can use something like I described here:

I wouldn’t do that if the mdns-repeater mentioned on GitHub helps.

They run an mDNS repeater in a Docker container.

Sounds like a good solution. So I did the following right now with the angelnu/mdns_repeater image:

# docker-compose.yml for homebridge
version: '2'
services:
  homebridge:
    container_name: homebridge
    image: homebridge/homebridge:latest
    restart: always
    environment:
      - PGID=1000
      - PUID=1000
      - HOMEBRIDGE_CONFIG_UI=1
      - HOMEBRIDGE_CONFIG_UI_PORT=8087
    volumes:
      - ./volumes/homebridge:/homebridge
    logging:
      driver: json-file
      options:
        max-size: "10mb"
        max-file: "1"
    networks:
      - proxy
    ports:
      - 8087:8087 #webapp
      - 5353:5353 #homebridge
      - 21064:21064 #homebridge, default is 21063
      - 21063:21063 #just also test 21063 as mentioned above

networks:
  proxy:
    name: proxy_proxy_network
    external: true

# docker-compose.yml for nginx-proxy-manager and the mDNS reflector
services:
  app:
    container_name: nginx_proxy_manager
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - proxy_network

  mdns-reflector:
    build:
      context: .
      dockerfile: Dockerfile
    image: docker.io/yuxzhu/mdns-reflector:latest
    container_name: mdns-repeater
    command: mdns-reflector -fnl info -- enp3s0 br-98eea5ec0569
    network_mode: host
    restart: unless-stopped


networks:
  proxy_network:

enp3s0 is the network interface of my host:

$ ip addr | grep enp3s0
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.178.34/24 brd 192.168.178.255 scope global dynamic noprefixroute enp3s0

br-98eea5ec0569 is hopefully the correct entry for the Docker NIC:

$ ip addr | grep br-98eea5ec0569
9: br-98eea5ec0569: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    inet 172.23.0.1/16 brd 172.23.255.255 scope global br-98eea5ec0569
23: vethb0295dc@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-98eea5ec0569 state UP group default
41: vethb88d868@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-98eea5ec0569 state UP group default
307: veth11c8b00@if306: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-98eea5ec0569 state UP group default
325: veth21d7ce0@if324: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-98eea5ec0569 state UP group default

172.23.0.1/16 looks right, as:

$ docker inspect proxy_proxy_network
[
    {
        "Name": "proxy_proxy_network",
               [...]
            "Config": [
                {
                    "Subnet": "172.23.0.0/16",
                    "Gateway": "172.23.0.1"
                 }
           [...]
        "Containers": {
            "0df4e2318334ee005ab94e9506e6efda567452fb84891ff0427deed38a402083": {
                "Name": "nginx_proxy_manager",
               [...]
                "IPv4Address": "172.23.0.2/16",
               [...]
            },
            "64c74a6a1d58564ffe6ab5f50d545e9b53ece5238204210456ba3260d25311c2": {
                "Name": "gatus",
               [...]
                "IPv4Address": "172.23.0.3/16",
               [...]
            },
            "f1ce0a399e11ae9cf1b1a9368c4e6aa72aad1240531828b0f980e52262b53391": {
                "Name": "homebridge",
               [...]
                "IPv4Address": "172.23.0.4/16",
               [...]

Inside the mdns-repeater log files I can see this now. The homebridge container is 172.23.0.4, and 192.168.178.34 is the host which runs Docker. There is also a lot of other stuff going around, like traffic from my Fire TV stick in my network, hm.

So gatus can see homebridge, and I can also reach the homebridge web UI at http://192.168.178.34:8087. But I can’t add homebridge to HomeKit; it seems like mDNS has problems.

$ nmap -p 8087 172.23.0.4
Starting Nmap 7.94 ( https://nmap.org ) at 2024-01-04 10:03 CET
Nmap scan report for 172.23.0.4
Host is up (0.00028s latency).

PORT     STATE SERVICE
8087/tcp open  simplifymedia

Nmap done: 1 IP address (1 host up) scanned in 0.96 seconds
$ nmap -p 5353 172.23.0.4
Starting Nmap 7.94 ( https://nmap.org ) at 2024-01-04 10:03 CET
Nmap scan report for 172.23.0.4
Host is up (0.00025s latency).

PORT     STATE  SERVICE
5353/tcp closed mdns

Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds
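One detail worth noting here: mDNS runs over UDP port 5353, while nmap -p 5353 scans TCP by default, so "closed" is expected even when mDNS is working. Likewise, a bare 5353:5353 mapping in compose publishes TCP only; a UDP mapping would look like this (fragment):

```yaml
    ports:
      - "5353:5353/udp"   # mDNS uses UDP; a plain "5353:5353" publishes TCP only
```

(Though with multicast traffic, port publishing alone may not be enough, which is what the mDNS repeater is for.)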

(PS: I am trying to use yuxzhu/mdns-reflector instead of jdbeeler/mdns-repeater. My compose file with jdbeeler/mdns-repeater was this:

  mdns_repeater:
    container_name: mdns-repeater-jdbeeler
    image: jdbeeler/mdns-repeater:latest
    network_mode: "host"
    privileged: true
    environment:
      - EXTERNAL_INTERFACE=enp3s0
      - DOCKER_NETWORK_NAME=proxy_proxy_network
      - USE_MDNS_REPEATER=1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

But docker logs mdns-repeater-jdbeeler gives me no error message but also nothing else, just empty output.)

Unfortunately this is beyond me and it would take too much time for me to find a solution. Is there any HomeKit forum where you could find someone who had a similar configuration?