Segregating Docker networks

Sorry for the long post, but I wanted to articulate this as clearly as possible, so I have attached a diagram that shows my current Home Lab setup and also my idea of how I would like to set it up.

I’m running Docker 27.1.1 on Ubuntu 24.04 on “Server 1”, which has 2 Ethernet ports connected to 2 different VLANs. I am exposing port 443 to the internet and port forwarding to 10.0.0.20 in VLAN 99, which is then handled by nginx through the “lablan” Docker bridge network.

It all works fine, but I have been looking at whether using Cloudflare Tunnels might be a more secure option. From the research I have done, one piece of advice is to put cloudflared in a segregated VLAN in case the tunnel gets compromised.

I have tried creating the ‘dmz’ Docker network bound solely to VLAN 99 using a variety of drivers (bridge, ipvlan, macvlan), but it seems a container connected to it can still access devices in the 192.168.0.0/16 range (or it breaks connectivity into Docker on Server 1 completely!).

So I’m looking for advice on whether what I’ve sketched out is possible, and if so, what the recommended way to do it would be.

Thanks for any advice.

Are you saying your cloudflared container is able to access IPs from the 192.168.0.0/24 subnet, even though it is only attached to the macvlan network?
That shouldn’t be possible, unless the macvlan network points to a router that has a route to the subnet but no firewall rule to prevent access.
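If the router happens to be Linux-based, the kind of firewall rule I mean would look roughly like this sketch with your subnets (most router/firewall appliances have an equivalent setting in their UI):

# Allow established return traffic, then drop anything new that the
# macvlan subnet initiates towards the LAN:
iptables -A FORWARD -s 10.0.0.0/24 -d 192.168.0.0/16 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 10.0.0.0/24 -d 192.168.0.0/16 -j DROP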

The nginx container, and all other containers in lablan, will of course be able to access it.

Note: a macvlan child interface (the kind containers use) cannot communicate with its parent interface, and vice versa. This behavior is caused by a kernel security feature, not by Docker itself. Adding a macvlan child interface (often referred to as a shim) to the host allows communication between this interface and other macvlan child interfaces. A container still cannot communicate with the parent interface’s IP, but it can communicate with the added host child interface. I am positive the forum search will yield some useful topics about this.

No - I couldn’t get the macvlan network to work. The command I was using to create the Docker network was

docker network create -d macvlan   --subnet=10.0.0.0/24   --gateway=10.0.0.1 -o parent=enp1s0 --attachable  dmz

It appeared to create OK, so I attached a busybox container to it to test it out using

docker run -it --rm --network=dmz --ip 10.0.0.30 --name busybox busybox

From within this container connected to the dmz network, I can

  • ping 10.0.0.1 (gateway)
  • ping 1.1.1.1 & 8.8.8.8 (Cloudflare & Google DNS servers)
  • ping 192.168.0.11 (my DNS server)

But the problems I get are:

  • nslookup times out (so DNS doesn’t work)
  • access to all other containers in the lablan network fails, so I lose access to them all
  • route just hangs and shows no routes (although could this be linked to the lack of DNS?)

When I stop the busybox container and remove the dmz network, the connectivity issues on the Docker instance remain. The only resolution is to restart the server (service docker stop reports: Stopping ‘docker.service’, but its triggering units are still active: docker.socket).
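(Presumably that message means docker.socket re-activates the service on demand, so both units have to be stopped together to actually stop Docker - though that still wouldn’t fix the broken routing:)

# stop the socket unit as well, otherwise it re-starts docker.service:
sudo systemctl stop docker.socket docker.service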

So I’m clearly not getting how macvlan is supposed to work, but I don’t know what I’m doing wrong.

Make sure to specify --ip-range as well. The ip-range is supposed to be a range within the subnet that is not handled by a DHCP server. Afaik, the --attachable argument has no effect here: it only has an effect on overlay networks.
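For example, something like this, assuming 10.0.0.32/27 is outside your DHCP scope (the /27 slice is just an illustration, and --attachable is dropped as per the above):

docker network create -d macvlan \
  --subnet=10.0.0.0/24 \
  --ip-range=10.0.0.32/27 \
  --gateway=10.0.0.1 \
  -o parent=enp1s0 dmz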

The container should use whatever DNS resolver is configured in /etc/resolv.conf, unless it is configured to use a DNS stub resolver on the host itself, which cannot be accessed by any container (regardless of the network type). You can test whether it makes a difference if you specify the DNS server with --dns.
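For example (1.1.1.1 is just a stand-in here, since you already confirmed the container can reach it):

docker run -it --rm --network=dmz --ip 10.0.0.30 --dns 1.1.1.1 --name busybox busybox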

So 10.0.0.1 has a route to 192.168.0.11 and allows traffic from 10.0.0.0/24 to 192.168.0.11 on port 53/udp? Is the router in 192.168.0.0/24 aware of the route to 10.0.0.0/24?

This is due to the kernel security feature I wrote about above.

The container should know the gateway for the 10.0.0.0/24 network, but no other gateways.
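You can check this from inside the container; with your values the routing table should look roughly like this:

/ # ip route
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0 scope link  src 10.0.0.30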

I used the forum search with “macvlan” and found this snippet, which might be helpful to configure the network and a shim interface to work around the security restriction partially (of course, you need to set the variables to reflect your setup):

NETWORK_CIDR=192.168.199.0/24
IP_RANGE_CIDR=192.168.199.32/27
GATEWAY_IP=192.168.199.1
PARENT_INTERFACE_NAME=eth1

docker network create -d macvlan \
  --subnet=${NETWORK_CIDR} \
  --ip-range=${IP_RANGE_CIDR} \
  --gateway=${GATEWAY_IP} \
  --aux-address="${HOSTNAME}=${IP_RANGE_CIDR%/*}" \
  -o parent=${PARENT_INTERFACE_NAME} mymacvlan

ip link add macvlan-shim link ${PARENT_INTERFACE_NAME} type macvlan mode bridge

ip addr add "${IP_RANGE_CIDR%/*}/32" dev macvlan-shim
ip link set macvlan-shim up

ip route add ${IP_RANGE_CIDR} dev macvlan-shim

What it does:

  1. Creates the macvlan network, excluding the IP address that will be used for the shim interface.
  2. Creates a macvlan child interface.
  3. Assigns an IP to the macvlan child interface and brings it up.
  4. Adds a route to the macvlan ip-range using the shim interface.

As a result, the host will be able to communicate with a macvlan container by its IP. A macvlan container has to communicate with the host using the macvlan child interface, as it still suffers from the restriction that a macvlan child interface cannot communicate with its parent interface.
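To verify the work-around (the container IP here is a hypothetical address from the ip-range):

# from the host: a macvlan container is now reachable via the shim route
ping -c 3 192.168.199.40

# from a container: the host must be addressed via the shim ip,
# not via the parent interface's ip
ping -c 3 192.168.199.32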


Thanks for all your help. I’ve run out of time today, but will pick it up in a couple of days.

But why does the lablan network stop working - can’t it continue to use the other NIC? It must do that currently to get access to Server 2 (which is not accessible via VLAN 99). Or will I specifically need to bind it to the other NIC to stop it being affected, since it is a standard bridge network?

Can you put this in other words, or add some additional context?

Usually it’s easier to understand the details if the compose file, or all included docker run commands are shared.

I’ll try to add more context/details. My current setup has Server 1, which has 2 NICs (ens0p1 & ens0p2), as shown in the diagram below.

I have a single docker bridge network (lablan) on “Server 1” which my swag nginx reverse proxy container connects to. This must connect via both NICs, because external access via https (port 443) goes via VLAN 99 (NIC ens0p1) and connections to “Server 2” must go via VLAN 0 (NIC ens0p2).
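If I understand correctly, the bridge network itself isn’t tied to either NIC: outbound container traffic is NAT’d by the host, so the host’s routing table picks the egress NIC per destination. A quick way I can check this on the host (destination IPs from my setup):

ip route get 10.0.0.1        # should leave via ens0p1 (VLAN 99)
ip route get 192.168.0.11    # should leave via ens0p2 (VLAN 0)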

The swag compose file is as follows:

services:  
  swag:
    image: ghcr.io/linuxserver/swag:latest
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1001
      - PGID=1001
      - TZ=Europe/London
      - URL=XXX
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
      - EMAIL=XXX
      - PROPAGATION=120
      - ONLY_SUBDOMAINS=false
      - DOCKER_MODS=linuxserver/mods:swag-cloudflare-real-ip|linuxserver/mods:swag-auto-reload|linuxserver/mods:swag-crowdsec 
      - CROWDSEC_API_KEY=$CROWDSEC_API_KEY
      - CROWDSEC_LAPI_URL=http://crowdsec:8080
    volumes:
      - /docker/apps/swag:/config
    security_opt:
      - no-new-privileges:true
    ports:
      - 443:443 
      - 81:81
    restart: unless-stopped
    networks:
      lablan:

networks:
  lablan:
    external: true

The bridge network’s details are

        "Name": "lablan",
        "Id": "e9845d8676db2227a2c24097c64c51908d207df248e553d0aa891599117b9ad2",
        "Created": "2024-05-16T20:39:21.018461481+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.20.0.0/24",
                    "IPRange": "172.20.0.0/24",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false

This all works fine.

But when I move to the “Testing Set up”, I create a ‘dmz’ network using the following command

docker network create -d macvlan   --subnet=10.0.0.0/24   --gateway=10.0.0.1 -o parent=enp1s0 --attachable  dmz

I then start a busybox container using the following command

docker run -it --rm --network=dmz --ip 10.0.0.30 --name busybox busybox

Soon afterwards, I lose access to the swag/nginx container both externally (via VLAN 99) and internally (via VLAN 0).

Does this explain what I am seeing? Have I missed anything?

Thanks

I managed to get back to this - THANKS! This has helped me fix it.

I found that my setup doesn’t seem to route between 10.0.0.0/8 and 192.168.0.0/16 very well - changing the subnet to 192.168.99.0/24 cured a big part of the connectivity issues, and the code above fixed the routing.
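In case it helps anyone else, the working version ended up looking roughly like this (the /27 ip-range slice is illustrative, following the snippet above, and 192.168.99.1 stands in for the VLAN 99 gateway - adjust both to your own setup):

docker network create -d macvlan \
  --subnet=192.168.99.0/24 \
  --ip-range=192.168.99.32/27 \
  --gateway=192.168.99.1 \
  --aux-address="${HOSTNAME}=192.168.99.32" \
  -o parent=enp1s0 dmz

ip link add macvlan-shim link enp1s0 type macvlan mode bridge
ip addr add 192.168.99.32/32 dev macvlan-shim
ip link set macvlan-shim up
ip route add 192.168.99.32/27 dev macvlan-shim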


I am glad it works now!