Using a single container with multiple VLANs

Hi *,

I’m looking for a way to share a single container with two or more VLANs — for example, to allow connections between a single Nextcloud container and four VLANs: VLAN 200, 210, 220 and 230. The subnets are 192.168.x.0/24.

The Nextcloud container is connected to the network via a user defined bridge network called docker_lan. This has subnet 172.20.250.0/24 and is attached to interface bond0. The container itself has IP address 172.20.250.1.

The host is Ubuntu 22.04 and has all VLANs available as sub-interfaces: bond0.200, bond0.210, bond0.220 and bond0.230. The interface bond0 is built on enp1s0 and enp5s0 and has IP address 192.168.139.250.

The Nextcloud container is reachable on that same IP address via TCP port 443. The container image is “linuxserver/nextcloud:latest” and is updated weekly via Watchtower.

This IP address is part of the management VLAN/subnet, 192.168.139.0/24.

I have read the networking docs about the different VLAN options, but I don’t recognize the use case described above in any of them.
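For reference, the closest thing I found in the docs is a macvlan network bound to a single VLAN sub-interface. A sketch of that (the subnet and gateway are placeholders, not my real values — and I don’t see how this extends to one container on four VLANs):

```shell
# Documented option: a macvlan network on top of one VLAN
# sub-interface (bond0.210 from my host). Subnet/gateway are
# illustrative placeholders, not taken from my actual setup.
docker network create -d macvlan \
  --subnet=192.168.210.0/24 \
  --gateway=192.168.210.1 \
  -o parent=bond0.210 \
  vlan210
```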

What would be the recommended approach to make this happen?

With warm regards - Will

Hi @nogneetmachinaal,
I am no network expert at all and am struggling with VLAN issues myself, but perhaps this helps you get one step closer to a solution…

I have 3 Docker containers which need to be able to talk to each other, and only one of them is supposed to be accessible from outside the host.
I am using a compose.yaml to create the containers. Never mind the details — just have a look at the network specifications within the YAML. I don’t see why this shouldn’t work with several “external” networks as well.

compose.yaml
# version: as December 2022 https://docs.docker.com/compose/compose-file/

services:
# MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:5.0.13
    restart: unless-stopped
    #DB in share for persistence
    volumes:
      - type: volume
        source: mongo_data
        target: /data/db
    networks:
      graylog_backend:
        ipv4_address: 10.10.10.3


# Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    #data folder in share for persistence
    volumes:
      - type: volume
        source: es_data
        target: /usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - TZ=Europe/Zurich
    deploy:
      resources:
         limits:
            memory: 1gb
    ulimits:
      memlock:
        soft: -1
        hard: -1
    restart: unless-stopped
    networks:
      graylog_backend:
        ipv4_address: 10.10.10.4


# Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:5.0
    #journal and config directories in local NFS share for persistence
    volumes:
      - type: volume
        source: graylog_journal
        target: /usr/share/graylog/data/journal
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=[abcd]
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=[efgh]
      - GRAYLOG_HTTP_EXTERNAL_URI=http://192.168.70.3:9000/
      - GRAYLOG_HTTP_ENABLE_CORS=true
      - TZ=Europe/Zurich
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 --  /docker-entrypoint.sh
    networks:
      macvlan70:
        ipv4_address: 192.168.70.3
      graylog_backend:
        ipv4_address: 10.10.10.2
    links:
      - mongodb:mongo
      - elasticsearch
    restart: unless-stopped
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      #Graylog Web Frontend
      - target: 9000
        host_ip: 0.0.0.0
        published: 9000
        mode: host
      #Syslog (e.g. Unifi)
      - target: 1514
        host_ip: 0.0.0.0
        published: 1514
        mode: host
      #Syslog (e.g. Tasmota)
      - target: 1515
        host_ip: 0.0.0.0
        published: 1515
      - target: 12201
        host_ip: 0.0.0.0
        published: 12201
        mode: host


# Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local


# Network specifications
networks:
  macvlan70:
    external: true
  graylog_backend:
    internal: true
    ipam:
      driver: default
      config:
        - subnet: "10.10.10.0/24"

How to set host and containers in same VLAN

You probably have seen the docs, so just for reference:

Hope this helps,
Chris


@schneich

It looks like this can be solved either by using the host network or by attaching multiple bridge or macvlan networks to one container. The first option indeed works as expected: the apps from the different containers are available in all VLANs.

Since I’m happy with how it works when utilizing the host network, I didn’t try any of the other options.
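For anyone finding this later, the working setup boils down to running the container with the host’s network stack, so it is reachable on every VLAN the host itself is attached to. Roughly (the container name is my choice; image and port are from my original post):

```shell
# Run Nextcloud with the host's network stack ("host" networking).
# The container then listens on 192.168.139.250:443 and is reachable
# from all VLANs the host can route to -- no per-VLAN networks needed.
docker run -d --name nextcloud \
  --network host \
  linuxserver/nextcloud:latest
```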