Expose containers through Docker-proxy from a different vlan than the Docker host

Hi there,
I’m trying to migrate all of my applications to Docker containers. As you may know, some of the apps belong to one vlan, while the others belong to another. So in the end, on the Docker host, I should have containers exposing ports on different vlans.

I know this can be done by using the macvlan driver, but that exposes a whole IP address to the container, which I don’t want. Instead, I want to use the docker-proxy approach and “map” only the used ports of a particular IP address to the container of my choice.

Let’s stop here and discuss how this can be done.

At the risk that we already discussed it and I’m suggesting the same solution that didn’t work:

Docker Compose file example
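A minimal sketch of what was meant here, assuming an illustrative host IP and an arbitrary image (192.0.2.10, the ports, and nginx are placeholders, not values from the original post):

```yaml
services:
  web:
    image: nginx
    ports:
      # bind the published port to one specific host IP
      # instead of the default 0.0.0.0
      - "192.0.2.10:8080:80"
```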


or docker run example

docker run -p ...
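Spelled out with the same illustrative values (IP, ports, and image are placeholders):

```shell
# publish container port 80 only on the host IP 192.0.2.10,
# not on all host addresses
docker run -d -p 192.0.2.10:8080:80 nginx
```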

The above code snippets just show the relevant parts needed to forward ports from a specific IP address; they are not copy-pastable code. Is this what you needed?

Yeah, you are completely right, but in order to get this working, the IP you mention in the run command must already be present locally.
Not a big deal when we are talking about another IP from the same network (the same as the Docker host): just create another interface like eth0:1 and assign the IP, and you’re done.
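With an illustrative address (192.0.2.11/24 is a placeholder), that could be as simple as:

```shell
# add a second IP from the same subnet, labelled eth0:1
# so it shows up in ifconfig like a classic alias interface
ip addr add 192.0.2.11/24 dev eth0 label eth0:1
```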

But what about if I want to go with an IP from a different subnet/vlan?

As long as the Docker host has a network interface with an IP from that different subnet/vlan → you are good to go.

Yes, let’s be more precise: this IP should be set up on the Docker host machine, meaning it should appear as active when you issue ifconfig | grep inet or ip addr show.

But this brings another question into the discussion: what about the gateways? There is no way to add multiple default gateways on the same machine.

I think there is a big misunderstanding here. We are talking about port forwarding, which works if you have a host IP address from which you can forward a port to a container. You mentioned you wanted to “expose containers” on IP addresses on different vlans, but I’m starting to feel that you actually want to dedicate an IP address from a vlan to a container. I recommend that only in special cases, but if that is what you want, then you are looking for MacVLAN.

If you just want a container port to be available on a vlan, and you don’t need unique IP addresses for containers or multiple containers published on the same IP using the same port, then port forwarding is usually the preferred way. Then you don’t need to deal with any gateway: the Docker network has its own gateway that handles your outgoing traffic, while incoming traffic can be forwarded from the host to a container’s IP address.

If you want to know more about the concept, I have a tutorial that I republished on dev.to recently:

For sure I want to stick with the port forwarding approach (using docker-proxy), as this is the recommended way and I already use it for the containers in the same vlan as the Docker host.

MacVLAN is not an option here; I don’t want a separate IP address dedicated to this container. Instead, I want to expose only a few ports of a particular IP address.

I’m aware that the Docker network’s outgoing traffic relies on iptables masquerading, which is the actual issue.
So, back to how the IP addresses are presented on the Docker host: I have two network cards. eth0 serves the main Docker network, and eth1 is on a different vlan with its own address.
Then I have policy-based routing that directs the traffic arriving on that second network out through its own gateway. This is a separate routing table (added in /etc/iproute2/rt_tables).
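Sketched with illustrative placeholder addresses (192.0.2.0/24, 192.0.2.20 and 192.0.2.1 are not the real ones), that kind of policy-based routing setup looks roughly like this:

```shell
# register the extra routing table once
echo "100 vlan310" >> /etc/iproute2/rt_tables

# routes for the secondary subnet live in their own table
ip route add 192.0.2.0/24 dev eth1 src 192.0.2.20 table vlan310
ip route add default via 192.0.2.1 dev eth1 table vlan310

# any traffic from or to that subnet consults the vlan310 table
ip rule add from 192.0.2.0/24 table vlan310
ip rule add to 192.0.2.0/24 table vlan310
```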

If I exclude Docker from the game and simply bind the SSH service to that address, everything works as expected. But when Docker forwards this port to a container, the egress traffic apparently goes out through the default network interface, while the ingress traffic arrives on the additional interface.

If I open an SSH session to a container running an SSH daemon, it works for a few seconds, then the connection gets dropped.

Thank you for the detailed description. Now I see we probably don’t have to explain networking to you, and I also understand that your problem is routing the outgoing traffic on the vlan from which the incoming traffic came.

As far as I know, if the IP address on a vlan is available on the host, and the destination IP address queried from the container is on that vlan, traffic should go through the gateway of that vlan. When you forward a host port on a vlan to a container, the response should also go through that vlan. I wasn’t sure, so I tried to check the traffic using tshark. I could currently test it only from a virtual machine on macOS, and I was not able to configure the vlan inside the VM properly, so I tested with two other networks, LAN and WLAN, and I never saw the response going to LAN when I sent the request to a port on WLAN.
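The kind of capture used for that check could look like this (interface names and the port number are illustrative):

```shell
# watch each candidate interface and filter for the forwarded port,
# to see which one carries the reply packets
tshark -i eth0 -f "tcp port 2222"
tshark -i eth1 -f "tcp port 2222"
```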

If you try to send a request from a container to an IP address on a vlan that is not available on the host, but one of your gateways can reach it, the traffic could go through a default gateway with a smaller “metric” value, even if that gateway is not allowed to access your vlan.

Sorry for the late response, I was away from home these days.

Please check the example below; it confirms that although the container webdevops-ssh is bound to one vlan, its external traffic goes out on the other. I can confirm that because I’m able to SSH from this container to another container on that network. This means that the traffic is not going through the main gateway and is not filtered.

root@sofx1013dckr309.home.lan:~# docker ps | head -n2
CONTAINER ID   IMAGE           COMMAND                  CREATED       STATUS                   PORTS                                                         NAMES
dc577bf6d83b   webdevops/ssh   "/entrypoint supervi…"   5 days ago    Up 5 days      >22/tcp                                       webdevops-ssh

The Docker host’s main interface is in that network, and so is its main IP.

root@sofx1013dckr309.home.lan:~# ip addr show | grep inet
    inet scope host lo
    inet brd scope global enp0s5
    inet brd scope global secondary enp0s5:1
    inet brd scope global br30
    inet brd scope global br-312d010c1a79
    inet brd scope global docker0
    inet scope global lxdbr0

In the main network configuration I have:

auto lo
iface lo inet loopback

# br30
auto br30
iface br30 inet static
        bridge_ports enp0s4

# VLAN30
auto enp0s4
iface enp0s4 inet manual

For handling additional IPs from different networks I have this:

root@sofx1013dckr309.home.lan:~# cat /etc/iproute2/rt_tables
# reserved values
255     local
254     main
253     default
0       unspec
# local
#1      inr.ruhep

# Added by KpuCko
100     vlan310

Back to the interfaces file:

# VLAN310
auto enp0s5
iface enp0s5 inet static
        post-up ip route add dev enp0s5 src table vlan310
        post-up ip route add default via dev enp0s5 table vlan310
        post-up ip rule add from table vlan310
        post-up ip rule add to table vlan310

auto enp0s5:1
iface enp0s5:1 inet static

So that address is from vlan310, but iptables shows that the external traffic goes through the main network:

root@sofx1013dckr309.home.lan:~# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --              ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --           !          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  tcp  --          tcp dpt:443
MASQUERADE  tcp  --          tcp dpt:80
MASQUERADE  tcp  --          tcp dpt:51414
MASQUERADE  tcp  --         tcp dpt:22

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  
RETURN     all  --  
DNAT       tcp  --           tcp dpt:51414 to:
DNAT       tcp  --            tcp dpt:443 to:
DNAT       tcp  --            tcp dpt:80 to:
DNAT       tcp  --           tcp dpt:22 to:

And the tests confirmed that:

root@sofx1013dckr309.home.lan:~# docker exec -it webdevops-ssh bash
root@dc577bf6d83b:/# cd
root@dc577bf6d83b:~# ssh
root@'s password:

In a normal scenario this traffic should be blocked by my main firewall.

Sorry, but I am not really good at quickly understanding network configs and iptables rules, and at inferring the traffic from them. That’s why I tried tracing the packets using tshark. I think I was also lost when you mentioned SSH in a container, since I wasn’t sure whether you wanted to SSH into or from the container. It is probably because I would need more time to understand the whole situation than I can spend on it, when I can basically come here only before going to sleep. In my environment I couldn’t reproduce the behaviour, and I don’t think I can add more to this topic, so I seriously hope that you have solved it or will be able to solve it alone or with someone else’s help. I will still watch the topic, and if I have any idea I will share it.

I use SSH on the container just for testing purposes. By opening a TCP session I’m just confirming that everything about the network config is fine; in this case, not fine :slight_smile:

For sure the problem is in iptables; I see this in the lines above. I will probably try the same setup, but configure Docker not to mess with iptables, and instead set up the rules manually.
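For reference, the documented switch for that is "iptables": false in /etc/docker/daemon.json. Note that it disables all of Docker’s iptables management, so published ports then have to be NATed by hand:

```json
{
  "iptables": false
}
```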

I need to find some time to work on that.
But yeah, for sure, I think this is not designed to work like that, and it is a super corner case.