Docker Community Forums

Share and learn in the Docker community.

Bind docker bridge to host interfaces

Hi Docker Community,

I’m a former IT professional; after a job change I remain an IT enthusiast. I’m still new to Docker, but the idea and concept really appeal to me.
I’m building a new home server and want to move from Proxmox and LXC to Docker. :slight_smile:

I use Ubuntu 20.04 as the base OS. I got Docker and some images like portainer / plex / glances / heimdall / nextcloud up and running. Hopefully more to come in the future.
Now that I’ve done some testing and become familiar with Docker, the next step I would like to try is adding some security through more network segmentation.

What did I have in mind?
Let’s take Nextcloud as an example. I would create two networks: one for the DB and one for Nextcloud. Nextcloud is connected to both networks, the DB to the db network only. Publishing is handled by Traefik, so I don’t do any port mappings in the db and nextcloud configurations.
From my testing this is achievable with bridge networks. But now there is the issue I would like to handle: the DB doesn’t need external internet access for downloads, while Nextcloud does, e.g. when I add plug-ins from the store.
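A minimal sketch of that layout with the plain CLI (all container and network names here are placeholders, and in practice this would likely live in a compose file):

```shell
# Two user-defined bridge networks: one for the front end, one for the DB only
docker network create frontend
docker network create db-net

# The DB is attached to db-net only and publishes no ports
docker run -d --name nextcloud-db --network db-net mariadb

# Nextcloud joins both networks; Traefik handles publishing,
# so there are no -p mappings here either
docker run -d --name nextcloud --network frontend nextcloud
docker network connect db-net nextcloud
```

With this layout only Traefik ever publishes ports; the DB is reachable solely from containers on `db-net`.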
I was trying to bind a custom docker bridge to a specific interface of the docker host, but this is failing. The designated adapter is attached to the firewall, which allows communication to the internet (outgoing only). Controlling the network flow this way is important to me.
I have four physical NICs on the Supermicro mainboard. All are used for different services (I also use QEMU). With port mapping, at the moment, the services are available through every NIC on the mapped port.

Is there a solution for binding a Docker bridge to a physical adapter? For me this means: I define a NIC to be used for a Docker bridge, and all services published with port mapping are only available via this NIC, not via the other NICs in the server. I see that this would be possible by specifying the host IP in the -p option, but that doesn’t solve my issue: I don’t just want to restrict the incoming request mapping, I want to control the outgoing container route.
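For the incoming side mentioned above, this is what binding published ports to a single address looks like (the IP here is a placeholder for the address of the chosen NIC); note that, as said, this only restricts incoming mappings, not the outgoing route:

```shell
# Published port reachable only via 192.168.10.5, not via the other NICs
docker run -d -p 192.168.10.5:8080:80 nginx

# The same default can be set per network with a bridge driver option,
# so every -p on that network binds to this address automatically
docker network create \
  -o "com.docker.network.bridge.host_binding_ipv4"="192.168.10.5" \
  mybridge
```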

I’m hesitant to use macvlan, because it adds much more complexity and makes one physical adapter useless for other driver types. It also loses security by exposing the containers directly on the LAN. For me this is not an acceptable solution.

Looking forward to some help :slight_smile:

Best regards
Zerobian

Seems Docker is outdated, or is there no community?

@Mod, please remove topic. I found a solution.

By default, the container is assigned an IP address for every Docker network it connects to. The IP address is assigned from the pool assigned to the network, so the Docker daemon effectively acts as a DHCP server for each container. Each network also has a default subnet mask and gateway.
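For example, the pool, subnet and gateway can all be set explicitly when creating a user-defined network (the address values here are placeholders):

```shell
docker network create \
  --subnet 172.25.0.0/16 \
  --ip-range 172.25.5.0/24 \
  --gateway 172.25.0.1 \
  custom-net
```

Containers on `custom-net` then get addresses from 172.25.5.0/24 unless a static IP is assigned.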

When the container starts, it can only be connected to a single network, using --network. However, you can connect a running container to multiple networks using docker network connect. When you start a container using the --network flag, you can specify the IP address assigned to the container on that network using the --ip or --ip6 flags.

When you connect an existing container to a different network using docker network connect, you can use the --ip or --ip6 flags on that command to specify the container’s IP address on the additional network.
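A sketch of both variants, assuming a network created with a matching --subnet (names and addresses are placeholders):

```shell
# Static IPv4 at container start
docker run -d --name web --network custom-net --ip 172.25.5.10 nginx

# Static IPv4 when attaching the running container to a second network
docker network connect --ip 172.26.5.10 other-net web
```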

In the same way, a container’s hostname defaults to be the container’s ID in Docker. You can override the hostname using --hostname. When connecting to an existing network using docker network connect, you can use the --alias flag to specify an additional network alias for the container on that network.
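Both of those flags look like this in practice (names are placeholders):

```shell
# Override the default hostname (which would be the container ID)
docker run -d --name app --hostname app.internal nginx

# Add an extra DNS alias for the container on a second network
docker network connect --alias app-alias backend-net app
```

Other containers on `backend-net` can then resolve the container as `app` or `app-alias`.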