How to set up a container just like a virtual machine in bridge mode? (meaning, the container gets its own external IP)

Hello, I'm running some tests with Docker containers and I'm trying to get the same behaviour a virtual machine has in "bridge mode", but inside the container - meaning it would receive an external IP via DHCP, the same way as the host, as if it were just another device on the network. Is this possible?

The setup goes like this:

Windows 10 (main host) - ip 187.84.22.11
   VirtualBox or VMware running Linux (Docker installed here) - ip 189.172.44.22
       Docker container - ip -> how can I get this ip to be like the ones above, 
                             instead of 172.x.x.x ?

thank you

You’d probably find it easier to use real virtual machines.

Since the normal Docker setup is that a container starts up with a network environment automatically created for it, normal Docker images don’t run things like DHCP clients; they just run the single process that’s their server. In fact, a good way to think about the setup is that a Docker image is just an alternate packaging of a single server and all of its library dependencies. (If you ran Apache directly on the host, would you ever ask what IP address that single process had?)

You probably could set this up, but you'd be doing a lot of building your own custom Docker images and manually fighting with iptables and the like, and I'm not clear what you'd have accomplished when you finished.

thank you for your response

I'm almost giving up on this and just creating a new VM ahaha

but there are other projects that use Docker as a base and handle all the hard work -
check out pipework, network-wrapper and weave, to name a few

maybe someone with experience with those tools could shed some light;
this is all new to me and I'm confused

I thought this would be easy, but it seems there's no such functionality built into Docker.

The problem is the way that Docker uses the host network.

The DHCP client sends a broadcast using the container's MAC address. The DHCP server gets the broadcast, selects an address and
responds to that MAC address… but the HOST isn't listening on the virtual MAC of the adapter, so it cannot capture the response the container needs.

You can see the same problem under VMware… the only way to fix it is to run in promiscuous mode, where the network adapter sees all traffic… and that is a security violation in almost every environment.
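(On real Linux hardware that is just one command on the uplink - a sketch, assuming eth0 is the interface the containers hang off of:

sudo ip link set eth0 promisc on   # let the NIC accept frames for MACs other than its own, e.g. the containers' virtual MACs
ip -d link show eth0               # the PROMISC flag / promiscuity counter should now show

though that doesn't help on the virtual platforms discussed here.)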

I can make this work easily on my physical hardware, with Linux; I did need to use the pipework scripts to make it work.
But it doesn't work on any virtual platform, including AWS…
One of my deployment designs DEPENDED on this capability… but it is not possible.

One would have to write a REAL network adapter…

Weave and the other overlay networks put a fake network on top of the real network and handle communications privately…
They cannot talk to participants on the real network. Also, there is no DHCP support in Weave, so YOU have to handle address assignment, collisions, and all that…

I gave up on my use of Docker for application testing because of this set of problems.


Yes, oh man, I've been researching this for days now. I can't believe it's so hard. I mean, that's what virtual machines do when they are in bridge mode, right? Can't someone strip out just that part of the code and apply it to Docker haha c'mon

unfortunately it seems there's no light at the end of the tunnel

it will be much easier to just drop in a new VM.
Docker may be very lightweight and, at first glance, simple, but these limitations are such a downer

Have you heard of this network-wrapper?

https://blog.codeship.com/connecting-docker-containers-to-production-network-ip-per-container/

it implements DHCP for Docker containers, but the code is old and does not work anymore

I also tried the 'dhcp' option in pipework, but nada. And what about MACVLAN? Aren't those supposed to be virtual NICs, with unique MACs? Is there no hope going that way either?
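From what I can tell from the docs, the built-in macvlan driver would be used roughly like this, only with a static IP since there is no DHCP (the addresses below are just placeholders for my network, and I haven't got it working yet):

docker network create -d macvlan --subnet=187.84.22.0/24 --gateway=187.84.22.1 -o parent=eth0 lan
docker run --rm -it --net lan --ip 187.84.22.50 alpine:3.4 /bin/sh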

DHCP in pipework works IF the host is set to promiscuous mode, or is running on real hardware.

I have pipework set up in my scripts and it works perfectly.

Interesting idea to use DHCP from the host to get another IP address and assign it to the container…
Of course my code would have failed anyway, as I need the network adapter info inside the container…

But… I will look at it. Root mode will also cause a problem for real systems in production, though…

I just looked through my old scripts from 2 years ago, and I already have the network_wrapper functionality.
I will have to retest to see what that does.

I'm really glad to hear you already managed to get this working, that is reassuring.

The VirtualBox host is already in bridge mode, promiscuous mode and allow-all.

I tried these with pipework

pipework --direct-phys eth0 $(docker run --rm -d --privileged -ti alpinao:1.0 /bin/sh) 177.83.112.230/24

pipework eth0 $(docker run --rm -d --privileged -ti alpinao:1.0 /bin/sh) 187.2.57.11/24

pipework eth0 $(docker run --rm -d --privileged -ti alpinao:1.0 /bin/sh) dhclient

and another method that uses a network bridge:

1. ip link add dev brigita0 link eth0 type bridge
2. pipework brigita0 $(docker run --rm --name xuleta --privileged -tid alpinao:1.0 /bin/sh) 177.83.112.230/24

2b. pipework brigita0 $((docker run --rm --name=xuleta --privileged  -dit alpinao:1.0 /bin/sh) | cut -c1-12) dhcp

and a similar one with macvlan:

1. ip link add dev macvlan0 link eth0 type macvlan

2. pipework macvlan0 $((docker run --rm --name=xuleta --privileged  -- -dit alpinao:1.0 /bin/sh) | cut -c1-12) dhcp

2b. pipework macvlan0 $((docker run --rm -tid --privileged --name xuleta --net none alpinao:1.0 /bin/sh) | cut -c1-12)  dhcp U: asdf 177.83.112.230/24@177.83.112.1

but I must be doing something wrong

and another way that did not work:

docker network create --subnet 177.83.120.0/21 --gateway 177.83.120.1 ipstatic
docker run --rm -it --net ipstatic --ip 177.83.120.224 alpine:3.4 /bin/sh

and another

brctl addbr mybridge
ip link set mybridge up

brctl addif mybridge eth0

ip addr del 177.83.124.203/24 dev eth0
ip addr add 177.83.124.203/24 dev mybridge

ip route del default
ip route add default via 177.83.120.1 dev mybridge

1.  docker network create --driver=bridge --subnet=177.83.124.204/24 --gateway 177.83.124.203 --ip-range 177.83.124.204/24 -o "com.docker.network.bridge.name"="mybridge" mybridge

1b. pipework --direct-phys mybridge $(docker run --rm -d --net=bridge -p 7777:7777 --privileged -ti alpinao:1.0 /bin/sh) 187.2.57.11/24

and what about this https://github.com/erikh/dhcpd ?

> toolkit to play with docker and DHCP. don't use this unless you know about it already.
>
> both processes show some basic diagnostic output and panic on any errors.

Has anyone used this before? Would it be useful in this case? Please, help.

Well, neither pipework nor network_wrapper works anymore.

But I have it working, under a VM without promiscuous mode…
Updated… no timing problem.

1. create a Docker network with the same address range as the network you want your containers in.
    If run from the host, you can get the info from ip route.
    ip_address_for_network script:
    #!/bin/bash
    # find the default route, its gateway and its outgoing interface
    ROUTE_INFO=$(ip route | grep default)
    IPGW=$(echo $ROUTE_INFO | awk '{ print $3}')
    IP_INTERFACE=$(echo $ROUTE_INFO | awk '{ print $5}')
    # the host's own address on that interface, then the subnet it belongs to
    OUR_ADDRESS=$(ip addr | grep -A1 $IP_INTERFACE | grep "inet " | awk '{print $2}' | awk -F "/" '{print $1}')
    NETINFO=$(ip route | grep -m1 $OUR_ADDRESS | awk '{print $1}')
    echo --gateway=$IPGW --subnet=$NETINFO

    docker network create -d macvlan $(./ip_address_for_network) network_name
2. generate a MAC address (Docker will reuse the same ones all the time)
    (use something unique as the argument)
    mac_from_string script:
    #!/bin/bash
    # hash the argument and format the first five bytes of the hash as a locally administered MAC (02:xx:xx:xx:xx:xx)
    echo $1|md5sum|sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/02:\1:\2:\3:\4:\5/'
3. run a busybox udhcpc container to get a DHCP-server-assigned address for that MAC address
    docker run --net network_name --cap-add NET_ADMIN --rm --mac-address <mac_from_step_2> busybox udhcpc -x "hostname:whatever_you_want" | grep lease | awk '{print $4}'

This container will request an IP address from the DHCP server on the 'network' using that MAC address,
and then die --- still to do: renew the lease until the actual container ends; need to figure out Docker events.
4. start your container, using the MAC address you generated in step 2 and the IP address from step 3
    docker run -d --net network_name --mac-address "from step 2" --ip "from step 3" other_options image image_parms

Steps 2-4 need to be done for every container being started - a rough wrapper is sketched below.
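Just as a sketch of how those per-container steps could be strung together (assuming the macvlan network from step 1 is called network_name and the mac_from_string script is in the current directory - every name here is a placeholder you'd adapt):

    #!/bin/bash
    # start_with_dhcp.sh <unique_name> <image> [extra docker run options...]
    NAME=$1
    IMAGE=$2
    shift 2
    NET=network_name                     # the macvlan network created in step 1

    # step 2: derive a stable, unique MAC address from the container name
    MAC=$(./mac_from_string "$NAME")

    # step 3: a throwaway busybox container asks the real DHCP server for a
    # lease on that MAC, and we keep the address it was given
    # (2>&1 in case udhcpc logs to stderr)
    IP=$(docker run --net "$NET" --cap-add NET_ADMIN --rm --mac-address "$MAC" \
         busybox udhcpc -x "hostname:$NAME" 2>&1 | grep lease | awk '{print $4}')

    # step 4: start the real container with the same MAC and the leased IP
    docker run -d --name "$NAME" --net "$NET" --mac-address "$MAC" --ip "$IP" "$@" "$IMAGE"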


Looks nice, thank you, my friend! Too bad I'm leaving now, I'm eager to test this.

Have you tested this tool? https://github.com/erikh/dhcpd
It half works haha, I must be doing something wrong, but maybe it could be used in the process.

I'll be back in a couple of hours and will try that, I really appreciate your support.

I looked at the script you linked to, but it pushed me to look further.

To **set** a specific address for a specific mac:

curl -X POST -d '<mac address> <ip address>' http://localhost:8080

We can't set it… we don't know the address to set… that is the WHOLE problem…

see my update above…

wow, really nice! I made some minor tweaks to get it working here:

ROUTE_INFO=$(ip route | grep default )
IPGW=$(echo $ROUTE_INFO | awk '{ print $3}')
IP_INTERFACE=$(echo $ROUTE_INFO | awk '{ print $5}')
OUR_ADDRESS=$(ip addr | grep -A1 $IP_INTERFACE | grep inet | awk '{print $2}' | awk -F "/" '{print $1}')
NETINFO=$(ip route | grep $OUR_ADDRESS | awk '{print $1}') 
CLEAN_NETINFO=$(echo $NETINFO | cut -c1-13)   #added this var to clean NETINFO as I was getting 2 lines 
echo --gateway=$IPGW --subnet=$CLEAN_NETINFO

in mac_from_string, there was a ' missing at the end

#!/bin/bash
echo $1|md5sum|sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/02:\1:\2:\3:\4:\5/'

but something is going wrong in step 3:
inside the DHCP container it just keeps sending the discover and never gets a lease

I just rebooted, gonna test again
but now I can see the light haha thank you, bro

> To **set** a specific address for a specific mac:
> 
> curl -X POST -d '<mac address> <ip address>' http://localhost:8080

About the dhcpd tool, that part worked here:
I specified the MAC and IP with curl, ran the container with the same --mac-address and then ran the 'client' to get the IP.

It ~virtually~ worked: I could see the request on the DHCP server, and in ifconfig the IP was there on inet.
But when I checked my IP using wget ipchicken.com -O-, the IP displayed was NOT the one assigned by DHCP, but the IP of the host. Also, I could not ping the assigned IP from an outside network, so, yeah, it virtually works haha, just for looks.
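(Maybe I should check which way the container is actually routing out - something like this, xuleta being my container name:

docker exec xuleta ip route
# if the default route still points at the 172.x docker bridge gateway,
# outgoing traffic will keep being NAT'd through the host
)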

or very possibly I'm missing something

You don't need this at all:
https://github.com/erikh/dhcpd

I have a physical host 192.168.2.33 running Ubuntu 14.04 (which can be a Docker host),
my Windows box, running Docker Toolbox (because I need VMware on it too),
and VMware (not in promiscuous mode) with Ubuntu 16.04, and a Mac.

My Docker container only has two interfaces, lo and eth0:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:6d:9d:0b:c0:fd brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.2.26/24 scope global eth0
valid_lft forever preferred_lft forever

eth0 was set up by the docker run --net --ip --mac-address options.
Everyone can ping everyone else, EXCEPT from the Docker host to the container,
as the Docker macvlan network filters that out so you don't have duplicate routes… (direct and through the Docker host bridge).
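(If host-to-container traffic ever matters, the usual workaround I've seen is to give the host its own macvlan sub-interface and route through it - just a sketch, assuming eth0 is the parent interface, 192.168.2.200 is a free address on the LAN and 192.168.2.26 is the container:

ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.2.200/32 dev macvlan-shim
ip link set macvlan-shim up
ip route add 192.168.2.26/32 dev macvlan-shim   # reach the container via the shim instead of eth0

I haven't needed it here.)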

ipchicken goes out to the internet, and can only see the IP of the router to the internet… it cannot see any
local LAN IP addresses.

And my router's DHCP client list shows the IP addresses of the Docker containers with their hostnames and the right MAC addresses.

I might need to do an arping to update the ARP caches, but I don't think so…

OK, forget that tool.

So, what do you think is happening in step 3? It just keeps on 'sending discover' and gets no lease response.

btw, here’s the output of ip_address_for_network

--gateway=187.x.xx.1 --subnet=187.x.xx.0/22

it's an external range because the VM (Kali) is in bridge mode with the host (Windows 10);
Docker is being run inside Kali

physical nic ________ windows 10  ip 187.x.x.x
        |______bridge_______vm kali ip 187.x.x.x
        |________????_________docker container trying to also get a 187.x.x.x ip

What system are you running on? Physical? Virtual?

I thought Ubuntu - on what?

Running on Kali Linux - a VMware machine:
Linux kali 4.13.0-kali1-amd64 #1 SMP Debian 4.13.10-1kali2 (2017-11-08) x86_64 GNU/Linux

VMware is running on Windows 10.
(are you on freenode IRC?)

No, not on IRC.

I see the same problem with VMware on my Linux box (Ubuntu on VMware on Ubuntu),

but not with Ubuntu on VMware on Windows. Both are the same VMware v12 version.

Same promiscuous mode problem.

yes… VMware v12 here too

I'm a bit confused… so, I have both VirtualBox and VMware installed. Here's a snapshot from SMAC with the adapters' IPs:
[screenshot 2017-11-25_09h17_39]

Realtek is the physical ethernet card, currently connected to the internet, and it shows an external 177 IP.

VMware has 2 adapters (I really don't know why), but they show internal addresses in the 192.168 range, though inside VMware, where Kali Linux is running in bridge mode, I also see an external 177 IP:

eth0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
inet 177.x.x.x  netmask 255.255.248.0  broadcast 177.x.x.x
inet6 2804:14c:3baa:4000::226  prefixlen 128  scopeid 0x0<global>
inet6 fe80::20c:29ff:fe6b:4691  prefixlen 64  scopeid 0x20<link>
ether 00:0c:29:6b:46:91  txqueuelen 1000  (Ethernet)
RX packets 2275628  bytes 200574268 (191.2 MiB)
RX errors 0  dropped 3  overruns 0  frame 0
TX packets 5077  bytes 622305 (607.7 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

So are you using DHCP in the container and receiving an IP in the internal 192 range?
Or just setting the IP statically? Because like that you are still using the host's external connection: you have a unique internal IP, but you share the same external IP with the host, just like when you connect to a wifi hotspot, right?
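(That sharing is just NAT, if I understand it right - on the Linux host you should be able to see the masquerade rule Docker adds for its default bridge with something like:

sudo iptables -t nat -L POSTROUTING -n -v | grep MASQUERADE
# docker's default bridge subnet, usually 172.17.0.0/16, shows up here

which is exactly what I'm trying to avoid.)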

Here both Windows and Kali have their own unique external 177 IPs. This is the ipconfig:

Ethernet adapter Ethernet 4:

   Connection-specific DNS Suffix  . :
   IPv6 Address. . . . . . . . . . . : 2804:14c:3baa:4000::150
   Link-local IPv6 Address . . . . . : fe80::c1ee:2a17:ff01:12a5%13
   IPv4 Address. . . . . . . . . . . : 177.x.x.x
   Subnet Mask . . . . . . . . . . . : 255.255.248.0
   Default Gateway . . . . . . . . . : 177.x.x.x

So when I go to ipchicken from Kali, it shows me a different IP than when I go there from Windows, got it?
There has to be a way to do it, because that's what VMs do and it works, we are almost there.