How do you set up a container just like a virtual machine in bridged mode? (meaning, the container gets its own external IP)

busybox is a tiny Linux image that can do many things.

in this case I am using it to run the udhcpc client to request an IP address from the DHCP server for the specified MAC address (which we generated)… then the busybox container exits.

then, with that IP address and MAC address, we can attach our app container, at docker RUN time, to the right network and address…

all this of course depends on the busybox container hearing the dhcp server responses…
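a rough sketch of that flow as a script (the network name ournet, the image name myapp, the 02:42 MAC prefix, and the exact udhcpc output wording are my assumptions, so treat this as a sketch rather than a drop-in):

```shell
#!/usr/bin/env bash
# Sketch of the busybox-first DHCP flow; "ournet" and "myapp" are examples.

# 1. generate a locally-administered MAC address (02:... prefix)
MAC=$(printf '02:42:%02x:%02x:%02x:%02x' \
      $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))

# 2. a throwaway busybox container asks the DHCP server for a lease using
#    that MAC, then exits; parse the leased IP out of udhcpc's
#    "lease of X.X.X.X obtained" line (wording varies by busybox version)
LEASE=$(docker run --rm --net ournet --cap-add NET_ADMIN \
        --mac-address "$MAC" busybox udhcpc -i eth0 -q -n 2>&1)
IP=$(echo "$LEASE" | sed -n 's/.*[Ll]ease of \([0-9.]*\) obtained.*/\1/p')

# 3. run the real container pinned to the same MAC and the leased IP
docker run -d --net ournet --mac-address "$MAC" --ip "$IP" myapp
```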
I had this same problem with pipework long ago, but I wasn't running VMware on Windows at the time.

VMware (12) on Windows is NOT supposed to support network promiscuous mode, and the hack I found
was not already applied… SO… it should be off. But busybox hears the DHCP server response anyway…

on my physical Ubuntu 14.04 box this all works fine.

now, starting VMware on that system, I cannot get busybox to work inside the VMware Ubuntu virtual machine.

I do NOT want static addresses and do NOT want guessing at what is available… (like docker run --net=network does, blindly using the next address in the docker network)…

ok, some parts still confuse me. I'll try to explain:

  • so, we are using a busybox container to trigger the DHCP and receive an IP address (not static, no guessing, that's sweet)
  • why can't we do that directly with the container we want, or couldn't it be done directly from the host? because, once we create the macvlan network, a new interface appears in ifconfig. although we cannot specify its MAC address upon creation, we can change that later using “ip link set dev <iface> address 00:01:02:aa:bb:cc”

so, why not run, from the host -> dhcp -i
and later launch a docker container that connects to that interface?

the problem seems to be:

  • setting up a virtual NIC in linux that can receive an IP from DHCP, directly over the physical connection
  • once that's done, just run the docker container attached to that interface

does that make sense?
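in other words, something like this on the host (a sketch; the interface names mvlan0/eth0 and the MAC are made up, and it needs root and a live network):

```shell
# create a macvlan sub-interface on top of the physical NIC,
# give it its own MAC, bring it up, and run a DHCP client on it
ip link add mvlan0 link eth0 type macvlan mode bridge
ip link set dev mvlan0 address 02:01:02:aa:bb:cc
ip link set mvlan0 up
udhcpc -i mvlan0        # or: dhclient mvlan0
```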

there have been a lot of similar questions for me, but the net is: you cannot CHANGE the docker-assigned IP address later.
the busybox doesn't TRY to change its address, it just gets the info.

we CAN specify the address and mac address on docker run now, with the docker networking support…

so, we create a macvlan, with the right address range and gateway

we use the network in the docker run… but docker will ALWAYS assign .2 to the 1st container (on every host that creates the network)
so now you have collisions…
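one partial workaround for the .2 collision (a sketch; the subnet/gateway values just mirror the placeholders in this thread, and every host would need its own disjoint --ip-range):

```shell
# each host creates the "same" macvlan network, but with a per-host
# --ip-range so Docker's IPAM hands out non-overlapping addresses
docker network create -d macvlan \
  --subnet=187.x.x.0/22 --gateway=187.x.x.1 \
  --ip-range=187.x.x.64/27 \
  -o parent=eth0 ournet
```

this avoids cross-host collisions inside Docker's own IPAM, but it still isn't coordinated with the real DHCP server, which is why the lease trick is needed.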

now we need a GOOD IP address and MAC at docker RUN time.

all the stuff you mentioned is what pipework did, which doesn't work now; they took away some support when they added docker networking…

ok, so I'll focus on the error during step 3

what is causing it? how could we test if this machine has access to the network?
is the DHCP discover reaching the outside?

or the problem is when it comes back?

things to try… start an ubuntu container, in -it mode, on the network created in step 1…
see if you can ping some other system on your network… this is Docker assigning the address…

(I think it will fail)…

generally this is a networking issue…

the host adapter has mac address xxx:1
the container has mac address xxx:2

when the dhcp request is sent, the interface only has a mac address, and the DHCP server sends the response back to that mac.

except, the host is listening for mac xxx:1, so it ignores the response to xxx:2 as it is not destined for the host adapter.

promiscuous mode says, let me listen for EVERYTHING (even stuff not really destined for me) and then decide …
this is the security hole… in promiscuous mode you could see ALL traffic… (so you COULD be a hacker)
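to check or toggle that on the host uplink (assuming eth0 is the uplink; needs root):

```shell
ip link set eth0 promisc on   # accept frames addressed to any MAC
ip link show eth0             # the flags should now include PROMISC
```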

yes, I understand promisc mode.

I tested it. The container received an IP inside the expected external range (187.x.x.2), but I could not ping anything external

I created another container on the same network; it also received an external IP (187.x.x.3). I could ping the first container from it, but could not ping anything external

what else is on your network in the 187.x.x.x range… try to ping one of those… like the gateway

I just did the same, and was not able to ping for 4 tries; now it works… same container, over 10 minutes

I have nothing else on this network; I tried to ping the gateway, but no luck

when you do ifconfig, is there an IP assigned to the dm-something interface? (the macvlan interface)

outside the container, no

inside the container, no… just the lo and eth0 interfaces
from this docker run command
docker run -it --net ournet --cap-add NET_ADMIN --mac-address 02:6d:9d:0b:c0:fd --ip ubuntu

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever

24: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:6d:9d:0b:c0:fd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet scope global eth0
       valid_lft forever preferred_lft forever

I ran it, but changed the IP to fit my range

docker run -it --net ournet --cap-add NET_ADMIN --mac-address 02:6d:9d:0b:c0:fd --ip 187.x.x.244 alpine:3.4 /bin/sh

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet scope host lo
   valid_lft forever preferred_lft forever
290: eth0@if287: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
link/ether 02:6d:9d:0b:c0:fd brd ff:ff:ff:ff:ff:ff
inet 187.x.x.244/22 scope global eth0
   valid_lft forever preferred_lft forever

same behavior - can't ping external hosts, or anything other than the other containers

and you can ping the gateway normally? And the GW is the one specified in the network create, right?

nooo… sadly I cannot ping the gateway

PING 187.x.x.1 (187.x.x.1): 56 data bytes

--- 187.x.x.1 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

these IPs, even though they look like the external ones, are just a placebo
I'm still locked into some kind of internal network, need to break free (bass riff plays)

yes, same gw as the one obtained with the ip_address_for_network script

oh, I wanted to answer one question… if busybox can DHCP, why can't a normal container…

well, busybox executes the DHCP command and protocol, BUT it cannot set its IP address either…
that function is not available… the address can only be set through the docker run or docker network connect commands (which override the default mechanism)… and docker network connect cannot set the MAC address…
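roughly, the asymmetry looks like this (names and addresses are examples; the point is that --mac-address exists only on docker run):

```shell
# at create time you can pin both the MAC and the IP:
docker run -d --net ournet --mac-address 02:6d:9d:0b:c0:fd --ip 187.x.x.244 myapp

# connecting an existing container can pin the IP, but not the MAC:
docker network connect --ip 187.x.x.245 ournet mycontainer
```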

so, let me see if I understand

  • if you run ‘docker run’ and tell the container to run udhcpc, for instance, that would not work because the container's IP would have already been set

  • that's why we do the busybox step before: it prepares the lease, so we know beforehand the correct IP that needs to be assigned during ‘docker run’ - since that's the only time we can assign it, because once the container is created it's not possible to change its IP - is that correct?

unfortunately, I cannot confirm whether that works or not, because the DHCP is not getting any answer
I'm starting to suspect it's got something to do with Kali Linux not being able to bridge properly
even without containers, if I try to create a bridge network, I cannot get IP working on it

correct on the 1st part…

I have the following

1 physical - works
1 vmware on physical (above) does not work

1 physical - windows
1 vmware ubuntu - works
1 vmware mac - does not work

on the systems that do NOT work, I can docker run -it --net ournet --mac-address mmmm --ip ppppp ubuntu
and I can ping the gateway and the nameserver (on the other side of my gateway), but cannot ping anything else on this local network

on the systems that work, I can ping anything without problem

yes, I'm creating a new VM with Debian on it
gonna test that
must be Kali's fault
hold on, and thank you

Is your actual goal here to simulate a network test environment, or just to run containers that can be seen from the network?

Reading through the thread it sounds a lot like you’re just trying to run containers and get access to them. The normal way to do this is to use docker run -p 7777:8888 to map port 7777 on the host to port 8888 in the container, and then have external processes connect to your host’s (or in your case your VM’s) port 7777 (the two ports can be and frequently are the same). Docker provides a NAT-based environment where this just works; containers don’t need to run DHCP clients or ever really worry about what their own IP addresses are. A container’s IP address will be on a private network, not the network the host is plugged into, and that’s okay.
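For concreteness, that standard setup looks like this (the image name, container name, and ports are examples):

```shell
# publish host port 7777 -> container port 8888 over Docker's NAT
docker run -d -p 7777:8888 --name web myimage
# external clients then talk to the *host's* address, not the container's:
curl http://<host-ip>:7777/
```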

Is there a reason this standard setup won’t work for you?

in my case, I have an application I want to run multiple instances of, and it doesn’t play nice with ports, doesn’t allow NAT, has a private protocol that embeds the source IP address in the DATA packet, and the receiver opens a NEW connection back to the source on a hard coded port…

what a pain in the butt…

but this app does api simulation/virtualization, and as part of a test run, I would spin up N of these to simulate the apis being used… and other teams could be doing the same… I don't want anyone to have to coordinate with each other on fixed target systems… (we have that mess now). I was able to build this on my local hardware & linux in a matter of a couple weeks (learn docker, learn the app command line, design the docker container, build startup scripts, add pipework, … add to jenkins test jobs, …)

I need a class A network, DHCP (with a short lease time), a farm of docker hosts and an army of docker containers (both elastic). spinning up a VM takes too long… and is very heavyweight for this 5-minute test cycle. this is not swarm, as each container, while running the same software, will be executing a different api simulation.

but when it came time to run this on work systems (all virtual) or AWS (moving to testing in the cloud)… it all fell apart…
SO close… we need a network driver that can listen for multiple MAC addresses, which are set dynamically.

another fun project to add to the list!..

no, I want this: a unique external IP on each container, just like the host has one, which can access and be accessed from the external network.

no port mapping, pls.

quick update: tried on a debian VM, same results, no luck.

gonna install ubuntu via dual boot, directly on the machine, to see what the outcome will be.