Docker Community Forums

Reaching secondary IP via eth0 from within docker fails

After trying for days I hope someone can help me out over here. I would like to attach a container to the secondary IP on my virtual host, so that the applications in the container use eth0.

The good news is that a ping from outside the container via the secondary IP works ('ping -I 107.233.216.241 www.google.com').

I've set up a user-defined network (macvlan) as well, using:

docker network create -d macvlan --subnet=107.233.216.241/24 --gateway=107.233.216.254 my-macvlan-net

PS: I'm not completely sure about the gateway, but I used .254 because my main IP uses the same gateway; if I omit this parameter it defaults to .1, which doesn't work either.
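For comparison, macvlan networks are usually created with an explicit parent interface (`-o parent=...`) and with the subnet given as a network address rather than a host address. A hedged sketch of that form follows; the interface name `eth0` and the `/24` network boundaries are assumptions about this particular host, and the command needs a running Docker daemon:

```
# Sketch only: typical macvlan creation with an explicit parent interface.
# `-o parent=eth0` and the 107.233.216.0/24 network address are assumptions.
docker network create -d macvlan \
  --subnet=107.233.216.0/24 \
  --gateway=107.233.216.254 \
  -o parent=eth0 \
  my-macvlan-net
```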

Then I connected via:

docker run --rm -dit --network my-macvlan-net --name my-macvlan-alpine --ip 107.233.216.241 alpine:latest ash

Now a ping to Google from within the container yields:
ping: bad address 'www.google.com'

(Note that a ping to an existing IP simply returns no replies.)

An 'ip a' from within the container gives:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
117: eth0@if116: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:6d:ed:d8:f0 brd ff:ff:ff:ff:ff:ff
    inet 107.233.216.241/24 brd 107.233.216.255 scope global eth0
       valid_lft forever preferred_lft forever

And an 'ip route' from within the container gives:

default via 107.233.216.254 dev eth0
107.233.216.0/24 dev eth0 scope link src 107.233.216.241
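The route table says every address in 107.233.216.0/24 is reachable on-link, so the gateway must fall inside that /24. A quick string check (valid only for a /24 mask, which is what the route above uses) confirms that .254 does:

```shell
# The gateway must share the first three octets with the container address
# for it to be on-link under a /24 mask. Addresses taken from the output above.
addr=107.233.216.241
gw=107.233.216.254
if [ "${addr%.*}" = "${gw%.*}" ]; then
    echo "gateway ${gw} is on-link for ${addr}/24"
else
    echo "gateway ${gw} is NOT on-link for ${addr}/24"
fi
```

So the routing inside the container is at least self-consistent; the question is whether traffic actually leaves via the macvlan parent.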

Any tips to help me out?

In this example, you start two different alpine containers on the same Docker host and do some tests to understand how they communicate with each other. You need to have Docker installed and running.

Open a terminal window. List current networks before you do anything else. Here’s what you should see if you’ve never added a network or initialized a swarm on this Docker daemon. You may see different networks, but you should at least see these (the network IDs will be different):

$ docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
17e324f45964        bridge              bridge              local
6ed54d316334        host                host                local
7092879f2cc8        none                null                local

The default bridge network is listed, along with host and none. The latter two are not fully-fledged networks, but are used to start a container connected directly to the Docker daemon host’s networking stack, or to start a container with no network devices. This tutorial will connect two containers to the bridge network.

Start two alpine containers running ash, which is Alpine’s default shell rather than bash. The -dit flags mean to start the container detached (in the background), interactive (with the ability to type into it), and with a TTY (so you can see the input and output). Since you are starting it detached, you won’t be connected to the container right away. Instead, the container’s ID will be printed. Because you have not specified any --network flags, the containers connect to the default bridge network.

$ docker run -dit --name alpine1 alpine ash

$ docker run -dit --name alpine2 alpine ash
Check that both containers are actually started:

$ docker container ls

CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
602dbf1edc81        alpine              "ash"               4 seconds ago        Up 3 seconds                            alpine2
da33b7aa74b0        alpine              "ash"               17 seconds ago       Up 16 seconds                           alpine1

Inspect the bridge network to see what containers are connected to it.

$ docker network inspect bridge

[
    {
        "Name": "bridge",
        "Id": "17e324f459648a9baaea32b248d3884da102dde19396c25b30ec800068ce6b10",
        "Created": "2017-06-22T20:27:43.826654485Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "602dbf1edc81813304b6cf0a647e65333dc6fe6ee6ed572dc0f686a3307c6a2c": {
                "Name": "alpine2",
                "EndpointID": "03b6aafb7ca4d7e531e292901b43719c0e34cc7eef565b38a6bf84acf50f38cd",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "da33b7aa74b0bf3bda3ebd502d404320ca112a268aafe05b4851d1e3312ed168": {
                "Name": "alpine1",
                "EndpointID": "46c044a645d6afc42ddd7857d19e9dcfb89ad790afb5c239a35ac0af5e8a5bc5",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Near the top, information about the bridge network is listed, including the IP address of the gateway between the Docker host and the bridge network (172.17.0.1). Under the Containers key, each connected container is listed, along with information about its IP address (172.17.0.2 for alpine1 and 172.17.0.3 for alpine2).
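Pulling those addresses out of the inspect output can be scripted. The sketch below pastes a trimmed copy of the Containers section into a variable for illustration; on a live host you would pipe `docker network inspect bridge` into the same filter instead:

```shell
# Trimmed excerpt of the "Containers" section from the inspect output above.
containers='
  "Name": "alpine2",
  "IPv4Address": "172.17.0.3/16",
  "Name": "alpine1",
  "IPv4Address": "172.17.0.2/16",
'
# Print each container's IPv4 address, one per line.
ips=$(echo "$containers" | sed -n 's/.*"IPv4Address": "\([^"]*\)".*/\1/p')
echo "$ips"
```

A JSON-aware tool such as `jq` would be more robust for real use; the `sed` filter is just enough for this fixed format.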

The containers are running in the background. Use the docker attach command to connect to alpine1.

$ docker attach alpine1

/ #
The prompt changes to # to indicate that you are the root user within the container. Use the ip addr show command to show the network interfaces for alpine1 as they look from within the container:

ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever
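If you need just the container's eth0 address in a script, it can be extracted from that output. The sample below pastes the relevant lines into a variable for illustration; inside a container you would pipe `ip addr show eth0` instead:

```shell
# Sample eth0 lines captured from the transcript above.
sample='27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    inet 172.17.0.2/16 scope global eth0'
# Keep only the IPv4 address before the prefix length.
eth0_ip=$(echo "$sample" | sed -n 's|.*inet \([0-9.]*\)/.*|\1|p')
echo "$eth0_ip"
```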