Docker Community Forums

Share and learn in the Docker community.

Overlay network problem with openstack / vmware setup


(Hmaeck) #1

I’m having the following problem:

I have provisioned an ESX server with a Docker host (using Docker Machine). The ESX
Docker host is my UCP swarm controller. Then I wanted an additional node and
decided to add a VM hosted by devstack/OpenStack to my swarm cluster.

The IP address of the ESX host is 192.168.123.14.

The IP address of the devstack host is 10.11.12.5. I attached a floating IP address to
the devstack host (192.168.123.1). I made sure I could ping from the ESX Docker
host to the VM on devstack, and this works just fine. I think devstack provides a
router to make these floating IP addresses reachable.

Then I installed Docker on the vm and after that I created an overlay network in Docker.
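For reference, the overlay network was created roughly like this. This is only a sketch: the network name `my-overlay` and the subnet are placeholders, and it assumes a classic-swarm setup (as UCP used at the time) where the Docker client points at the swarm manager:

```shell
# Point the client at the swarm manager (classic swarm / UCP era;
# the port is illustrative).
export DOCKER_HOST=tcp://192.168.123.14:3376

# Create a multi-host overlay network; name and subnet are placeholders.
docker network create --driver overlay --subnet 10.0.9.0/24 my-overlay

# Verify the network is visible from both hosts.
docker network ls --filter driver=overlay
```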

I launched a container on the ESX host and attached it to the overlay network. Then I
launched a container on the devstack VM Docker host and attached it to the same
overlay network. However, when I run 'ping nameoftheothercontainer', it returns 'destination
host unreachable'. It should be noted that name resolution works, because
the container name is resolved to the correct IP address.
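The two containers were started roughly like this (a sketch; the image, the `sleep` command, and the network name `my-overlay` are placeholders, while `orca` and `duck` are the actual container names used below):

```shell
# On the ESX host: start a container attached to the overlay network.
docker run -d --name orca --net my-overlay alpine sleep 1d

# On the devstack VM: start a second container on the same network.
docker run -d --name duck --net my-overlay alpine sleep 1d

# From inside one container, ping the other by name.
docker exec duck ping -c 3 orca
```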

Can somebody explain to me how I can get this working and what the problem might be?

P.S. The overlay network itself works, because I have multiple ESX hosts connected to the switch
and I'm able to ping other containers residing on a different host (via the
overlay network).

Here you can see a quick drawing I've made of the situation: (drawing not included)

This is the output of the ifconfig command on the devstack/OpenStack host:
ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:f6:01:6e:c2
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:f6ff:fe01:6ec2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9370 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8877 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8006911 (7.6 MiB)  TX bytes:1183859 (1.1 MiB)

docker_gwbridge Link encap:Ethernet  HWaddr 02:42:f3:da:0b:14
          inet addr:172.18.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:f3ff:feda:b14/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:55 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3696 (3.6 KiB)  TX bytes:648 (648.0 B)

eth0      Link encap:Ethernet  HWaddr fa:16:3e:47:cb:dc
          inet addr:10.11.12.5  Bcast:10.11.12.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe47:cbdc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:91488 errors:0 dropped:7 overruns:0 frame:0
          TX packets:70318 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:128496234 (122.5 MiB)  TX bytes:22719257 (21.6 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vethd8084c5 Link encap:Ethernet  HWaddr 22:1d:c9:1c:96:8d
          inet6 addr: fe80::201d:c9ff:fe1c:968d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3124 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2847 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:274938 (268.4 KiB)  TX bytes:359803 (351.3 KiB)

vethe59a1d0 Link encap:Ethernet  HWaddr 3e:85:c7:1e:2f:fe
          inet6 addr: fe80::3c85:c7ff:fe1e:2ffe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6246 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6055 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:7863153 (7.4 MiB)  TX bytes:825966 (806.6 KiB)

Edit:
I also checked the following:
From the ESX container, I can ping 192.168.123.14 and 192.168.123.1.
From the devstack container, I can ping 192.168.123.14 and 192.168.123.1.

The Docker container on devstack got assigned the following interface:

veth1f7e958 Link encap:Ethernet  HWaddr b2:1d:46:ba:f3:1d
          inet6 addr: fe80::b01d:46ff:feba:f31d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1663 (1.6 KiB)  TX bytes:2030 (1.9 KiB)

(Hmaeck) #3

When I run tcpdump inside the Docker container on the devstack host and try to ping the container on the ESX host, I get:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
    04:28:40.538934 ARP, Request who-has orca.UnderTheSea tell duck.local, length 28
    04:28:41.537855 ARP, Request who-has orca.UnderTheSea tell duck.local, length 28
    04:28:42.537868 ARP, Request who-has orca.UnderTheSea tell duck.local, length 28
    04:28:43.558352 ARP, Request who-has orca.UnderTheSea tell duck.local, length 28

So obviously, ARP is not reaching the other host. But what's the point of the overlay network if it doesn't work across routers? Isn't it possible to create an overlay network spanning, e.g., an Amazon EC2 instance and an Azure VM? Because that's what I would like to do in the end.

NOTE: duck is the name of the container on the devstack vm
orca is the name of the container on the esx host
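One thing worth checking: classic-swarm overlay networks carry their data plane as VXLAN over UDP port 4789 and use TCP/UDP 7946 for network control traffic, so those ports must be open between the hosts; on OpenStack that typically also means security-group rules. A quick way to see whether VXLAN packets are arriving at all is to capture on the host's physical interface (a sketch; `eth0` is taken from the ifconfig output above, and the security-group name `default` is a placeholder):

```shell
# On the devstack VM, watch for VXLAN (overlay data-plane) traffic
# while repeating the container-to-container ping. If nothing shows
# up here, the VXLAN packets are being dropped between the hosts.
sudo tcpdump -ni eth0 udp port 4789

# Illustrative OpenStack security-group rules to allow overlay traffic:
openstack security group rule create --protocol udp --dst-port 4789 default
openstack security group rule create --protocol tcp --dst-port 7946 default
openstack security group rule create --protocol udp --dst-port 7946 default
```

Note that if the floating IP is implemented with NAT, the hosts may advertise their private addresses to the overlay, which would also prevent the VXLAN tunnels from forming.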