Best method to connect containers to non-Docker VM. (OpenVSwitch?)

I’m having an issue with connecting multiple virtual machines to one virtual network, and am hoping to get some guidance. Here’s our setup:

[Network diagram (yEd screenshot) omitted]

We have two virtual machines.
Both virtual machines run on the same VM host, which is running OpenVSwitch.

“Virtual Machine Docker Host” (VM-DH) is a Docker host, also running OpenVSwitch.
This virtual machine has two containers running:
Docker Container #1 (DC1) (IP 192.168.0.10)
Docker Container #2 (DC2) (IP 192.168.0.11)

“Virtual Machine 3” (VM3) is a (non-Docker) Debian virtual machine.
The Problem:
We can get DC1 to ping DC2, and DC2 to ping DC1.
How can we get DC1 and DC2 to ping VM3, or VM3 to ping DC1 or DC2?

We would ALSO like DC1, DC2, and VM3 to all be able to access the internet, but NOT the other machines on the “management network” of 10.64.0.0/22

We can have ANY IPs and network scope assigned to ANY of the containers and virtual machines…
… as long as the VMs and DCs can communicate, and
… access the internet,
… but NOT access the physical Virtual Machine Host or Docker Host network IPs.

We started out trying with OpenVSwitch, but we’re willing to use any solution that accomplishes the result.
Thoughts?

I struggled with the same thing, and there is a frightening lack of information on the Docker + OpenVSwitch topic.

I ended up getting it to work by using the same OVS bridge with VLANs. One difference from your setup: I deployed a pfSense firewall VM (which could take the place of Virtual Machine #3 in your diagram) to handle routing for me. KVM has hooks for and supports OpenVSwitch. Here is an outline of the basics:

openvswitch 2.12.0
Host box is CentOS 7, using KVM as the hypervisor.
ovs1 is my overall OVS parent bridge.
eth1 on my host box gets attached to ovs1.
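
For reference (this is not taken from my ifcfg files, just the equivalent manual commands), getting eth1 attached to the parent bridge with ovs-vsctl looks roughly like this:

ovs-vsctl add-br ovs1
ovs-vsctl add-port ovs1 eth1
ovs-vsctl show          # confirm eth1 shows up as a port on ovs1

The ifcfg files below accomplish the same thing persistently through the CentOS network scripts.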
Interface configurations:
/etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=ovs1
ONBOOT=yes
NAME=eth1
DEVICE=eth1
BOOTPROTO=none
DEFROUTE=no

/etc/sysconfig/network-scripts/ifcfg-ovs1 (parent openvswitch)
DEVICE=ovs1
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=10.1.4.20
PREFIX=24
DEFROUTE=yes
GATEWAY=10.1.4.1
ONBOOT=yes
DNS1=10.1.3.11
DOMAIN=testdomain.com
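
Once those files are in place, something along these lines should bring the bridge up and let you check it (just a sanity check I'd suggest, not a required step):

systemctl restart network
ovs-vsctl show
ip addr show ovs1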


Take note that VLAN4 in the below .xml configuration does NOT have a VLAN tag attached. This will be the native VLAN on your switch for your host uplink (i.e. all untagged frames from OVS will ride on VLAN4).
/etc/sysconfig/network-scripts/ovs1.xml
<network>
  <name>ovs1</name>
  <forward mode='bridge'/>
  <bridge name='ovs1'/>
  <virtualport type='openvswitch'/>
  <portgroup name='VLAN3'>
    <vlan>
      <tag id='3'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN4'>
  </portgroup>
  <portgroup name='VLAN5'>
    <vlan>
      <tag id='5'/>
    </vlan>
  </portgroup>
  <portgroup name='VLAN25'>
    <vlan>
      <tag id='25'/>
    </vlan>
  </portgroup>
  <portgroup name='TRUNK'>
    <vlan trunk='yes'>
      <tag id='3'/>
      <tag id='4'/>
      <tag id='5'/>
      <tag id='25'/>
    </vlan>
  </portgroup>
</network>

Import the .xml network configuration into KVM:

virsh net-define /etc/sysconfig/network-scripts/ovs1.xml

Then restart your KVM network (virsh net-destroy ovs1 followed by virsh net-start ovs1) and shut down the VMs. When you edit the network configuration for a VM, select ovs1 as the network; a drop-down for portgroup will appear, and you can select the VLAN you want the VM to go on.
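
If you'd rather do this with virsh edit than the GUI, the interface section of the VM's domain XML ends up looking roughly like this (VLAN5 here is just one of the portgroups defined above; pick whichever one the VM should sit on):

<interface type='network'>
  <source network='ovs1' portgroup='VLAN5'/>
  <model type='virtio'/>
</interface>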

For the docker containers:

Make a “fake” OVS bridge (ovs25):
ovs-vsctl add-br ovs25 ovs1 25 (25 being the VLAN that the fake bridge will tag all frames with)
Use docker run --net=none to create your docker containers.
For each docker container, use the ovs-docker command to attach a new interface to the fake bridge above AFTER the container is started and running. You need to do this every time you start the container.

ovs-docker add-port ovs25 eth0 container --ipaddress=10.25.0.10/24 --gateway=10.25.0.1
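
Putting the container steps together end to end, the sequence looks something like this (the container name dc1 and the debian image are just for illustration; ovs-docker also has a del-port command that's handy for cleanup when the container goes away):

ovs-vsctl add-br ovs25 ovs1 25                 # fake bridge that tags on VLAN 25
docker run -d --name dc1 --net=none debian sleep infinity
ovs-docker add-port ovs25 eth0 dc1 --ipaddress=10.25.0.10/24 --gateway=10.25.0.1
ovs-docker del-port ovs25 eth0 dc1             # cleanup when the container is stopped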


I then used the “TRUNK” portgroup created in OVS above to create pfSense firewall subinterfaces that served as the gateways for each OVS VLAN.
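
For completeness (this part is my reconstruction rather than something spelled out above), the pfSense VM just gets its NIC attached to the TRUNK portgroup in the same way as the other VMs:

<interface type='network'>
  <source network='ovs1' portgroup='TRUNK'/>
  <model type='virtio'/>
</interface>

Inside pfSense you then add a VLAN interface per tag (3, 4, 5, 25) on that NIC and give each one the gateway address for its subnet (e.g. 10.25.0.1 for the VLAN 25 / docker subnet).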