VM on Docker host unable to communicate with containers

I have a physical host that runs VMs and containers on the same hardware. The host is running a Slackware base (Unraid, if anyone is familiar with it).

I have found that when a VM on br2 tries to communicate with a bridged container on br0, whether both are in the same subnet/VLAN or each is in a separate VLAN, I get TCP resets.
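
In case anyone wants to see where the resets show up, something like this is how I'd trace which hop emits them (192.168.2.50 is a placeholder for the VM's actual IP):

# Watch for RST packets involving the VM on the container-facing bridge.
tcpdump -ni br0 'host 192.168.2.50 and (tcp[tcpflags] & tcp-rst != 0)'

# Same capture on the VM's bridge, to compare where the RST first appears.
tcpdump -ni br2 'host 192.168.2.50 and (tcp[tcpflags] & tcp-rst != 0)'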

I have already confirmed this is not a firewall issue. I know Docker has built-in restrictions that prevent containers from talking to the host directly; however, this VM (although running on the same host) has its own IP and, when placed in a different VLAN, its own L3 domain. So, in my mind, the connection should behave like any other client attempting to reach the container.
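
To be explicit about the firewall point, these are the kinds of checks I mean (chain names are the standard ones Docker creates on this version):

# Packet counters on Docker's forwarding chains; a climbing DROP counter
# here would implicate Docker's rules rather than my own firewall.
iptables -nvL DOCKER-USER
iptables -nvL FORWARD

# Bridged traffic is only pushed through iptables when br_netfilter is on
# (1 = the host filters frames crossing its Linux bridges).
sysctl net.bridge.bridge-nf-call-iptables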

The VM can also ping the host IP regardless of which VLAN it is in, so I know the general routing to the host is intact.
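
That said, a successful ping to the host IP only proves the path to the host itself; it does not exercise the forwarding path into the container bridge. A quick way to sanity-check that side (172.17.0.2 is a placeholder container IP on the default bridge):

# Confirm the host is willing to forward packets at all (1 = enabled).
sysctl net.ipv4.ip_forward

# Ask the host which interface it would use to reach the container.
ip route get 172.17.0.2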

Docker version 17.09.1-ce, build 19e2cf6

Output of docker network ls (bridge entry only):

ab41a36062a5        bridge              bridge              local

Output of docker network inspect bridge:

[
{
    "Name": "bridge",
    "Id": "ab41a36062a54c5af4f6a2e4f225b8e4e1bd47c27cc4469e6d3f90e2f040a629",
    "Created": "2018-10-08T22:31:13.686387237-04:00",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
        "Driver": "default",
        "Options": null,
        "Config": [
            {
                "Subnet": "172.17.0.0/16",
                "Gateway": "172.17.0.1"
            }
        ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
        "Network": ""
    },
    "Options": {
        "com.docker.network.bridge.default_bridge": "true",
        "com.docker.network.bridge.enable_icc": "true",
        "com.docker.network.bridge.enable_ip_masquerade": "true",
        "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
        "com.docker.network.bridge.name": "docker0",
        "com.docker.network.driver.mtu": "1500"
    },
    "Labels": {}
}
]
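
That inspect output is the default docker0 bridge, with inter-container communication and masquerading enabled. If the container I am testing is actually on this default bridge (as opposed to a custom network attached to br0), then 172.17.0.0/16 is NATed and not normally reachable from the VM's VLANs by container IP; the supported path would be a published port on the host IP. A minimal sanity check of that path, with placeholder names, port, and addresses:

# Publish a container port on the host, then test from the VM against the
# host IP rather than the container's 172.17.x.x address.
docker run -d --name web-test -p 8080:80 nginx

# From the VM (192.168.1.10 stands in for the host's IP):
curl -v http://192.168.1.10:8080/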

Thoughts?