Unwanted, inconsistent masquerading of UDP traffic when starting a container

Hi,
I’ve been running some containers for a few weeks now, so this might well be a user problem…

One of my containers runs Logstash to receive NetFlow data (so it is UDP) and forwards it to another container on the same user-defined bridged Docker network.

When I start the Logstash container while the routers are not sending any flows, and the routers only start sending once the container is up and running, I see the “real” client IPs of the routers as the source hosts of the flow data.

However, when the routers keep sending flow data while the container is stopped, and the container is then started while those UDP datagrams are still arriving, all subsequent datagrams from those routers are masqueraded to the bridge device’s IP (e.g. 172.18.0.1). Even if those routers stop sending and later start again, all their input data is still masqueraded. Data sent from other devices is not masqueraded.

So in general Docker’s iptables rules look fine; nevertheless, the resulting behaviour is inconsistent. At least I can reproduce it, and it looks like a problem of statefulness with UDP connection tracking.
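If UDP connection tracking is the cause, the stale entries should be visible with conntrack-tools. A minimal sketch, assuming the NetFlow input listens on UDP port 2055 (a common NetFlow port — substitute your actual Logstash input port); it needs the `conntrack` package and root privileges:

```shell
#!/bin/sh
# List UDP conntrack entries for the (assumed) NetFlow port.
# Entries created before the container started will show the old
# NAT mapping that masquerades the routers' source IPs.
list_netflow_conntrack() {
    port="${1:-2055}"  # assumed NetFlow port; pass your own as argument
    conntrack -L -p udp --dport "$port"
}
```

Running `list_netflow_conntrack` while the problem is occurring should show entries whose reply side points at the bridge IP rather than the container.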

Is there a way to configure consistent behaviour?
Or does it require some workaround? (I’m still thinking about how to properly do s.th. like temporarily blocking the relevant traffic in the iptables INPUT chain.)

Btw, it’s running on Debian Buster (kept up to date from the stable repo; kernel 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4).

OK, as expected, it was an iptables problem.

When there is already a connection-tracking entry in the tables, the wrong rules are triggered.

After removing the entries with conntrack, everything is fine. I will have to automate this on container start…
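The cleanup step above could be automated with a small wrapper that flushes the stale entries before the container comes up. A sketch, again assuming UDP port 2055 for the NetFlow input (adjust to your Logstash port); it requires conntrack-tools and root:

```shell
#!/bin/sh
# Delete any existing UDP conntrack entries for the (assumed) NetFlow
# port, so datagrams arriving after the container starts create fresh
# entries and are NATed by the current Docker rules.
flush_netflow_conntrack() {
    port="${1:-2055}"  # assumed NetFlow port; pass your own as argument
    conntrack -D -p udp --dport "$port"
}
```

One way to use it would be to call `flush_netflow_conntrack` right after `docker start <container>` in whatever script or unit starts the container.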