UDP inbound connection not being properly NATed

Hello everyone.

Is it possible for a UDP connection to get stuck in a state where it is not properly routed to a container, necessitating a server restart?

In detail: we containerized a legacy application that listens on a particular UDP port, which only one of our customers uses to send data. We set up a new server with this app inside Docker, using a docker-compose.yml for the port mapping, and then simply switched the IP addresses. From then on, however, only new connections to this port reached the corresponding internal port in the container; our customer's packets were not forwarded. With tcpdump I could see the packets reaching the host, but a tcpdump inside the container showed nothing.
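For context, the mapping is a plain compose UDP publish; a minimal sketch of the relevant part (the service name, image, and port 5514 are placeholders, not our real values):

    services:
      legacy-app:
        image: legacy-app:latest
        ports:
          - "5514:5514/udp"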

I then added two iptables rules to log inbound UDP packets on that port: one at the very beginning of the INPUT chain and one at the beginning of the FORWARD chain (roughly the commands sketched below). In syslog, my customer's traffic showed up as IN=eth0 OUT= , while new connections I made to that port appeared directly in the FORWARD log with IN=eth0 OUT=docker1. Likewise, in /proc/net/ip_conntrack I could see my customer's connection marked [UNREPLIED] and not being forwarded to the container, while my own connections showed up properly. I could also see that the customer's conntrack entry never expired: new packets kept arriving before the timeout and refreshed it. In the end, not even a service network restart did the trick; we had to restart the whole server, and from then on the packets began to arrive at the container.
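Roughly the logging rules I inserted (5514 again stands in for the real port):

    iptables -I INPUT 1 -p udp --dport 5514 -j LOG --log-prefix "UDP INPUT: "
    iptables -I FORWARD 1 -p udp --dport 5514 -j LOG --log-prefix "UDP FORWARD: "

and the stale entry was visible with:

    grep -w udp /proc/net/ip_conntrack | grep 5514

In hindsight I wonder whether deleting that entry with conntrack-tools, e.g. conntrack -D -p udp --dport 5514, would have spared us the server restart, but I have not verified this.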

Am I correct in thinking that, as soon as Docker creates its own iptables rules, newly arriving UDP packets obey those rules? Or does that hold only for connections established after that moment, with existing conntrack entries (like my customer's, which was constantly refreshed before it could time out) continuing to follow the old, pre-Docker path?
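(For reference, I assume the rule involved is the DNAT that Docker sets up in the DOCKER chain of the nat table, which should be visible with:

    iptables -t nat -L DOCKER -n

though I'm not certain that chain is where my customer's packets were supposed to be diverted.)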