When containerizing VoIP applications, you need to make a huge number of UDP ports, on the order of 10k, reachable from outside the container.
With iptables NAT it is a simple matter of creating one rule that redirects an entire range of ports to an IP. Docker, however, creates one rule for each port in the range, which makes it simply not viable here.
I have been trying a workaround of not exposing the ports through Docker and instead creating the appropriate iptables rules directly, but I wonder if there is a better approach that would survive server and container restarts.
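For reference, the manual workaround is only a couple of rules; this is a sketch, and the container IP and port range are placeholders you would adapt to your own setup (run as root):

```shell
# Assumed values -- replace with your container's IP and your RTP range.
CONTAINER_IP=172.18.0.7
PORT_RANGE=10000:20000

# One DNAT rule covers the whole UDP range; omitting the port in
# --to-destination preserves the original destination port.
iptables -t nat -A PREROUTING -p udp --dport "$PORT_RANGE" \
  -j DNAT --to-destination "$CONTAINER_IP"

# Let the forwarded traffic through the FORWARD chain.
iptables -A FORWARD -p udp -d "$CONTAINER_IP" --dport "$PORT_RANGE" -j ACCEPT
```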
If your application needs to listen on more than 10% of the possible port space, you should probably run it with --net host and skip most of the Docker networking setup. (I wouldn’t expect the built-in NAT system or any sort of port-by-port forwarder to work well here, Docker or otherwise, and it doesn’t seem like you could ever run multiple copies of the application on distinct remapped ports.)
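As a sketch of what that looks like (the image name is a placeholder):

```shell
# Host networking: the container shares the host's network stack, so Docker
# creates no per-port iptables rules and all UDP ports are directly reachable.
# "my-voip-image" is a placeholder -- substitute your actual image.
docker run -d --name voip --net host my-voip-image
```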
I understand the use of --net=host in certain scenarios, but then I would lose a lot of automation. I don’t want to throw the baby out with the bathwater… yet.
Why wouldn’t the NAT system work with a large range of forwarded ports? The application doesn’t listen on all of these ports all the time; it simply needs a certain block of ports available for use.
It’s exactly my problem right now. Did you solve it?
I am trying to use FreePBX with a bridge network configuration (not host).
My experience is that anything over 1000 ports will cause errors like the following:
iptables failed: iptables --wait -t nat -A DOCKER -p udp -d 0/0 --dport 50387 -j DNAT --to-destination 172.18.0.7:50387 ! -i docker_gwbridge: (fork/exec /sbin/iptables: resource temporarily unavailable)
Host networking seems like a common way to get around this, but it’s often not an option (swarm mode, etc.).
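To address the restart concern with the manual-iptables route, one untested sketch is to pin the container to a fixed IP (so the DNAT rules stay valid across container restarts) and persist the rules across host reboots; the network name, subnet, IP, and image below are all examples, and iptables-persistent is the Debian/Ubuntu mechanism:

```shell
# Pin the container to a fixed IP on a user-defined bridge so manually
# added DNAT rules keep pointing at the right address after restarts.
docker network create --subnet 172.30.0.0/16 voipnet
docker run -d --name pbx --net voipnet --ip 172.30.0.10 my-voip-image

# Persist the manually added rules across host reboots (Debian/Ubuntu):
apt-get install -y iptables-persistent
netfilter-persistent save   # writes /etc/iptables/rules.v4 and rules.v6
```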