What I want:
Write data, from inside a container, to a UDP port and be able to run host-side program to read data off this port for onward host-side processing.
What I see:
After running the container, when I execute the host-side program, I am unable to bind to the UDP port because the docker-proxy program has already bound it: socket.error: [Errno 98] Address already in use
I have confirmed this by switching the execution order of the container and the host-side program: the host-side program then binds to the port just fine, and the ‘docker run’ command fails because it now cannot bind to the UDP port.
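The double bind can be reproduced without Docker at all. A minimal Python sketch (the loopback address and OS-chosen port are arbitrary) shows the same EADDRINUSE failure that docker-proxy causes here:

```python
import errno
import socket

a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))           # let the OS pick any free UDP port
port = a.getsockname()[1]

b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
err = None
try:
    b.bind(("127.0.0.1", port))    # second bind on the same addr:port
except OSError as e:
    err = e.errno                  # errno 98 (EADDRINUSE) on Linux

print(err == errno.EADDRINUSE)     # True
```

Whichever process binds first wins, which is exactly why the outcome flips when the start order of the container and the host program is swapped.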
Dockerfile line (relevant): EXPOSE 8888/udp
Docker run (relevant): docker run -p 8888:8888/udp
So the question is: how do I create / run a docker container such that I am transmitting data, from inside the container, across a UDP port, for reading by a program host-side?
Hi, thanks for the response, but I don’t understand this, since:
In this scenario, the client is inside the container and the server is running host-side. In other words, datagrams are going from inside the container to the host.
But it’s the server that must bind to the port to receive data from the client.
So how can the bind take place from inside the container? UDP clients do not bind to a port.
The problem, as I see it, is that docker run also binds the same port that is being forwarded through Docker. Since Docker has already done this bind, the host-side server program fails, because a port can be bound only once.
I think the question is: how do you expose a port for UDP purposes when the datagrams flow from inside the container to the outside?
Or am I thinking about this completely backwards or wrong? If so, can you be more explicit in how this should be set up?
So I tried to switch the concept around, as you suggest, and do the bind inside the container, effectively swapping the roles of server and client. All goes well inside the container.
But then, when I try to write a host-side program, effectively acting as a client, this doesn’t work: with nothing to bind to, there is no host:port combo for me to read data from.
The whole point of my design is so that my program, running inside the container, can write data to a port that an end-user can connect to to read the data from.
When I do this directly on a machine, it works. When I dockerize the client side, it no longer works: I can’t bind to the port host-side, because the docker-proxy has already done the bind itself to expose the port from inside the container to the host.
I am coming to the conclusion that this is an unintended consequence of port forwarding in Docker and that this is not possible.
Opening ports is only needed when you want to listen for requests, not when sending them. By default, Docker provides the necessary network namespace for your container to communicate with the host or the outside world.
So you could do it in one of two ways:
1] Use --net=host in your docker run command and send requests to the localhost port. In this case your containerized app effectively shares the host’s network stack.
2] Talk to the container network’s gateway (usually 172.17.0.1) or your host’s hostname from inside your container. Then you can send the datagrams to your server program running on the host.
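Option 2 can be sketched in Python as follows (send_datagram is a hypothetical helper name; the demo targets loopback so it runs anywhere, but inside a container you would target the bridge gateway, e.g. 172.17.0.1):

```python
import socket

def send_datagram(payload, host, port):
    """Fire one UDP datagram at host:port -- no connection, no bind needed."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(payload, (host, port))
    s.close()

# Inside the container, host would be the bridge gateway, e.g. "172.17.0.1".
send_datagram(b"hello from the container", "127.0.0.1", 8888)
```

Note that the sender never binds: it only needs a reachable destination address, which is why no EXPOSE or -p mapping is involved.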
In my case, I went for option 2 where:
My containerized program acts as a UDP data sender, writing its data to 172.17.0.1:8888 (written in C).
Back on the host, I have a simple Python program binding to 172.17.0.1:8888, acting as the data receiver:
import socket

host = "172.17.0.1"
port = 8888

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# SO_REUSEADDR is a socket option, not a socket type flag:
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port))
print("waiting on port:", port)
while True:
    data, addr = s.recvfrom(1024)
    print(data)
Importantly, neither the Dockerfile (via the EXPOSE instruction) nor the docker run command (via the -p option) refers to this UDP port at all.
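The sender/receiver pairing above can be sanity-checked on a single machine with no Docker involved. This sketch substitutes loopback and an OS-chosen port for 172.17.0.1:8888:

```python
import socket
import threading

received = []

def receiver(sock):
    data, addr = sock.recvfrom(1024)
    received.append(data)

# Receiver: bind first, as the host-side Python program does.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))           # stand-in for 172.17.0.1:8888
port = recv_sock.getsockname()[1]
t = threading.Thread(target=receiver, args=(recv_sock,))
t.start()

# Sender: what the containerized C program does, minus the container.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", port))
t.join(timeout=2)
print(received)  # [b'hello']
```

Because the receiver socket is bound before the send, the datagram is queued in its buffer even if recvfrom has not been called yet.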
and you COULD have added the EXPOSE to the container’s Dockerfile,
and THEN used docker inspect running_container_id on the host to determine the port and IP address of the container,
and passed those to your Python script instead of hard-coding them.
Not really, since both the IPs and ports to send the UDP data to are configurable from the host by the end-user (who doesn’t have access to the inside of the container). That is: any running container will broadcast data to different IPs and/or ports. The idea is that the end-user can configure multiple IPs and ports to have the UDP data forwarded to.
It is not the case that the UDP destination is hard-coded inside the container, nor that it sends to only a single location. Since the user is in full control of how the inside of the container is configured, it is also up to them to configure their host-side programs to pick up the broadcast data.
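That user-configured fan-out could be sketched like this (DESTINATIONS and send_to_all are hypothetical names; the gateway address in the comment is only an example, and the demo targets loopback so it runs anywhere):

```python
import socket

# Hypothetical user-supplied configuration: any number of (host, port) pairs,
# e.g. [("172.17.0.1", 8888), ("192.168.1.50", 9999)] in a real deployment.
DESTINATIONS = [("127.0.0.1", 8888), ("127.0.0.1", 9999)]

def send_to_all(payload, destinations):
    """Send one copy of the datagram to every configured destination."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for host, port in destinations:
        s.sendto(payload, (host, port))
    s.close()

send_to_all(b"sensor reading", DESTINATIONS)
```

One unbound socket suffices for any number of destinations, so the container-side sender stays ignorant of which host-side receivers actually exist.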
I am facing a similar issue.
The information here will be helpful in fixing it. @ivor50: is the container running in a bridge network?