Docker Community Forums

Share and learn in the Docker community.

Docker networking problem

Hi guys,
I need some help from you.
I have a containerized Java application that talks to an external service via a proprietary ASCII protocol over UDP datagrams.
I have a 4-node Docker Swarm, but this application is designed and configured to run on one specific node (as a plain container, not as a service).
The application needs to bind to a second NIC present on that Docker node, on one specific port (in this example 8001/UDP).

docker run -d \
           --name my-app-name \
           -p \
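For reference, a complete invocation of that shape might look like the following; the host IP, port mapping, and image name are placeholders I've made up (the original values were cut off). Publishing on a specific address is how a -p mapping is restricted to one NIC:

```shell
# Hypothetical example: publish UDP 8001 only on the second NIC's address.
# 192.0.2.10 and my-app-image are placeholders, not values from this thread.
docker run -d \
           --name my-app-name \
           -p 192.0.2.10:8001:8001/udp \
           my-app-image
```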

The application works well, and I can see that the container is attached to the default bridge network on the Docker host.

docker network inspect bridge

"Containers": {
    "42609ed7d2cbf070b559be15a24430b55e973bde2025b9d231526e618ef1db15": {
        "Name": "my-app-name",
        "EndpointID": "3d89031919ef4dcb2efc4a2d2136f5110dd165ba6459a8dac2ebb6ca6c0c35f8",
        "MacAddress": "02:42:ac:11:00:03",
        "IPv4Address": "",
        "IPv6Address": ""
    }
}

If I instead attach this container to an overlay network (in the swarm) for Traefik

docker run -d \
           --name my-app-name \
           --network=traefik-public \
           -p \
           --label="traefik.enable=true" \
           --label="`my-app-name.{ip:.*}`)" \
           --label="" \
           --label="" \
           --label="" \
           --label="" \

because I need to expose an endpoint and I like how Traefik works, my app is no longer able to talk to the external service.
Where is the problem?
I’m not that expert in Docker networking, so I’m not able to understand what’s going on.
Could someone help me?
Where can I look to solve this situation?

Hope someone can help me.

Mr. Andrea

Troubleshooting a User-Defined Network
On the host where the frontend container is running, start a netshoot container that reuses the network namespace of the affected container:

docker run -it --rm --network container:<container_name> nicolaka/netshoot
Steps 2 through 6 are executed inside this shell.

Look up all backend IP addresses by DNS name:

DNS names are created for containers and services and are scoped to each overlay network the container or service attaches to. Standalone containers use the container name as the hostname. Looking up the name of a service returns the IP of the service’s load-balancing VIP. To look up the IP of each task created by a service, use “tasks.<service_name>” as the domain name.

For example, to look up the IP addresses for the backend service, use:

nslookup tasks.backend_service_name


  • <backend_service_name> is replaced with the backend service name

Issue a netcat TCP test to each backend task IP address on the port where it should be listening:

nc -zvw2 $backend_ip <listening_port>


<listening_port> is replaced with the port the backend tasks are listening on.
To iterate over multiple task IPs for a service, you can use the following loop:

for backend_ip in $(nslookup tasks.backend_service_name 2>/dev/null \
    | awk '/answer:/ {inanswer=1} inanswer && $1=="Address:" {print $NF}'); do
    nc -zw2 $backend_ip <listening_port>
done
Note: Output is only expected for IPs where connections fail.
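To sanity-check the awk filter in the loop above without a live swarm, you can feed it a canned nslookup response (the sample output below is illustrative, not from the thread):

```shell
# Canned nslookup output; names and addresses are made-up examples.
sample='Server:  127.0.0.11
Address: 127.0.0.11#53

Non-authoritative answer:
Name: tasks.backend_service_name
Address: 10.0.1.5
Name: tasks.backend_service_name
Address: 10.0.1.6'

# Apply the same filter the loop uses; only the task IPs after the
# "answer:" marker should come out, not the DNS server address.
task_ips=$(printf '%s\n' "$sample" | awk '/answer:/ {inanswer=1} inanswer && $1=="Address:" {print $NF}')
printf '%s\n' "$task_ips"
```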

If no connections fail but requests submitted via the ingress network still have problems, move on to the next section on troubleshooting the ingress network. If no connections fail and the issue has only been seen container-to-container, check another set of services or hosts until one task fails to reach another.

For any backend IP addresses reported as failed, do a reverse name lookup of the affected IP to determine its service name and task id:

nslookup <IP_address>
Results are formatted as servicename.slot.taskid.networkname
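Those components can be pulled apart with cut for use in the docker service ps and docker inspect steps below; the lookup result here is a fabricated example:

```shell
# Fabricated reverse-lookup result in servicename.slot.taskid.networkname form.
result='backend_service_name.3.xd9fjq2abc.traefik-public'

service=$(echo "$result" | cut -d. -f1)   # service name
slot=$(echo "$result" | cut -d. -f2)      # task slot
task_id=$(echo "$result" | cut -d. -f3)   # task id, usable with docker inspect --type task
echo "service=$service slot=$slot task=$task_id"
```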

Exit the netshoot container and collect docker network inspect -v against the network between the two containers. Note the HostIP of tasks in the Services section that failed the netcat test.

On a manager, for the set of all failed service names and tasks, collect the following:

docker service ps <service_name>
docker inspect --type task <task_id>
For tasks that are still present (inspect --type task returns output), note their Created and Updated times. Collect Docker daemon logs covering this time frame from the netshoot host, all managers, and hosts of unresponsive tasks as identified by their HostIP.

If your Docker daemon logs to journald, for example:

journalctl -u docker --no-pager --since "YYYY-MM-DD 00:00" --until "YYYY-MM-DD 00:00"
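The Created and Updated values reported by docker inspect are RFC 3339 timestamps; assuming GNU date is available, they can be converted into the form journalctl expects (the timestamp below is illustrative):

```shell
# Illustrative Created timestamp as docker inspect would report it (RFC 3339).
created='2024-05-01T13:22:05Z'

# Convert to the "YYYY-MM-DD HH:MM" form used by journalctl --since/--until.
window_start=$(date -u -d "$created" '+%F %H:%M')
echo "$window_start"
```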
Collect the output from all hosts in the network path. This exposes the kernel network programming and allows it to be verified if needed.
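One thing worth checking for the scenario in the original question: --network=traefik-public replaces the default bridge attachment at run time, but a container can be connected to several networks at once. As a sketch (not confirmed as the fix here), the overlay network could be added alongside the existing attachment instead:

```shell
# Attach the already-running container to the overlay network as well,
# rather than replacing its original network in docker run.
docker network connect traefik-public my-app-name
```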

Great reply, it was very useful.
Using the nicolaka/netshoot image, I found that the network interface I need to bind to is eth2, not eth1 as I thought.
Thanks a lot.
Mr. Andrea