Published ports on tun2socks container are not accessible via host IP address

Hi,
I ran into a strange issue when publishing ports on a container that is used in container network mode.
I am able to access the service via IP:PORT if container network mode is not used.
If I use container network mode, the ports are accessible via 127.0.0.1 and 0.0.0.0 but not via IP:PORT.

I have given more details on how to replicate the issue here.
Initially I thought it was related to this specific container, but it also happens with other containers when I use container network mode.

Could you please let me know why this happens and how Docker publishes ports when container network mode is used?

Thank you

In order to understand your situation, we need some context information.

Please share the output of the following commands:

docker version
docker info

The exact docker run command, or, if docker compose or docker stack deploy was used, the content of the compose file. If a Dockerfile is used, please share its content as well.

When you share the outputs, always format your posts according to the following guide: How to format your forum posts

Thank you for the response.
In the previous comment I already mentioned the links to the scripts used.
The code is open source, and the steps to replicate the issue are described in the discussion. Please find the links below.

  1. Discussion Link with steps
  2. Internet Income Script mentioned in discussion link

Please find below the docker details.

  1. docker version output below.
Client: Docker Engine - Community
 Version:           26.1.4
 API version:       1.45
 Go version:        go1.21.11
 Git commit:        5650f9b
 Built:             Wed Jun  5 11:28:57 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          26.1.4
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.11
  Git commit:       de5c9cf
  Built:            Wed Jun  5 11:28:57 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.33
  GitCommit:        d2d58213f83a351ca8f528a95fbd145f5654e957
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
  2. docker info output below.
Client: Docker Engine - Community
 Version:    26.1.4
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.14.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.27.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 184
  Running: 184
  Paused: 0
  Stopped: 0
 Images: 15
 Server Version: 26.1.4
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d2d58213f83a351ca8f528a95fbd145f5654e957
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.15.0-112-generic
 Operating System: Ubuntu 22.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 5.783GiB
 Name:<Hiding this for Privacy>
 ID: 5d2307b3-4853-4d7e-98dc-7012d522ddd9
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

To summarize, we are running the Chrome browser container directly and also through the tun2socks container via container network mode. The former works with IP:PORT while the latter does not.
Let me know if you need more info.

Thank you

Please forgive me, but I usually don't click links to follow up on information, as I don't want to spend time checking whether I can trust a target link or not.

Furthermore, target links can break at any time, which would make it impossible for someone with the same problem to gather all the details and make sense of the topic.

Please share the content of the compose file, where the browser process is running (remote, on the host, inside a container?), and which URL is used in the case that does not work.

Hi,
Thank you for the response.
The target links point to GitHub, a trusted website. If you hover the mouse over a link, the URL is visible at the bottom of the browser. Please keep this in mind if you wish to click any links in the forum.
As it takes time for you to look into the open source code, I am giving you the docker run commands below.
I thought you already knew about this.

  1. Docker run command with a direct connection for the Chrome browser. The port is published on the chrome container itself.
    The browser can be accessed via IP:7000, as the published port is 7000.
    It can also be accessed via 127.0.0.1:7000 and 0.0.0.0:7000.
sudo docker run -d --name chrome --security-opt seccomp=unconfined -e TZ=Etc/UTC   -e CUSTOM_HTTPS_PORT=3201 -e CUSTOM_PORT=3200 --shm-size="1gb" -p 7000:3200 lscr.io/linuxserver/chromium:latest
  2. Docker run commands via the tun2socks container.
    The port is published on tun2socks instead of the chrome container, because in container network mode ports must be published on the container that owns the network namespace.
    A random proxy is given below; it does not matter until you actually browse through it.
    The problem is accessing the browser via IP:PORT when container network mode is used via tun2socks.
    It is accessible via 127.0.0.1:7000 and 0.0.0.0:7000 but not via IP:PORT.
sudo docker run -d --name tun2socks -e PROXY=http://2.3.4.5:8000 -v '/dev/net/tun:/dev/net/tun' --cap-add=NET_ADMIN -p 7000:3200 xjasonlyu/tun2socks:v2.5.2
sudo docker run -d --name chrome --network=container:tun2socks --security-opt seccomp=unconfined -e TZ=Etc/UTC -e CUSTOM_HTTPS_PORT=3201 -e CUSTOM_PORT=3200 --shm-size="1gb" lscr.io/linuxserver/chromium:latest

You may access the URL directly in a browser using IP:PORT, or use a curl command to view the output of http://IP:PORT.
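For reference, a quick check from the docker host could look like the following (203.0.113.10 is just a placeholder for the host's own IP address; the container names are the ones from the commands above):

# works in both setups
curl -I http://127.0.0.1:7000
# works in the direct setup, but not when the port is published on tun2socks
curl -I http://203.0.113.10:7000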

Apparently I failed to make myself clear that I don't want to. The information you share in your posts is the context I work with.

Luckily you shared all relevant information.

This looks good to me:

  1. You published the port on the tun2socks container
  2. You started the chrome container and hooked it into the network namespace of tun2socks.

I have no idea how tun2socks works. If we remove whatever it does from the equation (e.g. by using an image like alpine), the behavior should be exactly what you expect: you should be able to reach chrome on docker host-ip:7000, the same way as in your first example.

The way tun2socks works must be the reason why the observed behavior deviates from the expected default behavior. To me, it appears to be a tun2socks peculiarity rather than a Docker problem.

I am afraid that you will have to wait for someone who actually uses tun2socks.
In the meantime I would suggest raising an issue in the tun2socks GitHub repository.

I hope you find a solution!

Hi,
Thank you for the response. I tried using tun2proxy (which is similar to tun2socks but is based on Ubuntu) and it had the same problem.

So some network configuration in tun2socks and tun2proxy is causing this issue?
Usually, if 0.0.0.0:PORT is accessible, then IP:PORT is accessible as well, but that is not the case here.
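One way to confirm that the host side still has a listener for the published port in both cases (a rough check, assuming Docker's default userland proxy is enabled) would be:

# on the docker host: docker-proxy should be listening on the published port either way
sudo ss -tlnp | grep :7000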

However, the same setup with a plain alpine container works fine with IP:PORT. Please find the commands used below.

docker run -d --name alpi -p 7000:3200 alpine sh -c "while true; do sleep 3600; done"
sudo docker run -d --name chrome --network=container:alpi --security-opt seccomp=unconfined -e TZ=Etc/UTC -e CUSTOM_HTTPS_PORT=3201 -e CUSTOM_PORT=3200 --shm-size="1gb" lscr.io/linuxserver/chromium:latest

Please find below the Dockerfile for tun2socks. Let me know if you can identify some configuration that is causing this.

FROM golang:alpine AS builder

WORKDIR /src
COPY . /src

RUN apk add --update --no-cache make git \
    && make tun2socks

FROM alpine:latest
LABEL org.opencontainers.image.source="https://github.com/xjasonlyu/tun2socks"

COPY docker/entrypoint.sh /entrypoint.sh
COPY --from=builder /src/build/tun2socks /usr/bin/tun2socks

RUN apk add --update --no-cache iptables iproute2 tzdata \
    && chmod +x /entrypoint.sh

ENV TUN=tun0
ENV ADDR=198.18.0.1/15
ENV LOGLEVEL=info
ENV PROXY=direct://
ENV MTU=9000
ENV RESTAPI=
ENV UDP_TIMEOUT=
ENV TCP_SNDBUF=
ENV TCP_RCVBUF=
ENV TCP_AUTO_TUNING=
ENV MULTICAST_GROUPS=
ENV EXTRA_COMMANDS=
ENV TUN_INCLUDED_ROUTES=
ENV TUN_EXCLUDED_ROUTES=

ENTRYPOINT ["/entrypoint.sh"]

An issue has already been raised with tun2socks, as mentioned earlier, and I am awaiting a response.

Thank you

You tested it with alpine and it worked, so it's safe to assume the changed behavior is caused by something tun2socks does.

The Dockerfile shows that iptables and iproute2 are installed, so the container could possibly interfere with the rules Docker applies to iptables. Maybe the file entrypoint.sh provides some insights, but it may very well be that the application itself triggers whatever causes this behavior.
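A quick way to peek at what the running container has actually applied (assuming the container name tun2socks from the earlier commands; the image ships iptables and iproute2 according to its Dockerfile) could be:

# show the entrypoint script that sets up the network
docker exec tun2socks cat /entrypoint.sh
# list the firewall rules currently active inside the shared network namespace
docker exec tun2socks iptables -S
docker exec tun2socks iptables -t nat -S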

I can’t help with any of this. I will leave this topic for someone who actually has experience with how tun2socks or tun2proxy work.

This might solve your problem

When Docker containers are run in the default (bridge) network mode, Docker handles port mapping between the container and the host, allowing you to access the container’s exposed ports using the host’s IP address and the specified port. This is why you can access your container using IP:PORT.

However, when you use the container’s network mode (--network="container:<name|id>"), the container shares its network namespace with another container you specify. This means it does not get its own IP address; instead, it uses the network stack of the target container. Because of this, Docker does not perform port mapping for containers in this mode, as it expects you to manage network access through the primary container whose network it shares.
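To illustrate (a small sketch using the container names from the earlier posts), the published port belongs to the container that owns the network namespace, which docker port can confirm:

# the mapping lives on the namespace owner, printing something like 3200/tcp -> 0.0.0.0:7000
docker port tun2socks
# the attached container has no mappings of its own, so this prints nothing
docker port chrome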

Here's why you're seeing the behavior described:

A. Access via 127.0.0.1 and 0.0.0.0: When you use container network mode, the container shares the network with another container, so it is as if you were accessing the services of the primary container itself. Accessing via localhost (127.0.0.1) or any interface (0.0.0.0) works because you are essentially reaching the primary container's network interface, which is bound to these addresses.

B. No access via IP:PORT: Since Docker does not perform port mapping for a container running in container network mode, trying to access the service using the host's IP address and the port will not work. Any port mappings you define for the container in container network mode are ignored, because Docker expects you to manage network access through the primary container.

To resolve this issue, you have a few options:

1. Use host network mode for containers that need to be accessed via the host's IP address. This makes the container use the host's network stack, and the container's services will be accessible via the host's IP address and port. However, be cautious, as this mode gives the container full access to the host's network interfaces.
2. Use custom bridge networks for better isolation and control. You can create a custom bridge network and connect multiple containers to it. This allows containers to communicate with each other using the internal IP addresses Docker assigns within this network.
For external access, you can map ports from one of the containers to the host.
3. Access services through the primary container. If you're using container network mode to share network namespaces, ensure that any ports you wish to access externally are mapped through the primary container you're sharing the network with.

*Below are some commands for creating a custom bridge network and running containers within it:*

# Create a custom bridge network
docker network create my-custom-network

# Run containers within the custom network (-d runs them in the background)
docker run --network=my-custom-network --name my-containera -d my-image
docker run --network=my-custom-network --name my-containerb -d my-image

# Map ports for external access on one of the containers
docker run --network=my-custom-network -p 8080:80 --name my-container3 -d my-image

# This setup allows containers to communicate internally and also enables external access to specified services via port mapping.

*Thanks for spending your time reading this heheh*

Thank you for the response. Custom networks have already been tried.
If you look at the recent comments, the problem is with the tun2socks container. Using the alpine image works fine.

The following commands use only the alpine image and work with IP:PORT.

docker run -d --name alpi -p 7000:3200 alpine sh -c "while true; do sleep 3600; done"
sudo docker run -d --name chrome --network=container:alpi --security-opt seccomp=unconfined -e TZ=Etc/UTC -e CUSTOM_HTTPS_PORT=3201 -e CUSTOM_PORT=3200 --shm-size="1gb" lscr.io/linuxserver/chromium:latest

The following commands use the tun2socks container and do not work with IP:PORT, but the service is accessible via 127.0.0.1:PORT and 0.0.0.0:PORT. So something related to the network configuration or the network tools installed inside the tun2socks container is responsible.

sudo docker run -d --name tun2socks -e PROXY=http://2.3.4.5:8000 -v '/dev/net/tun:/dev/net/tun' --cap-add=NET_ADMIN -p 7000:3200 xjasonlyu/tun2socks:v2.5.2
sudo docker run -d --name chrome --network=container:tun2socks --security-opt seccomp=unconfined -e TZ=Etc/UTC -e CUSTOM_HTTPS_PORT=3201 -e CUSTOM_PORT=3200 --shm-size="1gb" lscr.io/linuxserver/chromium:latest

An issue has already been posted to tun2socks and I am awaiting a response from the author.

Thank you

1. Alpine container behavior: The alpine container (alpi) works as expected because it's a straightforward Linux environment without specialized network handling. When you map ports using -p 7000:3200 and run another container (chrome) with --network=container:alpi, the chrome container shares the network stack of the alpi container. This setup works fine because there's no complex network manipulation happening within the alpi container.

2. tun2socks container behavior: tun2socks is designed to intercept TCP connections and redirect them through a SOCKS proxy. It involves manipulating the network stack and potentially setting up custom routing rules within the container. When you run the tun2socks container with -p 7000:3200 and then run the chrome container with --network=container:tun2socks, the chrome container inherits these complex network configurations. The key difference here is how tun2socks manipulates the network. It's likely that tun2socks is configured in a way that interferes with or overrides the Docker-managed port mappings, or it sets up the network in such a way that external access to mapped ports doesn't work as expected. This could be due to custom routing rules, iptables rules, or other network configurations applied by tun2socks.

I know how frustrating waiting can be. In the meantime, you can follow the steps below to debug the container network or adjust its configuration together with chrome.

Investigate the tun2socks network configuration: Look into the tun2socks container's network configuration, focusing on any custom routing, firewall rules (iptables), or network namespace configurations it might be applying. Adjusting these settings or understanding their implications might help resolve the access issue.
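For example (a rough sketch, assuming the container name tun2socks from earlier; the image has iproute2 installed per its Dockerfile), the routing side can be inspected with:

# routing table and policy rules inside the shared network namespace
docker exec tun2socks ip route show
docker exec tun2socks ip rule show
# the TUN device created by tun2socks (tun0 per the Dockerfile defaults)
docker exec tun2socks ip addr show tun0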

Use a different approach for network sharing: Instead of using --network=container:tun2socks, consider connecting both the tun2socks and chrome containers to a custom bridge network. This approach requires ensuring that tun2socks properly routes traffic for the chrome container through the SOCKS proxy without directly sharing the network namespace.

Debugging network traffic: Use network debugging tools (tcpdump, traceroute, ip route, iptables -L) within the tun2socks container to understand how traffic is being routed and why external access might be failing. This can provide insights into whether the traffic is being dropped, misrouted, or otherwise manipulated in a way that prevents access via the host's IP and port.

Debugging network traffic within a Docker container

1. Install Network Utilities in Your Container

First, ensure your container has the necessary tools for debugging network traffic. Common tools include tcpdump, net-tools, iproute2, and traceroute. If these tools are not present in your container, you can install them with the following commands, depending on the operating system.

# Alpine-based containers
apk add --no-cache tcpdump
# Debian/Ubuntu-based containers
apt-get update && apt-get install -y tcpdump

2. Use tcpdump to Capture Traffic

tcpdump is a powerful command-line packet analyzer. To capture traffic on all interfaces within the container, you can use:

tcpdump -i any
# To capture and save the traffic to a file for later analysis:
tcpdump -i any -w /path/to/capture_file.pcap

3. Analyze Traffic with ip and netstat

Check IP addresses and routes: Use ip addr to list IP addresses assigned to interfaces and ip route to show routing tables.
Check open ports and connections: netstat (or ss in newer distributions) can be used to display open ports and existing connections.

netstat -tuln
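For reference, the iproute2 and ss equivalents might look like this:

ip addr show    # IP addresses per interface
ip route show   # routing table
ss -tuln        # listening TCP/UDP sockets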

4. Use traceroute to Trace the Route to a Host

If you’re debugging connectivity issues to a specific host, traceroute can show the path packets take to reach the host.

traceroute example.com

5. Debugging from the Host

Sometimes, it’s useful to observe the container’s network traffic from the host:

List Docker networks: Identify the Docker network your container is using.

docker network ls

Inspect network details: Get detailed information about the network, including which containers are attached.

docker network inspect network_name

Capture traffic on Docker bridge: Use tcpdump on the host to capture traffic on the Docker bridge interface (e.g., docker0 or a custom bridge).

sudo tcpdump -i docker0

6. Use Wireshark for GUI Analysis

For a graphical interface to analyze the captured .pcap files, you can use Wireshark on your host machine. Transfer the .pcap file from your container to your host and open it with Wireshark for in-depth analysis.
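Copying the capture out of the container can be done with docker cp (container name and file path are just the placeholders used in the examples above):

# copy the capture file from the container to the current directory on the host
docker cp tun2socks:/path/to/capture_file.pcap ./capture_file.pcap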