Ipvlan, macvlan ... best way for client on host to connect with server in container

Such a difficult topic to get answers for. I posted two similar issues on GitHub (267, 308), but I see other similar issues still open after years.

I am well aware of the isolation philosophy Docker employs for containers; however, I'm seeing inconsistencies. An nginx container based on Alpine doesn't have this problem: I can open a connection to it from the host without an issue.

I have a server in a standalone docker container and I need a client on the host to connect to it. The host runs an nginx server to proxy websocket connections to a websocket server in a container. I should be able to run multiple containers and load balance between them, with nginx on the host routing requests to different containers depending on the load.

I know this could be done with Compose, moving nginx into a container as well, except that the websocket server processes also include a Python program that needs to reside in the same container as the websocket server.

What can I do to allow host-only communication to containers? It should be as simple as -p 127.0.0.1:9992:9992, but it doesn't work. Although docker-proxy seems to map ports correctly, host client traffic never reaches the server in the container.

You should launch the container with the docker run -p option to publish it on the host’s IP address. Then the client should be configured to talk to a DNS name for the host (or localhost if you can guarantee that both parts are on the same physical system) and the specified port. [You already know this, but I’ve scrolled past enough macvlan posts tonight…]

If you use docker run -p 127.0.0.1:12345:80, then the two parts must be on the same physical system, and the client must be configured to reach the server at localhost:12345 (or 127.0.0.1 or ::1, but not any other DNS name or IP address); Docker forwards that host port to the normal HTTP port 80 inside the container.
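As a concrete illustration, using the stock nginx image (the container name and host port below are just placeholders):

```bash
# Publish container port 80 on the host's loopback address only
docker run -d --name lb-test -p 127.0.0.1:12345:80 nginx

# From the same host: served by nginx inside the container
curl http://localhost:12345/

# From any other machine: fails, because the publish is bound to
# 127.0.0.1 on the host rather than to an externally reachable IP
```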

Regardless of what you use with docker run -p, the process inside the container generally must bind(2) to 0.0.0.0. If it binds to 127.0.0.1 it will be inaccessible to anything other than processes in the same container (or in the special case of Kubernetes, other containers in the same Pod).
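To make the bind(2) point concrete, here is a minimal Python echo server in the spirit of this thread's tcpEcho.py; this is a sketch, not the actual script, and the port is just the example number from this thread:

```python
import socket

HOST = "0.0.0.0"  # accept on all interfaces; with HOST = "127.0.0.1" the
                  # server only accepts connections addressed to the
                  # container's own loopback, so traffic forwarded by
                  # docker-proxy (which arrives on the container's bridge
                  # address) never reaches it
PORT = 9992       # example port from this thread

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    conn, addr = srv.accept()
    with conn:
        print("connection from", addr)  # from the host this is typically 172.17.0.1
        data = conn.recv(1024)
        conn.sendall(data)              # echo it back
```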

In this issue, your tcpEchoTest script tells tcpEcho.py to listen only on 127.0.0.1, but connections forwarded by Docker arrive addressed to the container's own bridge IP (something like 172.17.0.x), not to its loopback address, even when they originate on the same physical host, so a socket bound to 127.0.0.1 never sees them.

A full reproduction case like the one you have in this issue is very useful for helping others understand your setup.

You might look into netcat as a prebuilt tool for very simple TCP clients and servers.
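For instance, with the BSD-flavored nc (flag spellings vary between netcat variants, so check your local man page):

```bash
# Terminal 1: a throwaway server listening on TCP port 9992
nc -l 9992

# Terminal 2: a client; whatever you type is delivered to the server
nc 127.0.0.1 9992
```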

If what you said above is accurate, what you’re telling me is that it isn’t possible to accomplish what I am trying to do. The 0.0.0.0 host IP address will NOT prevent connections from sources external to the host, and that is why I can’t use 0.0.0.0. It is also what is leading me to consider macvlan or ipvlan as alternatives, which use a different IP address on the host. Isolation in that case is provided on the host.

I am not a network expert and don't understand networking in great detail (for example, I don't comprehend iptables entries), but I can usually work my way through most networking issues with a little help from Google searches.

As I see it I have 2 options:

1. Figure out how to implement an ipvlan network (there is less information on ipvlan than on macvlan, and ipvlan looks like a special case of macvlan) using a virtual IP address bound to a host interface; see the sketch after this list

2. Come up to speed on docker-compose and re-engineer the entire container system
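For reference, the ipvlan approach in option 1 would look roughly like the sketch below. All addresses and the parent interface are placeholders for your network, and note the standard caveat: with macvlan/ipvlan the host itself generally cannot reach the containers through the parent interface without additional configuration, which is exactly the host-to-container traffic in question here.

```bash
# Create an ipvlan network riding on a host NIC (all values are examples)
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 -o ipvlan_mode=l2 ipvlan-net

# Give the container its own address on the LAN
docker run -d --network ipvlan-net --ip 192.168.1.50 my-ws-image  # hypothetical image
```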

I ultimately intend to employ the 2nd option, but wanted to start with a single container approach first as an intermediate migration path from the existing non-dockerized system now in production.

I estimate a few weeks of learning and testing will be required for the 2nd option, whereas the first is now working, except for the websocket proxy. So close, but perhaps not.

I see quite a few similar questions posted concerning host → container networking, yet I see no tutorials or explanations of how to accomplish this. I also see no explanation of why some images, such as the nginx Alpine image, allow host-to-container connections while NOT allowing connections from outside the host. If it's possible for that image, why isn't it for others? Why does docker run -d --name test -p 127.0.0.1:8080:80 nginx work and allow only the host to connect, if what you said above is true?

Although it may simply be a “corner case” not explicitly excluded, it makes no sense to allow a -p 127.0.0.1:xx:xx run argument if the container treats that as “my localhost/container only” rather than as a host IP:port. There would be no point in using -p in that case, since -p is explicitly for mapping a host port to a container. It might be a useful case to allow IF it triggered some special networking setup behind the scenes to expose such ports to the host only, but that could break the isolation model Docker has established and is against changing in any way.

So the important thing here is that there are two separate layers, both of which bind a socket to accept inbound network connections.

If the process inside the container binds to 127.0.0.1, it’s unreachable from outside the container, period. It should bind to 0.0.0.0, which is generally the default. There’s still the layer of Docker outside of it, so absent a docker run -p option, even if it’s bound to “any IPv4 address”, it’s unreachable.

docker run -p can also specify a specific host IP address to bind to. It accepts inbound connections on the specified host port and forwards them to the specified container port.
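One way to confirm what -p actually bound at the host level is docker port; with a loopback-only publish it reports the 127.0.0.1 binding (the container name and image here are placeholders):

```bash
docker run -d --name echo-test -p 127.0.0.1:9992:9992 tcp-echo-image
docker port echo-test
# 9992/tcp -> 127.0.0.1:9992
```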

I think your use case should be addressed fine if you docker run -p 127.0.0.1:xxxx:xxxx, and inside the container listen on all addresses. So change the tcpEchoTest script to listen on 0.0.0.0. It should be reachable from the same physical host, and unreachable from other hosts.
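With the script changed to bind 0.0.0.0 and the container published as in the example above, an end-to-end check might look like this (using nc as suggested earlier):

```bash
# From the same host: the echo comes back
printf 'hello\n' | nc 127.0.0.1 9992

# From any other machine: connection refused, because nothing is
# bound to the host's external IP on port 9992
```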

Awesome! Thanks for the tip and explanation; that does indeed work. It also explains how the nginx container works. That should resolve my issue, if the test case is any indication.

Greatly appreciated!