Unbound container ports accept connections when using "Expose container ports on localhost"

Expected behavior

Connecting to a published port that is not bound inside the container should yield a connection refused error

Actual behavior

The connection is accepted and then immediately closed when data is received.
This breaks external port-based health checks (e.g. waiting for a container to initialize in integration tests).
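
For reference, the kind of port-based readiness check this defeats looks roughly like the following (a minimal Go sketch, not our actual test code; the address and timeout are arbitrary). Because the connection is accepted, the Dial succeeds even when nothing is listening inside the container, so the check passes too early:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForPort polls addr until a TCP connection succeeds or the timeout expires.
    // With the behaviour described above, Dial succeeds even though nothing is
    // listening inside the container, so this check reports "ready" too early.
    func waitForPort(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForPort("127.0.0.1:12345", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }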

Information

Windows 10 build 14372.rs1_release.160620-2342
Docker for Windows v1.12.0-rc3-beta18 build 5226 ec40b14

Steps to reproduce the behavior

  1. Enable the experimental feature “Expose container ports on localhost” (Settings -> Network) and apply
  2. Start any container with a port published on the host but not bound to anything inside the container:
    $ docker run --rm -it -p 12345:12345 busybox sh
  3. In another terminal:
    $ curl 127.0.0.1:12345 -v

    curl: (52) Empty reply from server

It should instead be something like:
curl: (7) Failed to connect to 127.0.0.1 port 12345: Connection refused

Thanks, I’ve created an internal issue to track this!

Michael

This is still a problem in beta20, but the “Expose container ports on localhost” option is now gone (it is forced on)

Thanks for the report! Sorry for the unexpected change in behaviour between betas.

I think the difference in behaviour is caused by the two different port-exposure techniques used by the Docker engine itself: hairpin NAT and the userland proxy.

In the case of hairpin NAT, the incoming SYN will be rewritten by the NAT and sent to the container IP. If the container is not listening then the kernel will respond with a RST which will be rewritten by the NAT and returned to the client.

In the case of userland proxying, the incoming SYN is ACKed first by the kernel when the proxy calls accept, and then the proxy sends a second SYN to the container IP. The kernel will RST the second SYN but it’s too late to prevent the first SYN being ACKed. The initial connection is therefore accepted and then immediately closed.
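
To make the userland-proxy case concrete, here is a minimal sketch of how such a proxy typically works (illustrative only, not the actual docker-proxy implementation; the listen and backend addresses are placeholders). The client connection is fully established by Accept() before the proxy dials the container, so a refused backend connection can only be reported by closing the already-accepted client connection:

    package main

    import (
        "io"
        "log"
        "net"
    )

    // proxy accepts client connections on listenAddr and forwards them to backendAddr.
    // The client connection is already established when Accept returns, before the
    // proxy knows whether the backend will accept a connection of its own.
    func proxy(listenAddr, backendAddr string) error {
        l, err := net.Listen("tcp", listenAddr)
        if err != nil {
            return err
        }
        for {
            client, err := l.Accept() // client sees an accepted connection here
            if err != nil {
                return err
            }
            go func() {
                defer client.Close()
                backend, err := net.Dial("tcp", backendAddr) // may be refused (RST)
                if err != nil {
                    return // too late: the accepted client connection is simply closed
                }
                defer backend.Close()
                go io.Copy(backend, client)
                io.Copy(client, backend)
            }()
        }
    }

    func main() {
        log.Fatal(proxy("0.0.0.0:12345", "172.17.0.2:12345"))
    }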

Writing health checks is definitely a tricky problem. I think the definition of “healthy” should ideally be that the application is able to do useful work. If an application consists of multiple containers (e.g. a web server and a database) it’s possible that the web server could accept connections before the database is really ready to commit updates (the docker/example-voting-app behaves like this). Ideally health checks would verify behaviour end-to-end to be really sure everything is ok.

Docker 1.12 has a new health-check feature which can run a check command periodically for you. Have a look at the new HEALTHCHECK instruction – it tells Docker how to test a container to check that it is still working, not merely that the process is running. I think it’s worth the effort to create end-to-end health checks because they could also be used later to monitor the application in production.
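
As a sketch (the /health endpoint is hypothetical; ideally it would exercise the database as well as the web server), such a check might look like this in a Dockerfile:

    HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
      CMD curl -f http://localhost/health || exit 1

The resulting health status shows up in docker ps and docker inspect, so integration tests can wait on it instead of probing the published port from outside.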