Accessing Docker container from the host without publishing port (DFM/DFW)

What are the possible ways to isolate the network communication between a Docker container and the host machine (the one running the Docker daemon)? In other words, is it possible to connect to a Docker container from the host machine without allowing users logged in to the host to reach that Docker service through a published port? Is this at all possible?

I have a service that used to run as a Docker container and that connected to another container through Docker links (that is, ports exposed but not published). Now that service runs on the host system, but it still needs to communicate with a Docker container. Is it possible to somehow isolate this communication from curious host users without implementing token / authentication security measures?

Thank you!

The answer depends on what you want to access and how you access it.
If you want to connect with docker exec, then only users in the docker group (if it exists) or users with sudo rights will be able to do so. If someone needs some sudo privileges but should not have Docker access, you will need to create a group with the required permissions, add them to that group, and remove them from the sudo group.
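For example, a minimal sketch (the container name web is just a placeholder) of the kind of access a docker-group member has without any published port, plus a way to see who is in that group:

  docker exec -it web /bin/sh    # open an interactive shell inside the running container "web"
  getent group docker            # list which host users belong to the docker group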
If you are connecting via a port published on the host, then access is controlled by the app listening on that port, plus any security measures you put in place inside the container between the app and the port. This is the same situation as any app hosted on the machine with an open port.
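As a rough sketch (the image and port numbers are just examples), binding the published port to 127.0.0.1 keeps it off external interfaces, but any user logged in to the host can still reach it, so the app itself has to do the gatekeeping:

  docker run -d --name webserver -p 127.0.0.1:8080:80 nginx   # publish only on the loopback interface
  curl http://127.0.0.1:8080                                   # any local host user can still do this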

If you are looking for what replaced legacy links between containers, then I believe what you are looking for is covered here.

(The following is based mostly on my observations, not documentation)
Basically, all containers start in the default network “bridge”. In this configuration, every container can see every port on every other container, and you can reach each container by its IP address (which of course is likely to change). I believe containers cannot reach each other by name on the default bridge network, because the Docker DNS server is not added to resolv.conf there; the default bridge is special that way.
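For example (a sketch; some_container is a placeholder name), you can look up a container’s current IP on the default bridge and ping it from another container by that address:

  docker inspect -f '{{.NetworkSettings.IPAddress}}' some_container   # prints e.g. 172.17.0.2, may change on restart
  docker run -it --rm debian:jessie ping 172.17.0.2                   # replace with the address printed above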

To have containers access each other by name, you need to place them in a network you create with the docker network command. Creating your own network also lets you partition container access, so that only certain containers can reach the ports of other containers. You create networks with docker network create, and you attach a container to a specific network with the --network argument when running it.

Once your containers are in your own network (which is still of type bridge, so don’t be confused by that; it just isn’t named bridge), each container can discover the others’ IP addresses by name. For that to work predictably, you typically want to assign your container a name with the --name argument when running it, rather than relying on the random name Docker generates.

Putting it all together:

  1. docker network create my_network
  2. docker run -d --network my_network --name pingee debian:jessie sleep 1000
  3. docker run -it --rm --network my_network debian:jessie ping pingee
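And, if it helps, a quick way to double-check the setup using the same names as above:

  docker network inspect my_network                                        # shows pingee attached to the network with its IP
  docker run --rm --network my_network debian:jessie getent hosts pingee   # the name resolves via Docker's embedded DNS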

@stuartz thanks! Yes, I was thinking that relying on operating-system facilities around the Docker connection might be the way to go, but that isn’t portable (I need to support Linux, Mac, and Windows), and I don’t want to resort to using iptables directly.

@andyneff thanks very much for your reply! Very useful indeed; I didn’t know the links mechanism is now considered legacy. Still, what I would like is to connect host-to-container, not container-to-container. Links work fine for container-to-container right now.