Communication between Docker containers

Dear community,

I have a more academic question about best practices regarding Docker and networks. I’m running the smart home platform openHAB on my Raspberry Pi via Docker Compose; it consists of two containers which are essential to run the system, so having those containers in the same network makes sense to me.

Now I’ve built a custom service which reads some data from openHAB’s REST API. To make this work, I had to add my new service to openHAB’s docker-compose.yml. But from openHAB’s perspective, my new service is just a client, and IMHO openHAB should not know about its consumers - therefore adding the new service to the openHAB network feels a bit odd to me.

When my new service is not placed in the same network as openHAB, my service’s container simply cannot access openHAB via 127.0.0.1:8443 (where 8443 is the port that openHAB listens on).

So my question is: does my “architecture” violate certain Docker/networking best practices and if yes, what would be a cleaner approach?

Thank you :wave:

I am not sure if I am able to follow that, though: how would sharing the same container network make “openHAB know about the consumer”, while not sharing the same container network would not?

Services don’t necessarily have to be in the same compose file to share a network. Just create a network from the CLI, declare it in both compose files as external, and then reference it in the networks section of each service.

Either of those approaches is fine; it’s up to you whether you want to separate the client service out into a different compose file or not.
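A minimal sketch of that external-network approach. The network name `shared-net`, the image names, and the file split are placeholders, not openHAB’s actual setup:

```yaml
# Once, on the host:  docker network create shared-net

# openHAB's docker-compose.yml
services:
  openhab:
    image: openhab/openhab   # placeholder image name
    networks:
      - shared-net
networks:
  shared-net:
    external: true   # created outside this compose file, not managed by it

---
# client's docker-compose.yml (separate compose project)
services:
  client:
    image: my-client:latest  # placeholder image name
    networks:
      - shared-net
networks:
  shared-net:
    external: true   # references the same pre-existing network
```

With `external: true`, neither compose project creates or removes the network on `up`/`down`; both just attach their containers to it.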

Why would you expect a container to reach the port of another container using localhost? This expectation is only correct if both containers are running with --network=host and therefore use the host’s network namespace. In that scenario, localhost on the host and in those containers would be the same localhost.

However, if a container is attached to a bridge network, it has its own network namespace, so localhost in a container is not the same localhost as in another container or on the host.

For container-to-container communication within the same container network, the service or container name should be used.
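To illustrate: assuming a compose service named `openhab` listening on 8443, a client service on the same network reaches it by service name rather than 127.0.0.1 (the image names and the `/rest` path are illustrative assumptions):

```yaml
services:
  openhab:
    image: openhab/openhab        # placeholder image name
    # no "ports:" entry needed for container-to-container traffic
  client:
    image: alpine/curl            # placeholder client image
    # Docker's embedded DNS resolves the service name "openhab"
    # to that container's IP on the shared default network:
    command: ["curl", "-k", "https://openhab:8443/rest"]
```

Both services land on the compose project’s default network automatically, so the name lookup works without any extra network configuration.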


Thank you @meyay for your in-depth explanation!

So, put in other words: in Docker there is no way other than putting two containers A and B in the same Docker network if A needs to access B, even though it implies that B could access A as well.

Is my understanding correct?

I’m just not (yet :wink: ) comfortable with this symmetry, hence this maybe-trivial question.

You asked for the best practice. Attaching containers that communicate with each other in a shared network is the best practice.

Though it is not the only possible solution. You could also put the client container in a separate network and use the host IP and the published host port of the openHAB container to access it. The drawback of this approach is that you might need to publish ports of a container that wouldn’t be required if a container network were used. This is nothing you would want to do if your Docker host is directly exposed to the internet (i.e. has a public IP).
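A sketch of that alternative, assuming the Docker host’s LAN IP is 192.168.1.10 (purely an example value) and placeholder image names:

```yaml
# openHAB's docker-compose.yml: publish the port on the host
services:
  openhab:
    image: openhab/openhab
    ports:
      - "8443:8443"   # port is now reachable from outside Docker, too

---
# client's docker-compose.yml: its own default network, talks to the host IP
services:
  client:
    image: alpine/curl
    command: ["curl", "-k", "https://192.168.1.10:8443/rest"]
```

The published port is reachable by anything that can reach the host, which is exactly the exposure drawback described above.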


Thank you very much, I really appreciate your feedback!

I’m hardly an expert on this, but if A is a “client” only, then it won’t have any ports available for B to access it on. If it’s exposing a port that B could use, then it isn’t purely a client. If you were running A and B on your own desktop, rather than in containers, B still couldn’t access A.

Now, I can imagine a case where A is both a client and a server, but you only want it to use its client for some purpose. In that case, I’d put A in its own container and not expose the inbound port.