I have a few services I want to deploy to a few dozen internal servers and Docker sounds like a perfect choice! The services do the same job on every server. They need to speak to each other on a single host, but they don’t communicate across hosts (maybe they need to in the future, I don’t know yet). They are simple databases and accompanying applications, which get fed data from their hosts, and from time to time, an external server collects their data to create a report. The hosts themselves are each on different subnets, but ultimately in the same network.
So far, so simple.
What I find difficult to understand is the networking.
By default, Docker creates a bridge network. If I keep the defaults, containers end up with the same IP addresses on every host. Would this be a problem? If so, what would you propose instead?
Those bridge networks are, by default, private, non-routed subnets.
For container-to-container communication, user-defined (Docker) networks need to be created and used (docker-compose does this by default). A container can reach another container on the same user-defined network by its service name (docker-compose) or container name; see the sketch below.
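As a minimal sketch of such a compose file (the service names `db` and `app`, the images, and the environment variables are placeholders for your setup, not a prescription):

```yaml
# docker-compose.yml - minimal sketch; names and images are placeholders.
# Compose creates a user-defined network for the project automatically,
# and every service on it can resolve the others by service name.
services:
  db:
    image: postgres:16            # example database image
    environment:
      POSTGRES_PASSWORD: example
  app:
    image: my-app:latest          # placeholder for the accompanying application
    environment:
      DB_HOST: db                 # resolved via Docker's built-in DNS
    depends_on:
      - db
```

With plain `docker run`, the equivalent would be `docker network create` followed by starting both containers with `--network` pointing at that network.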
For host/LAN-to-container communication, container ports need to be published to host ports, which allows accessing the container via host_ip:host_port. If several web applications are supposed to be reachable on the same port, it's good practice to use a reverse proxy like Traefik with published ports and hostname- or path-based rules to forward traffic to the target container.
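As a hedged illustration extending the sketch above: the `app` service publishes a port directly, while the `traefik` service and the labels show the reverse-proxy approach. The hostnames, ports, and images are examples only:

```yaml
# Sketch only; hostnames, ports, and images are examples.
services:
  app:
    image: my-app:latest
    ports:
      - "8080:80"                 # host port 8080 -> container port 80
      # - "192.168.1.10:8080:80"  # optionally bind to a specific host IP
    labels:
      # Traefik v2 routing rule: requests for this hostname go to this container
      - traefik.http.routers.app.rule=Host(`app.example.internal`)
      - traefik.http.services.app.loadbalancer.server.port=80

  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"                   # one published port shared by all web apps
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # lets Traefik discover containers
```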
I highly recommend this free self-paced Docker training. It provides a solid foundation in the concepts and insight into how things can be done.
Those slides are really helpful and go into much detail, thank you for that!
I take it there is no problem with containers having the same IP addresses on different hosts, as long as there is no route from the wider network into the container subnets through the Docker host. That's good to know!
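And if my containers ever do need to talk across hosts, I suppose I could avoid the overlap by pinning a distinct subnet per host, e.g. something like this in each host's compose file (the subnet value is just an example to vary per host):

```yaml
# Give the project's default network its own per-host subnet, so container
# addresses stay unique across hosts. Example range only.
networks:
  default:
    ipam:
      config:
        - subnet: 172.30.1.0/24   # e.g. 172.30.2.0/24 on the next host
```

Alternatively, the daemon-wide `default-address-pools` setting in `/etc/docker/daemon.json` steers which ranges Docker hands out to new networks.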