Hey Docker community!
I’m running a local multi-node Elasticsearch cluster for study purposes, but using a very DIY setup: all nodes (node1, node2, node3) run inside a single Debian-based Docker container. I’m not using the official Elastic image; I’m manually installing everything from the .tar.gz binaries downloaded from elastic.co.
Here’s how I start the container:
docker run --privileged -dit `
--name elastic-debian `
--hostname elastic-learning `
-p 9200:9200 `
-v "D:\Docker\Volumes\elastic-data:/opt/elastic" `
debian:11 tail -f /dev/null
Then I docker exec into it and do everything manually: install packages, create users, extract the Elastic tarball, configure nodes, set ports, etc.
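For anyone curious, the in-container steps look roughly like this (a sketch, not my exact history; the version number and the layout under /opt/elastic are illustrative):

docker exec -it elastic-debian bash

# inside the container (Elasticsearch refuses to start as root, hence the user)
apt-get update && apt-get install -y curl
useradd -m elastic
cd /opt/elastic
curl -LO https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.14.0-linux-x86_64.tar.gz
tar -xzf elasticsearch-8.14.0-linux-x86_64.tar.gz
cp -r elasticsearch-8.14.0 node1    # repeat for node2 and node3
chown -R elastic:elastic /opt/elastic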
Each node runs in its own folder and listens on its own port (config sketch after the list):
- node1: 9200 (published)
- node2: 9201 (internal only)
- node3: 9202 (internal only)
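For reference, each node gets its own elasticsearch.yml; here’s roughly what node2’s looks like (a minimal sketch: the cluster name, transport ports, and paths are my placeholders, and I disable security only because this is a throwaway study cluster):

cat > /opt/elastic/node2/config/elasticsearch.yml <<'EOF'
cluster.name: learning-cluster
node.name: node2
network.host: 0.0.0.0          # non-loopback so Docker port publishing works (note: this enables production bootstrap checks)
http.port: 9201
transport.port: 9301           # node1/node3 use 9300/9302
discovery.seed_hosts: ["localhost:9300", "localhost:9301", "localhost:9302"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
path.data: /opt/elastic/node2/data
path.logs: /opt/elastic/node2/logs
xpack.security.enabled: false  # local study only; keep security on for anything real
EOF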
Everything is working great inside the container: the nodes form a cluster, _cat/nodes sees everyone, and Kibana (running externally) connects via localhost:9200 just fine.
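If it helps, this is how I sanity-check the cluster from the Windows host through the single published port (?v just adds column headers):

curl "http://localhost:9200/_cat/nodes?v"
curl "http://localhost:9200/_cluster/health?pretty"

All three nodes show up even though only 9200 is published, because node1 answers the request from its view of the cluster state.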
My question is: is there any real need to publish (-p) the other node ports (9201, 9202) to the host, or is it fine to publish only node1’s port?
The idea is that node1 acts as the coordinating node for clients, while node-to-node traffic uses the transport ports and never leaves the container anyway; the other nodes’ HTTP ports would only ever matter to external clients. Since the cluster API is reachable through node1, my instinct says: “no need to publish the others.”
But I’d love confirmation from folks with Docker and networking experience: any admin/debug/monitoring use cases that would justify publishing other node HTTP ports? Or is this one-port setup perfectly acceptable for local clusters?
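For completeness, if I ever did need direct host access to node2/node3’s HTTP ports, my understanding is that published ports can’t be added to a running container, so I’d have to recreate it with the extra -p flags:

docker run --privileged -dit `
--name elastic-debian `
--hostname elastic-learning `
-p 9200:9200 -p 9201:9201 -p 9202:9202 `
-v "D:\Docker\Volumes\elastic-data:/opt/elastic" `
debian:11 tail -f /dev/null

Since the data lives on the bind mount, the nodes should come back up exactly as before.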
Also, if anyone wants to see the hilariously frustrating StackOverflow thread where this question was misunderstood and closed by people who read the word “expose” and lost their minds, here you go:
Thanks in advance to the Docker folks who actually read!