Host.docker.internal in production environment

Hello everyone,
I am trying to set up OpenTelemetry for my core Java application. I am using Docker to launch containers for the Grafana, Tempo, Loki and Prometheus services, and I am using the Prometheus metrics exporter. To configure that in the development environment I have added a scrape job to the Prometheus config like below:

- job_name: 'my-service'
  static_configs:
    - targets: ['host.docker.internal:9464']

I am also providing the VM options from the run configuration of my main class:

-Dotel.metrics.exporter=prometheus
-Dotel.instrumentation.runtime-metrics.enabled=true
-Dotel.exporter.prometheus.host=localhost
-Dotel.exporter.prometheus.port=9464

Now it is working fine, and I want to test this in the production environment. I have read the Docker Desktop documentation and learned that host.docker.internal cannot be used in the production environment.
What should I specify instead of host.docker.internal as the hostname in the production environment?
My machine also has a dynamic IP address, so is there any way to test it with my IP itself?

Please feel free to give your suggestions and views on this. Thank you in advance.

Hahaha! Sorry, welcome to the club of the “real” Docker users :partying_face:

Docker CE on your server does not enable you to connect to your host’s localhost.

You can try network mode host. It’s not optimal because any port opened by the scraping container will be opened externally. But the container is within the host’s network and can therefore connect to other ports on the host.
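For example, a minimal Compose sketch for running Prometheus with host networking (the image and the config path are just placeholders):

services:
  prometheus:
    image: prom/prometheus
    network_mode: host        # shares the host's network namespace
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

With host networking the scrape target can simply be localhost:9464, because localhost inside the container is the host itself.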

When you want to do more monitoring of the host (e.g. sensors), you can enable more access rights with --privileged. Be aware that this is not available in Docker Swarm. Another trap for the to-be Docker pro :wink:

You can reach the Linux host by using one of the host’s IPs. It is not uncommon to use the IP of the docker0 network interface for that purpose.
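For example (a sketch; 172.17.0.1 is only the usual default, so check the actual address on your host, and make sure the exporter listens on an interface reachable from the bridge, not only on 127.0.0.1):

# on the host: show the docker0 address (usually 172.17.0.1)
ip -4 addr show docker0

# prometheus.yml: point the scrape job at that address
- job_name: 'my-service'
  static_configs:
    - targets: ['172.17.0.1:9464']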

Once again, a question that inspired me to try something. Actually, I have been thinking for a while about trying to implement a feature like the one in Docker Desktop that lets you use host.docker.internal to connect to the localhost of the host machine. I think I have done it.

So host.docker.internal is not just a hostname that directly points to the IP address of your host. It points to the “host gateway”, which can be configured but is 172.17.0.1 by default, the gateway of the default Docker bridge. I will get back to this later.

You can actually add multiple aliases to that endpoint in your container using the special keyword “host-gateway”:

docker run --rm --add-host myhost:host-gateway nicolaka/netshoot ping myhost

It is important to use your host alias on the left side and the keyword on the right side of the --add-host value. Now the ping command will return something like 192.168.65.254 (depending on the subnet of Docker Desktop) and not 172.17.0.1, because in Docker Desktop the --host-gateway-ip parameter is set to a proxy IP that forwards any request sent to any port to your host machine’s localhost, probably using Unix sockets and TCP sockets together.
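If you use Compose instead of docker run, the equivalent of --add-host is extra_hosts; a minimal sketch (service name and image are placeholders):

services:
  test:
    image: nicolaka/netshoot
    command: ["ping", "myhost"]
    extra_hosts:
      - "myhost:host-gateway"   # same left/right rule as with --add-host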

You could use --host-gateway-ip on Linux without Docker Desktop, but that alone wouldn’t help: you would also need to run a proxy server on that IP that forwards every request to the host’s localhost on the same port. You could probably do that, but why forward every request when you can limit access to the loopback interface to specific ports for specific containers?
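For reference, that daemon option could be set in /etc/docker/daemon.json roughly like this (a sketch; as said, on its own it only changes what the host-gateway keyword resolves to):

{
  "host-gateway-ip": "172.17.0.1"
}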

This is my test compose file:

services:
  to-localhost:
    image: alpine/socat
    command:
      - UNIX-LISTEN:/var/sockets/to-unix.sock
      - TCP-CONNECT:127.0.0.1:9999
    volumes:
      - ./sockets:/var/sockets
    network_mode: host

  to-unix:
    image: alpine/socat
    command:
      - TCP-LISTEN:9999,reuseaddr,fork
      - UNIX-CONNECT:/var/sockets/to-unix.sock
    volumes:
      - ./sockets:/var/sockets
    depends_on:
      - to-localhost
    networks:
      default:
        aliases:
          - host.docker.internal

  test:
    image: nicolaka/netshoot
    command:
      - sleep
      - inf
    init: true
    depends_on:
      - to-unix

You can run a test service (I used Python):

python3 -m http.server --bind 127.0.0.1 9999

Then you can test it in another terminal:

docker compose exec test curl host.docker.internal:9999

Explanation:

  • The “test” container is really just a container that has curl so I can test the ports. It is important to attach this container to the same network as the container called “to-unix”.
  • The container called “to-unix” uses alpine/socat to forward requests from TCP port 9999 to the Unix socket “/var/sockets/to-unix.sock”, but that socket is in a folder which is mounted from the host.
  • The “to-unix” container also has a network alias host.docker.internal, but you could use any name. I just wanted to show that you can use this too.
  • The container “to-localhost” uses the host network, mounts the unix socket from the host and forwards all requests from that unix socket to the host machine’s localhost.

Some additional notes:

  • You could create multiple forwarder containers like “to-9464” and “to-80” (a sketch follows after this list), but then you couldn’t use the same network alias for all forwarders…
  • You could also use this solution to forward UDP ports, but I only demonstrated TCP ports.
  • I used a single compose file, but you could have separate compose projects for the forwarders and when you want a container to have access to a forwarder, attach the forwarders’ network to the container.
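A port-specific forwarder pair like the “to-9464” mentioned above could look roughly like this, as additional services in the same compose file (an untested sketch; the service names and the otel-host alias are made up, and each forwarder needs its own socket file and its own alias):

  to-localhost-9464:
    image: alpine/socat
    command:
      - UNIX-LISTEN:/var/sockets/to-9464.sock,fork
      - TCP-CONNECT:127.0.0.1:9464
    volumes:
      - ./sockets:/var/sockets
    network_mode: host

  to-9464:
    image: alpine/socat
    command:
      - TCP-LISTEN:9464,reuseaddr,fork
      - UNIX-CONNECT:/var/sockets/to-9464.sock
    volumes:
      - ./sockets:/var/sockets
    depends_on:
      - to-localhost-9464
    networks:
      default:
        aliases:
          - otel-host   # a distinct alias per forwarder

A Prometheus container on the same network could then scrape otel-host:9464.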

Thank you so much for your answer.
I would like to know whether I can replace host.docker.internal with the IP address of my machine when I implement this setup in the production environment.

Yes, you can use the IP of the host, but not 127.0.0.1.

On Linux, use hostname -I on your host to see all assigned IPs.
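The production scrape job would then look like the development one, just with the host IP substituted (a sketch; 10.0.0.5 stands for one of the addresses shown by hostname -I):

- job_name: 'my-service'
  static_configs:
    - targets: ['10.0.0.5:9464']

Since your IP is dynamic, the docker0 address (172.17.0.1 by default) mentioned earlier is usually the more stable choice.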

Thank you. One more thing: while testing in the development environment I removed the system property for the host, i.e. -Dotel.exporter.prometheus.host=localhost, and passed only the port, and it worked successfully. So in the production environment can I go with the same approach, i.e. without providing any host in the VM options, but providing <my-ip>:9464 in the scrape jobs?
Or should I set the system properties with host as my IP in the VM options also?

Should be fine. Just try it :slightly_smiling_face:
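For reference, a minimal sketch of that setup: if I remember correctly, the Prometheus exporter listens on all interfaces (0.0.0.0) when no host is set, so the VM options can stay as below and only the scrape target needs the host (or docker0) IP.

-Dotel.metrics.exporter=prometheus
-Dotel.instrumentation.runtime-metrics.enabled=true
-Dotel.exporter.prometheus.port=9464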