How to reach localhost on host from docker container?

Please help a docker newbie

Situation: I run a NodeJS app with the monero-javascript library to connect to a localhost monero-wallet-rpc running on my host OS.

Problem: I can not connect!

My system:

  • up to date Debian 10
  • Docker version 20.10.7, build f0df350
  • docker-compose version 1.29.2, build 5becea4c

My docker-compose file:

version: "3.8"
services:
  app:
    build: .
    env_file:
      - .env
    command: ["npm", "start"]
    restart: always
    ports:
      - 80:3000
    external_links:
      - mongo
    extra_hosts:
      - "host.docker.internal:host-gateway"

  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongodb/data:/data/db
    env_file:
      - .env

  mongo-seed:
    build: ./mongo-seed/.
    links:
      - mongo

volumes:
  db-data:
    name: mongodb

I thought extra_hosts: - "host.docker.internal:host-gateway" was the trick. Sadly, it does not work.

Apart from my connection problem, I would also like suggestions for improving my configuration file.
Is it possible to seed a MongoDB database directly with data? So far I have done it with a separate container (mongo-seed).


For Docker running on Windows I have used host.docker.internal directly (without the extra_hosts entry in the docker-compose.yml) to connect to services running on the Windows host from inside a Docker container.

For Docker running on Linux I have used 172.17.0.1, which is the Docker host on Docker's default network. Maybe not the best way, but it works.
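If it is unclear whether the host is reachable at all, a quick test from inside the container can narrow it down. This is only a sketch: it uses the app service name from the compose file above, 38083 is just an example port, and it assumes curl is available in the image.

docker-compose exec app getent hosts host.docker.internal
docker-compose exec app curl -v http://172.17.0.1:38083/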



For the other problem (seeding a MongoDB with data) I have done a backup/restore from one mongo server to another, using this command to create a dump on Server A in the directory ./dump/<sourcedatabase>:

mongodump --host "<sourcehost>" --username "<sourceuser>" --password "<sourcepassword>" --db "<sourcedatabase>"

and this to restore it to Server B:

mongorestore --host "<destinationhost>" --db "<destinationdatabase>" --username "<destinationuser>" --password "<destinationpassword>" dump/<sourcedatabase>/

Maybe this can be used as a blueprint?
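Regarding seeding the database directly: another common pattern is a one-shot seed container that runs mongoimport against the mongo service. A rough sketch, assuming a hypothetical seed.json file and placeholder database/collection names:

  mongo-seed:
    image: mongo
    depends_on:
      - mongo
    volumes:
      - ./mongo-seed/seed.json:/seed.json:ro
    command: >
      mongoimport --host mongo --db mydb --collection mycollection
      --file /seed.json --jsonArray
    restart: "no"

The container exits once the import is done; mydb, mycollection and seed.json are placeholders for your own data.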


Thanks. However, I am not able to fix it yet.
What do you mean by "without the extra_hosts"?
This way creates a syntax error:

version: "3.8"
services:
  app:
    build: .
    env_file:
      - .env
    command: ["npm", "start"]
    restart: always
    ports:
      - 80:3000
    external_links:
      - mongo
    "host.docker.internal:host-gateway"

From inside my container:

root@45d96a2fc103:/usr/src/app# cat /etc/hosts 
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.1      host.docker.internal
172.18.0.2      45d96a2fc103
root@45d96a2fc103:/usr/src/app# getent hosts host.docker.internal
172.17.0.1      host.docker.internal

My .env file for my node app:

MONERO_WALLET_RPC_URI = "host.docker.internal:38083"

I don’t know why I can’t connect. I have looked into it for multiple hours.
Could it be a routing/firewall problem on the host?
I am using Qubes OS, which can be an exotic animal.

If you need more information/output I will post it here.

To me it looks like the

extra_hosts:
      - "host.docker.internal:host-gateway"

works. However, the connection still fails?


Good morning,

sorry for confusing you.

I meant it this way:
remove the lines

extra_hosts:
      - "host.docker.internal:host-gateway"

from your docker-compose.yml

replace the lines

external_links:
  - mongo

by

links:
  - mongo

within your docker-compose.yml's app section, because you want to connect to a container defined within the same docker-compose.yml.

To connect to the mongo container from the app container, use the hostname mongo.

To connect to your host's localhost from within a container, use 172.17.0.1 (as you are running on Linux). It would be host.docker.internal if you were running Docker on Windows.

    extra_hosts:
      - "host.docker.internal:host-gateway"

would work as:

    extra_hosts:
      - "host.docker.internal:172.17.0.1"

It is neither a requirement nor wrong to use extra_hosts to inject this name resolution into the container's /etc/hosts. Alternatively, you can just use the host's hostname or IP.

If that does not work, replace 172.17.0.1 (the IP of docker0) with 172.18.0.1 (the IP of docker_gwbridge).
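Putting that together, the app section could look roughly like this. It is only a sketch based on the advice above; the IP is an assumption for a default bridge setup and may need to be adjusted:

  app:
    build: .
    env_file:
      - .env
    command: ["npm", "start"]
    restart: always
    ports:
      - 80:3000
    links:
      - mongo
    extra_hosts:
      - "host.docker.internal:172.17.0.1"   # or 172.18.0.1 if docker0 does not work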


Do I need to adjust my .env file as well?

How can I check whether Docker is the problem?

I created a stackoverflow question with much more detail. This bug is killing me. Maybe you can have a look:
https://stackoverflow.com/questions/68630037/docker-nodejs-app-connection-problem-to-host-monero-wallet-rpc

Your .env file addresses the host by the hostname you inject into the container using extra_hosts. Like I wrote, if the IP of docker0 doesn't work, try the IP of the docker_gwbridge. Judging by the /etc/hosts file of your node container, you should use the IP of the docker_gwbridge.

    extra_hosts:
      - "host.docker.internal:172.18.0.1"

Does not work!
Please have a look at the stackoverflow question.

My error:

MoneroRpcError: RequestError: Error: socket hang up

at this line:

await moneroWalletRPC.openWallet(
  process.env.MONERO_WALLET_USER,
  process.env.MONERO_WALLET_PASSWORD
);
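One way to narrow this down is to bypass monero-javascript and call the wallet RPC directly from inside the container. A sketch, assuming curl is available in the image, that the RPC listens on 38083 as in the .env above, and adding --digest -u user:password if monero-wallet-rpc was started with --rpc-login:

curl -v http://host.docker.internal:38083/json_rpc \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"0","method":"get_version"}'

If this also hangs up, the problem is on the host/network side rather than in the Node code.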

Have you found any solution yet?

Pathetic lack of clarity! So is an IP required or not? Can you use internal Docker names or not? It seems not, but if not, then why? Why should you not be able to inject hostnames of other Docker containers into /etc/hosts? If a container is called "app", then other containers can "ping app" via Docker networking. So why can I not inject "app" into /etc/hosts?

When a question is not clear, there is a good chance the answers will not be clear either. It always depends on the exact case.

The title of this topic is “How to reach localhost on host from docker container”.
Since there was no example of how the OP started the service on the host, and based on the docker-compose.yml and how the OP tried to use the extra hosts, I suppose everyone assumed the host service (monero-wallet-rpc) was listening on every available IP address on the host. Later @scottbarnes2 shared a link to stackoverflow where the example shows that it was actually bound only to the host's localhost (127.0.0.1).

It means the container will never be able to reach it unless the container runs on the host's network using network_mode: host (or the service is reconfigured to listen on an address the container can reach).

See Compose file version 3 reference | Docker Docs
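For reference, a minimal sketch of that option; note that with network_mode: host the ports mapping is ignored and it cannot be mixed with links:

services:
  app:
    build: .
    env_file:
      - .env
    command: ["npm", "start"]
    network_mode: host   # container shares the host's network, so 127.0.0.1 is the host's localhost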

If you are referring to the internal DNS resolution, the answer is yes. You can use that between containers running on the same custom Docker network.

Why do you think you shouldn't? You can add extra hosts, but that doesn't help you reach a port on a network that is not reachable from the container.

Since the OP could have a different problem than yours, can you explain what your issue is, exactly? Please add details to help us understand what you want to achieve, how you tried to solve it, and what the error messages are.


Apologies for any impatience. I’ll stop trying to convert OP’s issue into my issue, and just explain my exact issue:

I want an FQDN to resolve using /etc/hosts plus a self-signed SSL cert. An app in the container needs to access the FQDN, but this FQDN needs to be simulated in a dev environment. So on the host I have the FQDN in /etc/hosts, and I have a self-signed cert correctly set up with Docker for each FQDN in question. The FQDNs are bound to each Docker container running on the host. Of course each container can ping the others using their service names, but how can I allow each container to connect to another using its FQDN?
So the idea was to add an /etc/hosts entry per container that maps each FQDN to the correct IP of the respective container. However, those containers are set up programmatically, so I'd rather not statically reference each IP in the extra_hosts: entry. How can I do this dynamically (where each FQDN corresponds to a service name)?

If you add a host to your hosts file on the host machine, containers will not be able to use that unless they use the host network. Since you wrote that you would not use the extra_hosts parameter, and using the host network would be even more problematic, I would say try to configure a local DNS server and set that DNS server on the Docker daemon. dockerd | Docker Documentation

You can search for something that dynamically handles your domain names. I found this project but I haven't tried it, so I have no idea whether it works or not.

If it doesn't work, try to search for another.
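For the daemon-level DNS part, the sketch would be something like this in /etc/docker/daemon.json, followed by a restart of the Docker daemon; the address is a placeholder for wherever your local DNS server listens:

{
  "dns": ["192.168.1.10"]
}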


Hello,
another simple and useful solution to this problem with Compose is to use "network aliases": Networking in Compose | Docker Documentation

version: "3.7"

services:
  api:
    links:
      - nginx:api.localhost
  nginx:

With that alias, the "nginx" container will be reachable from the "api" container at http://api.localhost.
No need to update /etc/hosts inside the "api" container.
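Since links is a legacy option, the same idea can also be sketched with network aliases on a user-defined network, which is the form described on the linked Networking-in-Compose page (the backend network name is just an example):

version: "3.7"

services:
  api:
    networks:
      - backend
  nginx:
    networks:
      backend:
        aliases:
          - api.localhost   # extra DNS name for the nginx container on this network

networks:
  backend: {}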

  extra_hosts:
    - "host.docker.internal:host-gateway"

Does work under Linux, if you don't want to use network_mode: host in Docker.

However, make sure live-restore is set to false (it should be false by default, but maybe you changed the /etc/docker/daemon.json file); otherwise host.docker.internal does not seem to resolve anymore under Linux.


This bug is still not fixed in any 20.10 release: Fixed docker.internal.gateway not displaying properly on live restore by sanchayanghosh · Pull Request #42785 · moby/moby · GitHub
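In other words, if you have an /etc/docker/daemon.json, it should contain roughly this (or simply omit the key, since false is the default):

{
  "live-restore": false
}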

I've been pulling my hair out here. Using:

  extra_hosts:
    - "host.docker.internal:host-gateway"

works fine on the MacBook, with the env file for my project pointing to host.docker.internal to reach the Postgres database running on the local host. But it does not work on the Linux server. Any clue on how to change this?

The Mac is running:

docker version
Client:
 Cloud integration: v1.0.22
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:46:56 2021
 OS/Arch:           darwin/arm64
 Context:           default
 Experimental:      true

Server: Docker Desktop 4.5.0 (74594)
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:07 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

openSUSE 15.4 is running:

docker version
Client:
 Version:           20.10.17-ce
 API version:       1.41
 Go version:        go1.17.13
 Git commit:        a89b84221c85
 Built:             Wed Jun 29 12:00:00 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.17-ce
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.13
  Git commit:       a89b84221c85
  Built:            Wed Jun 29 12:00:00 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.6.12
  GitCommit:        a05d175400b1145e5e6a735a6710579d181e7fb0
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-ga916309fff0f
 docker-init:
  Version:          0.1.7_catatonit
  GitCommit:

First of all, you don't need the extra_hosts entry on the Mac, since host.docker.internal is built into Docker Desktop.
And second, host-gateway will not be your original host IP, but the IP of the gateway on the Docker network that your container uses. Your host is the gateway, so you can access services listening on that IP. This means that if the database is listening only on the host IP, and not on every IP address or at least on the gateway IP of the Docker network, it will not work. nslookup will also not work, since the extra host is not added to a DNS server, only to the hosts file in the container; so you can use curl, wget or ping, but not nslookup (in case you have an nslookup test before connecting).
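If the database on the Linux host is the culprit, the usual fix is to make Postgres listen beyond 127.0.0.1 and to allow connections from the Docker bridge subnet. A sketch; the paths and the subnet are assumptions for a default setup and need to be adjusted:

# /etc/postgresql/<version>/main/postgresql.conf
listen_addresses = '*'                      # or 'localhost,172.17.0.1' to stay narrower

# /etc/postgresql/<version>/main/pg_hba.conf
host    all    all    172.17.0.0/16    md5  # allow connections from the default bridge subnet

Restart Postgres afterwards for the change to take effect.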


Thanks Sir. This information really helped me.