Docker Community Forums

How to reach localhost on host from docker container?

Please help a docker newbie

Situation: I run a Node.js app that uses the monero-javascript library to connect to a monero-wallet-rpc running on localhost on my host OS.

Problem: I cannot connect!

My system:

  • up to date Debian 10
  • Docker version 20.10.7, build f0df350
  • docker-compose version 1.29.2, build 5becea4c

My docker-compose file:

version: "3.8"
services:
  app:
    build: .
    env_file:
      - .env
    command: ["npm", "start"]
    restart: always
    ports:
      - 80:3000
    external_links:
      - mongo
    extra_hosts:
      - "host.docker.internal:host-gateway"

  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongodb/data:/data/db
    env_file:
      - .env

  mongo-seed:
    build: ./mongo-seed/.
    links:
      - mongo

volumes:
  db-data:
    name: mongodb

I thought extra_hosts: - "host.docker.internal:host-gateway" would do the trick. Sadly, it does not work.

Apart from my connection problem, I would also like to hear further suggestions for improving my configuration file.
Is it possible to seed a MongoDB database directly with data? So far I have done the trick with a separate container.

For Docker running on Windows I have used host.docker.internal directly (without the extra_hosts entry in the docker-compose.yml) to connect to services running on the Windows host from inside a Docker container.

For Docker running on Linux I have used 172.17.0.1, which is the Docker host's IP on Docker's default bridge network. Maybe not the best way, but it works :slight_smile:
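Rather than hardcoding 172.17.0.1, the bridge address can be read from the docker0 interface on the Linux host; a sketch, assuming the default bridge setup:

```shell
# On the Linux host: print the IPv4 address of Docker's default bridge,
# which is the address containers on that bridge can use to reach the host.
ip -4 addr show docker0 | grep -oP '(?<=inet )\d+(\.\d+){3}'
```

On a default installation this typically prints 172.17.0.1.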



For the other problem (seeding a MongoDB with data): I have done a backup/restore from one mongo server to another, using this command to create a dump on Server A into the directory ./dump/<sourcedatabase>:

mongodump --host "<sourcehost>" --username "<sourceuser>" --password "<sourcepassword>" --db "<sourcedatabase>"

and this to restore it to Server B:

mongorestore --host "<destinationhost>" --db "<destinationdatabase>" --username "<destinationuser>" --password "<destinationpassword>" dump/<sourcedatabase>/

Maybe this can be used as a blueprint?
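Another option, since the official mongo image runs any *.js or *.sh files found in /docker-entrypoint-initdb.d once, on the first start with an empty data directory: the seed script can be mounted straight into the mongo service (the ./mongo-seed/init.js path below is an example):

```yaml
  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongodb/data:/data/db
      # Executed once when the container starts with an empty /data/db:
      - ./mongo-seed/init.js:/docker-entrypoint-initdb.d/init.js:ro
    env_file:
      - .env
```

With this in place, the separate mongo-seed container becomes unnecessary.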

Thanks. However, I am not able to fix it yet.
What do you mean by "without extra_hosts:"?
This way produces a syntax error:

version: "3.8"
services:
  app:
    build: .
    env_file:
      - .env
    command: ["npm", "start"]
    restart: always
    ports:
      - 80:3000
    external_links:
      - mongo
    "host.docker.internal:host-gateway"

From inside my container:

root@45d96a2fc103:/usr/src/app# cat /etc/hosts 
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.1      host.docker.internal
172.18.0.2      45d96a2fc103
root@45d96a2fc103:/usr/src/app# getent hosts host.docker.internal
172.17.0.1      host.docker.internal

My .env file for my node app:

MONERO_WALLET_RPC_URI = "host.docker.internal:38083"
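Name resolution clearly works (see the getent output above), so the next step I can try is to probe the port itself from inside the container; a sketch, assuming curl is available in the image and 38083 is the wallet-rpc port from my .env:

```shell
# From inside the app container: try the wallet RPC endpoint directly.
curl -sv --max-time 5 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"0","method":"get_version"}' \
  http://host.docker.internal:38083/json_rpc
# "Connection refused" => nothing is listening on that IP:port as seen
# from the container; a timeout instead suggests a firewall dropping packets.
```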

I don't know why I can't connect; I have looked into it for multiple hours.
Could it be a routing/firewall problem on the host?
I am using QubesOS, which can be an exotic animal.
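To check whether this is a binding or firewall problem on the host side, I can look at where the RPC process actually listens (38083 again taken from my .env):

```shell
# On the host: which address is the wallet RPC bound to?
ss -tlnp | grep 38083
# 127.0.0.1:38083 -> loopback only, unreachable from containers
# 0.0.0.0:38083   -> all interfaces, reachable via the bridge IP

# And whether iptables drops traffic coming in from the Docker bridge:
sudo iptables -L INPUT -n -v | grep -i -e 38083 -e docker
```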

If you need more information/output I will post it here.

To me it looks like the

extra_hosts:
      - "host.docker.internal:host-gateway"

works. However, the connection itself does not?

Good morning,

sorry for confusing you.

I meant it this way:
remove the lines

extra_hosts:
      - "host.docker.internal:host-gateway"

from your docker-compose.yml

replace the lines

external_links:
  - mongo

by

links:
  - mongo

within your docker-compose.yml's app section, because you want to connect to a container defined within the same docker-compose.yml.

To connect to the mongo container from the app container, use the hostname mongo.

To connect to your host's localhost from within a container, use 172.17.0.1 (as you are running on Linux). It would be host.docker.internal if you were running Docker on Windows.

    extra_hosts:
      - "host.docker.internal:host-gateway"

would work as:

    extra_hosts:
      - "host.docker.internal:172.17.0.1"

Using extra_hosts to let Docker inject this name resolution into the container's /etc/hosts is neither a requirement nor wrong. Alternatively, you can just use the host's hostname or IP.

If this is not working, replace 172.17.0.1 (the IP of docker0) with 172.18.0.1 (the IP of docker_gwbridge).
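To avoid guessing between the two addresses, both gateways can be listed with docker network inspect (the compose network name below is an example; compose usually names it <projectname>_default):

```shell
# Gateway of the default bridge network (docker0):
docker network inspect bridge \
  --format '{{ (index .IPAM.Config 0).Gateway }}'

# Find the network compose created for the project, then inspect it:
docker network ls
docker network inspect myproject_default \
  --format '{{ (index .IPAM.Config 0).Gateway }}'
```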

Do I need to adjust my .env file as well?

How can I check if docker is not the problem?

I created a Stack Overflow question with much more detail. This bug is killing me. Maybe you can have a look:

Your .env file addresses the host by the hostname you inject into the container using extra_hosts. Like I wrote, if the IP of docker0 doesn't work, try the IP of docker_gwbridge. Judging by the /etc/hosts file of your node container, you should use the IP of docker_gwbridge.

    extra_hosts:
      - "host.docker.internal:172.18.0.1"

It does not work!
Please have a look at the Stack Overflow question.

My error:

MoneroRpcError: RequestError: Error: socket hang up

at this line:

await moneroWalletRPC.openWallet(
  process.env.MONERO_WALLET_USER,
  process.env.MONERO_WALLET_PASSWORD
);

Have you found any solution yet?

Pathetic lack of clarity! So is an IP required or not? Can you use internal Docker names or not? It seems not, but if not, then why? Why should you not be able to inject the hostnames of other Docker containers into /etc/hosts? If a container is called "app", then other containers can "ping app" via Docker networking. So why can I not inject "app" into /etc/hosts?

When a question is not clear, there is a good chance the answers will not be clear either. It always depends on the exact case.

The title of this topic is “How to reach localhost on host from docker container”.
Since there wasn't any example of how the OP started the app on the host, then based on the docker-compose.yml and on how the OP tried to use the extra hosts, I suppose everyone thought the Node.js app was listening on every available IP address on the host. Later @scottbarnes2 shared a link to Stack Overflow where the example shows that the Node.js app was actually listening on the host's localhost (127.0.0.1).

It means the container will never be able to reach the Node.js app unless the container runs on the host's network using network_mode: host.
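A sketch of that variant, keeping the rest of the service as in the original compose file (Linux only; note that ports: and extra_hosts: have no effect in this mode):

```yaml
services:
  app:
    build: .
    env_file:
      - .env
    command: ["npm", "start"]
    restart: always
    network_mode: host
    # No ports: mapping and no extra_hosts: needed here: the container
    # shares the host's network stack, so the host's 127.0.0.1:38083 is
    # reachable directly, and the app itself listens on host port 3000.
```

The .env would then point at 127.0.0.1:38083 instead of host.docker.internal.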

See Compose file version 3 reference | Docker Documentation

If you are referring to the internal DNS resolution, the answer is yes. You can use that between containers running on the same custom Docker network.

Why do you think you shouldn't? You can add extra hosts, but that doesn't help you reach a port on a network that is not available from the container.

Since the OP could have a different problem than yours, can you explain what your issue is exactly? Please add details to help us understand what you want to achieve, how you tried to solve it, and what the error messages are.

Apologies for any impatience. I'll stop trying to convert the OP's issue into my own, and just explain my exact issue:

I want an FQDN to resolve using /etc/hosts plus a self-signed SSL cert. An app in the container needs to access the FQDN, but this FQDN needs to be simulated in a dev environment. So on the host I have the FQDN in /etc/hosts, and I have a self-signed cert correctly set up with Docker for each FQDN in question. The FQDNs are bound to the Docker containers running on the host. Of course each container can ping the others using their service names, but how can I allow each container to connect to another using its FQDN?
So the idea was to add an /etc/hosts entry per container that maps each FQDN to the correct IP of the respective container. However, those containers are set up programmatically, so I'd rather not statically reference each IP in an "extra_hosts:" entry. How can I do this dynamically (where each FQDN = a service name)?
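One way to get this without hardcoding IPs: compose can attach additional DNS names to a service on a user-defined network via network aliases, so each FQDN resolves to the right container automatically (the service names and FQDNs below are made up for illustration):

```yaml
services:
  web:
    build: ./web
    networks:
      backend:
        aliases:
          - web.dev.example.com   # other containers on "backend" resolve
                                  # this FQDN to the web container's IP
  api:
    build: ./api
    networks:
      backend:
        aliases:
          - api.dev.example.com

networks:
  backend: {}
```

Docker's embedded DNS serves these aliases to every container on the same network, so no per-container /etc/hosts entries are needed.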

If you add a host to the hosts file on the host machine, containers will not be able to use it unless they use the host network. Since you wrote that you would not use the extra_hosts parameter, and using the host network would be even more problematic, I would say try to configure a local DNS server and set that DNS server on the Docker daemon. See dockerd | Docker Documentation

You can search for something that dynamically handles your domain names. I found this project, but I haven't tried it, so I have no idea whether it works or not.

If it doesn't, try searching for another one.