I thought extra_hosts: - "host.docker.internal:host-gateway" was the trick. Sadly, it does not work.
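For reference, this is roughly how the entry sits in my compose file (the service name and image are just illustrative here):

```yaml
services:
  app:
    image: node:18-alpine   # placeholder image
    extra_hosts:
      # Should map host.docker.internal to the host-side gateway IP
      - "host.docker.internal:host-gateway"
```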
Apart from my connection problem, I would also like to receive further suggestions for improving my configuration file.
Is it possible to seed a MongoDB database directly with data? So far I have managed it with a separate container.
For Docker running on Windows I have used host.docker.internal directly (without the extra_hosts entry in docker-compose.yml) to connect from inside a container to services running on the Windows host.
For Docker running on Linux I have used 172.17.0.1, which is the Docker host in Docker's default network. Maybe not the best way, but it works.
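If you don't want to hardcode that IP, you can look it up on the host first; a quick sketch (the template assumes the default bridge network):

```bash
# Show the host-side IP of Docker's default bridge (docker0)
ip addr show docker0

# Or ask Docker for the bridge network's gateway directly
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'
```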
For the other problem (seeding a MongoDB with data) I have done a backup/restore from one mongo server to another, using this command to create a dump from Server A into the directory ./dump/<sourcedatabase>.
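Roughly like this; the host names and the database name are placeholders, not my exact values:

```bash
# Against Server A: dump <sourcedatabase>; mongodump creates ./dump/<sourcedatabase>
mongodump --host server-a:27017 --db <sourcedatabase> --out ./dump

# Against Server B: restore the dump into the target database
mongorestore --host server-b:27017 --db <sourcedatabase> ./dump/<sourcedatabase>
```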
I don't know why I can't connect. I have looked into it for multiple hours.
Could it be a routing/firewall problem on the host?
I am using Qubes OS, which can be an exotic animal.
If you need more information/output I will post it here.
You don't need the extra_hosts entry within your docker-compose.yml's app section, because you want to connect to a container defined within the same docker-compose.yml.
To connect to the mongo container from the app container, use the hostname mongo.
To connect to your host's localhost from within a container, use 172.17.0.1 (as you are running on Linux). It would be host.docker.internal if you were running Docker on Windows.
It is neither a requirement nor wrong to use extra_hosts to inject this name resolution into the container's /etc/hosts. Or you just use the host's hostname or IP.
If this is not working, replace 172.17.0.1 (the IP of docker0) with 172.18.0.1 (the IP of docker_gwbridge).
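A minimal sketch of both cases; image names, ports and variable names are just examples:

```yaml
services:
  app:
    image: node:18-alpine        # example image
    environment:
      # Container-to-container: the service name works as the hostname
      MONGO_URL: mongodb://mongo:27017/mydb
      # Container-to-host on Linux: the docker0 gateway IP
      HOST_SERVICE_URL: http://172.17.0.1:3000
    depends_on:
      - mongo
  mongo:
    image: mongo:6
```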
Your .env file addresses the host by the hostname you inject into the container using extra_hosts. Like I wrote, if the IP of docker0 doesn't work, try the IP of docker_gwbridge. Judging by the /etc/hosts file of your node container, you should use the IP of the docker_gwbridge.
Pathetic lack of clarity! So is an IP required or not? Can you use internal Docker names or not? It seems not, but if not, then why? Why should you not be able to inject hostnames of other Docker containers into /etc/hosts? If a container is called "app", then other containers can "ping app" via Docker networking. So why can I not inject "app" into /etc/hosts?
When a question is not clear, there is a good chance the answers will not be clear either. It always depends on the exact case.
The title of this topic is "How to reach localhost on host from docker container".
Since there wasn't any example of how the OP started the app on the host, based on the docker-compose.yml and on how the OP tried to use the extra hosts, I suppose everyone thought the Node.js app was listening on every available IP address on the host. Later @scottbarnes2 shared a link to Stack Overflow where the example shows that the Node.js app was actually listening on the host's localhost (127.0.0.1).
That means the container will never be able to reach the Node.js app unless the container is running on the host's network using network_mode: host.
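A minimal sketch of the network_mode: host variant (the service and image are illustrative):

```yaml
services:
  app:
    image: node:18-alpine   # example image
    # Shares the host's network namespace, so the host's 127.0.0.1 is also
    # the container's 127.0.0.1. Note: ports: mappings are ignored here.
    network_mode: host
```

The alternative is to make the Node.js app on the host listen on 0.0.0.0 instead of 127.0.0.1, so it is also reachable on the Docker network's gateway IP.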
If you are referring to the internal DNS resolution, the answer is yes. You can use that between containers running on the same custom Docker network.
Why do you think you shouldn't? You can add extra hosts, but that doesn't help you reach a port on a network which is not available from the container.
Since the OP could have a different problem than yours, can you explain what your issue is exactly? Please add details to help us understand what you want to achieve, how you tried to solve it and what the error messages are.
Apologies for any impatience. I'll stop trying to convert the OP's issue into my issue, and just explain my exact issue:
I want an FQDN to resolve using /etc/hosts plus a self-signed SSL cert. An app in the container needs to access the FQDN, but this FQDN needs to be simulated in a dev environment. So on the host I have the FQDN in /etc/hosts, and I have a self-signed cert correctly set up with Docker for each FQDN in question. The FQDNs are bound to the Docker containers running on the host. Of course each container can ping the others using their service names, but how can I allow each container to connect to another using its FQDN?
So the idea was to add an /etc/hosts entry per container that resolves each FQDN to the correct IP of the respective container. However, those containers are set up programmatically, so I'd rather not statically reference each IP in the extra_hosts: entry. How can I do this dynamically (where each FQDN = a service name)?
If you add a host to your hosts file on the host machine, containers will not be able to use that unless they use the host network. Since you wrote that you would not use the extra_hosts parameter, and using the host network would be even more problematic, I would say try to configure a local DNS server and set that DNS server on the Docker daemon: dockerd | Docker Documentation
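A sketch of that daemon setting, assuming a local DNS server (dnsmasq, for example) listening on the docker0 gateway IP; edit /etc/docker/daemon.json and restart the Docker daemon afterwards:

```json
{
  "dns": ["172.17.0.1"]
}
```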
You can search for something that dynamically handles your domain names. I found this project, but I haven't tried it, so I have no idea if it works or not.
With those aliases (see the sketch below), the "nginx" container will be reachable from the "api" container on http://api.localhost.
No need to update /etc/hosts inside the "api" container.
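A minimal sketch of such an alias setup; the image names and the network name are assumptions:

```yaml
services:
  nginx:
    image: nginx:alpine      # terminates TLS for the simulated FQDN
    networks:
      devnet:
        aliases:
          - api.localhost    # extra DNS name for this container on devnet
  api:
    image: node:18-alpine    # example image
    networks:
      - devnet

networks:
  devnet:
```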
The extra_hosts entry with host-gateway does work under Linux, if you don't want to use network mode host in Docker.
However, make sure you set live-restore: false (it should be false by default, but maybe you changed the /etc/docker/daemon.json file), otherwise host.docker.internal doesn't seem to resolve anymore under Linux…
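For reference, the relevant line in /etc/docker/daemon.json (restart the daemon after changing it):

```json
{
  "live-restore": false
}
```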
It works fine on the MacBook, with the env file for my project pointing to host.docker.internal to reach the Postgres database running on the local host. But it does not work on the Linux server. Any clue on how to change this?
The Mac is running:
docker version
Client:
 Cloud integration: v1.0.22
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:46:56 2021
 OS/Arch:           darwin/arm64
 Context:           default
 Experimental:      true

Server: Docker Desktop 4.5.0 (74594)
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:07 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
openSUSE 15.4 is running:
docker version
Client:
 Version:           20.10.17-ce
 API version:       1.41
 Go version:        go1.17.13
 Git commit:        a89b84221c85
 Built:             Wed Jun 29 12:00:00 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.17-ce
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.13
  Git commit:       a89b84221c85
  Built:            Wed Jun 29 12:00:00 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.6.12
  GitCommit:        a05d175400b1145e5e6a735a6710579d181e7fb0
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-ga916309fff0f
 docker-init:
  Version:          0.1.7_catatonit
  GitCommit:
First of all, you don't need the extra hosts on Mac, since host.docker.internal is built into Docker Desktop.
And second, host-gateway will not be your original host IP, but the IP of the gateway on the Docker network that your container uses. Your host is the gateway, so you can access services listening on that IP. This means that if the database is listening only on the host IP, and not on every IP address or at least the gateway IP of the Docker network, it will not work. nslookup will also not work, as the extra host is not added to a DNS server, only to the hosts file in the container; so you can use curl, wget or ping, but not nslookup (in case you have an nslookup test before connecting).
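A quick way to check this on the Linux host; 5432 is assumed as the Postgres port:

```bash
# Which addresses is the Postgres port bound to?
ss -lntp | grep 5432
# 127.0.0.1:5432 -> only reachable from the host itself
# 0.0.0.0:5432   -> also reachable from containers via the gateway IP
```

If it is bound to 127.0.0.1 only, setting listen_addresses = '*' in postgresql.conf (and allowing the Docker subnet in pg_hba.conf) is the usual fix.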