So I had to disable dnsmasq, systemd-resolved.service, and resolvconf.service, and make my network use my host IP as its DNS resolver.
With that configuration I'm having issues with Docker containers:
At first I could not pull images from Docker Hub (I was getting Error response from daemon: Get "https://ghcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) and Error response from daemon: Get "https://ghcr.io/v2/": context deadline exceeded errors).
I fixed that by specifying 18.104.22.168 as dns-opts in /etc/docker/daemon.json!
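For reference, the daemon.json key that takes resolver IPs is dns (dns-opts carries resolver options such as ndots); a minimal sketch, using a placeholder LAN IP in place of your host's actual address:

```json
{
  "dns": ["192.168.1.10"]
}
```

After editing the file, the Docker daemon needs a restart (e.g. systemctl restart docker) for the change to take effect.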
Now another issue is that all containers (those not using network_mode: host) have no internet access. I noticed that they use 127.0.0.11 as their DNS resolver.
The only way I've found to fix that is setting dns: 22.214.171.124 (in Docker Compose) for each Docker stack.
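For context, that per-stack workaround looks like this in a Compose file (a sketch; the service name and IP below are placeholders for your setup):

```yaml
services:
  app:                  # hypothetical service name
    image: alpine:3.20
    dns:
      - 192.168.1.10    # placeholder: host LAN IP where dnscrypt-proxy listens
```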
But that's something I don't want, and besides, none of my containers is using my dnscrypt-proxy to resolve addresses, so that's not OK for me!
What options do I have to fix this and let all containers reach the internet through my dnscrypt-proxy?
So you disabled the DNS resolver on your node and replaced it with a Docker container. From my point of view, your system should continue to work as before.
Did you configure the upstream DNS resolver for your resolver, though? Otherwise the DNS lookup chain cannot work. (A resolver works like this: "If I know the hostname, I return the IP. If not, I ask the next one.")
Indeed, the entire host system is connected to the internet just fine, but the Docker containers are not!
I have multiple Docker Compose stacks and all the ones using the automatically created compose network (and not using network_mode: host) have no Internet connection!
What do you mean by "configure the upstream DNS resolver for your resolver"? dnscrypt-proxy is configured with listen_addresses = [":53"] to listen on all interfaces, and the related Docker Compose file is in the first post!
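For completeness: if dnscrypt-proxy runs as a Docker container, port 53 must be published on both UDP and TCP for the host (and other containers) to reach it. A sketch, where the image name is an assumption in place of whatever your stack actually uses:

```yaml
services:
  dnscrypt-proxy:
    image: klutchell/dnscrypt-proxy   # assumed image name; substitute your own
    ports:
      - "53:53/udp"   # DNS queries are UDP by default
      - "53:53/tcp"   # large responses and zone transfers fall back to TCP
    restart: unless-stopped
```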
As I tried to explain, a DNS resolver can only resolve its own known domains. If the domain is not known, it will forward the DNS request to an upstream DNS server (like 126.96.36.199). You probably need to configure that somewhere.
Using 127.0.0.11 in containers that are connected to a custom Docker network is by design. That allows you to use container names and Compose service names as domain names. A request sent to this IP address is forwarded to the DNS server configured on the host, or to the one you set in the Compose file, which is what you don't want to do.
The point is that, since the DNS request is forwarded to the IP address configured on the host, everything should work unless something is wrong with the DNS configuration or the containers can't reach the DNS server's IP address on the host.
How did you configure your host to use the new DNS server?
For example, if you configured it to use 127.0.0.1, that works from your host but not from containers, unless the containers use the host network. That could explain everything.
You could try nicolaka/netshoot to run nslookup, ping, or whatever tool you like to find out what the problem is. You mentioned multiple times that you don't have internet access, but my guess is that internet access is fine and only the DNS requests don't work.
Yes, you're right, DNS requests don't work. My host is configured to use the local IP of the machine (not 127.0.0.1) where dnscrypt-proxy is running, which is the host itself.
To troubleshoot the issue I used curl and wget from the containers, and I get a could not resolve ... error, so yes, it's a DNS issue. The thing is that I don't understand why!
Some more details about my host configuration that could hopefully be useful:
/etc/resolv.conf is always empty
I disabled dnsmasq
I disabled systemd-resolved
I disabled resolvconf
The /etc/NetworkManager/NetworkManager.conf file has the dns=... option commented out!
And have you tried using nslookup from a container, as I suggested? That would reveal which DNS server the containers are trying to use. You could try nslookup with and without a parameter; the parameter would be the address of the DNS server. If it works with the parameter, that means the default address detected by Docker is wrong.
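For example, from a throwaway netshoot container (192.168.1.10 below is a placeholder for the host IP where dnscrypt-proxy listens):

```shell
# Without a server argument: uses the resolver Docker injected (127.0.0.11).
docker run --rm nicolaka/netshoot nslookup google.com

# With a server argument: queries the host's dnscrypt-proxy directly,
# bypassing Docker's embedded DNS.
docker run --rm nicolaka/netshoot nslookup google.com 192.168.1.10
```

If the first command times out but the second works, Docker's embedded DNS is forwarding to the wrong upstream; if both fail, the containers cannot reach the host's resolver at all.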
The difference I see is that when you access the DNS server from a container, the DNS container sees that the request comes from a Docker network, but when you use the same IP on the host, the DNS container sees that you are coming from a machine with the host's IP address. When you try this on the host:
nslookup google.com localhost
the DNS container should see the same thing as when you tried to access it from another container (you are coming from a Docker network).
And now I realize how badly I wrote my earlier comment when I asked you to use nslookup:
I meant with one or two parameters.
Please try nslookup google.com localhost, and also the following containers to test your local firewall (in case there is one you don't know of):
docker network create network_test
docker run -d --name network_test_httpd --network network_test -p 80:80 httpd:2.4
docker run --rm -it --network network_test nicolaka/netshoot curl 192.168.1.42
docker logs network_test_httpd
Change the port on the left side if port 80 is not available, but then you need to add the port number to the curl command.
You should see something like this at the end of the logs:
If you get a timeout again, then there must be a firewall on the host. If the command works and the log entry appears showing the container gateway IP, then it's dnscrypt-proxy that doesn't accept the request somehow.
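If it does point to a host firewall, you can check whether DNS traffic from the Docker bridge subnets is being blocked. A sketch assuming iptables and/or ufw (adjust to whatever firewall your distribution actually runs; the subnet is Docker's default private range):

```shell
# Look for DROP/REJECT rules that could hit port 53 traffic from containers.
sudo iptables -L INPUT -n -v | grep -E 'dpt:53|DROP|REJECT'

# If ufw is in use, inspect its state and, if needed, allow DNS from
# the Docker address space (172.16.0.0/12 covers the default bridge subnets).
sudo ufw status verbose
sudo ufw allow from 172.16.0.0/12 to any port 53
```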