Resolving public addresses on a custom bridge network

I am running two containers on the same custom bridge network (a traefik instance and a FileMaker server), hosted on an AWS EC2 instance running Ubuntu 20.04.

My FileMaker container cannot resolve public addresses.

Looking at the container's /etc/resolv.conf, it has a loopback address for the DNS server, which I think is expected, to allow the container to find the traefik container:

options edns0 trust-ad ndots:0

However, wherever that DNS server points, it clearly doesn't resolve external addresses: my FileMaker container is unable to run apt update etc. unless I manually change the nameserver to a public DNS server. Of course, then it can no longer reach the traefik container.

My question is, what is the best way to allow my FileMaker container to resolve both local containers and public addresses?

The external address resolution is necessary for an internal service in the container to check an external licensing server in order to run properly.

I'm sorry if I'm asking stupid questions — I'm just getting to grips with Docker.

Docker's embedded resolver will forward to the upstream resolver(s) configured in the host's /etc/resolv.conf file. In AWS, that is usually the second IP in your VPC CIDR range (the VPC base address plus two).
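As a quick illustration of that "base address plus two" rule (the function name and the example CIDRs are mine, and the naive octet arithmetic assumes the base address doesn't carry over an octet boundary):

```shell
# Sketch: derive the Amazon-provided DNS address (VPC base address + 2)
# from a VPC CIDR written as a dotted-quad base plus prefix length.
vpc_dns() {
  base=${1%/*}                      # strip the /prefix length
  o1=$(echo "$base" | cut -d. -f1)
  o2=$(echo "$base" | cut -d. -f2)
  o3=$(echo "$base" | cut -d. -f3)
  o4=$(echo "$base" | cut -d. -f4)
  echo "$o1.$o2.$o3.$((o4 + 2))"    # base + 2 is the VPC resolver
}

vpc_dns 10.0.0.0/16     # -> 10.0.0.2
vpc_dns 172.31.0.0/16   # -> 172.31.0.2
```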

It is normal that /etc/resolv.conf inside a container attached to a Docker user-defined network (as opposed to the default bridge network) uses the Docker embedded resolver for its service discovery.

Thanks very much for your help!
Something odd is going on…
My AWS VPC is on the CIDR range, and my EC2 instance has an IP of

However, my instance (just an out-of-the-box Ubuntu image) has the /etc/resolv.conf of:

# This file is managed by man:systemd-resolved(8). Do not edit.
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

options edns0 trust-ad
search eu-west-2.compute.internal

…and my container is getting the /etc/resolv.conf of:

search eu-west-2.compute.internal
options edns0 trust-ad ndots:0

If I manually edit the file to use a public DNS server, my container can access the outside world.

Not sure what’s going on!



What you see is the IP of the systemd DNS stub resolver (127.0.0.53). I always deactivate it on Ubuntu nodes that run Docker.
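To make the distinction concrete, here is a small sketch (function name and scratch file path are mine) that classifies the nameserver entries in a resolv.conf-style file as loopback stubs, such as systemd-resolved's 127.0.0.53 or Docker's embedded 127.0.0.11, versus real external resolvers:

```shell
# Sketch: classify nameservers in a resolv.conf-style file.
check_resolvers() {
  awk '/^nameserver/ { print $2 }' "$1" | while read -r ns; do
    case "$ns" in
      127.*) echo "$ns: loopback stub (e.g. systemd-resolved or Docker's embedded DNS)" ;;
      *)     echo "$ns: external resolver" ;;
    esac
  done
}

# Demonstration on a scratch file rather than the real /etc/resolv.conf:
printf 'nameserver 127.0.0.53\nnameserver 8.8.8.8\n' > /tmp/resolv.example
check_resolvers /tmp/resolv.example
```

A loopback entry only works if something is actually listening on that address inside the same network namespace — which is exactly why 127.0.0.53 works on the host but not from inside a container.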

You can disable the stub resolver by creating a file in /etc/systemd/resolved.conf.d/, e.g. /etc/systemd/resolved.conf.d/resolved.conf, with the following content:

[Resolve]
DNSStubListener=no

Then restart the service: systemctl restart systemd-resolved.service

To make sure /etc/resolv.conf points to the correct target file, you need to check the target of the /etc/resolv.conf symlink: ls -l /etc/resolv.conf

  • if it points to /run/systemd/resolve/resolv.conf the symlink is correct
  • if it points to /run/systemd/resolve/stub-resolv.conf then the symlink is wrong and needs to be unlinked/deleted with sudo unlink /etc/resolv.conf and re-created with sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
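If you want, the check-and-relink step can be wrapped in a small idempotent helper. This is only a sketch (the function name is mine), and the demonstration below deliberately runs against a temp directory; for real use the arguments would be /etc/resolv.conf and /run/systemd/resolve/resolv.conf, run with sudo:

```shell
# Sketch: point a resolv.conf symlink at the full resolver file instead
# of the stub. Parameters let the logic be tried on a scratch directory.
fix_resolv_link() {
  link=$1 want=$2
  if [ "$(readlink "$link" 2>/dev/null)" = "$want" ]; then
    echo "symlink already correct"
  else
    ln -sfn "$want" "$link"    # replace (or create) the symlink
    echo "symlink re-pointed to $want"
  fi
}

# Demonstration on a temp dir:
d=$(mktemp -d)
ln -s "$d/stub-resolv.conf" "$d/resolv.conf"   # simulate the wrong target
fix_resolv_link "$d/resolv.conf" "$d/full-resolv.conf"
readlink "$d/resolv.conf"
```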

Thanks — that makes sense (after a bit of reading-up on unix/linux DNS resolvers!)