Networking to set up an OAuth flow in a localhost-based dev environment

I have what I think is a pretty typical homelab setup. Here’s an abridged version:

.
├── authentik
│   ├── docker-compose.yml
│   └── .env
├── caddy
│   ├── docker-compose.yml
│   └── .env
└── immich
    ├── docker-compose.yml
    └── .env

I have an external Docker network, reverse_proxy, that Caddy, Immich, and Authentik are all on.

In “production” I have an actual domain name, which I think will make things easier, but I’m trying to figure out the best way to set things up in a localhost dev environment. Here’s the Caddyfile:

{$SCHEME:"http://"}{$DOMAIN:localhost}, {$SCHEME:"http://"}*.{$DOMAIN:localhost} {

    @root host {$DOMAIN:localhost}
    handle @root {
        respond "Hello, world!" 200
    }

    @authentik host authentik.{$DOMAIN:localhost}
    handle @authentik {
        reverse_proxy authentik-server:9000
    }

    @immich host immich.{$DOMAIN:localhost}
    handle @immich {
        reverse_proxy immich_server:2283
    }

    handle {
        respond "Unknown subdomain" 404
    }
}

Normally this works fine: I can reach a service in the browser at <service>.localhost, and services can talk to each other by container name (e.g. http://immich_server:2283) since everything is on the same network.

But OAuth complicates things. If I try to set the issuer URL to http://authentik-server:9000/application/o/immich/ in Immich, my browser doesn’t know how to reach authentik-server:9000.

If I set it to http://authentik.localhost/application/o/immich/, Immich doesn’t know how to reach authentik.localhost.

What’s the best way to approach this? I think one way would be to put Immich on the host network so that it’d know how to reach authentik.localhost, but I’d like to keep things as similar to the production environment as possible.

Just some quick ideas without getting into details.

You can add domain aliases to containers on user-defined Docker networks (including those created by Compose), so you can reach containers using whatever domain names you prefer.

https://docs.docker.com/reference/compose-file/services/#aliases
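
For example, if you give the Caddy container aliases for the subdomains on the shared network, other containers will resolve those names to Caddy, while the browser already resolves *.localhost to 127.0.0.1, so the same issuer URL works in both places. A rough sketch (the image tag, ports, and exact names are assumptions, not your actual config):

# caddy/docker-compose.yml
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"                     # published so the browser can reach *.localhost
    networks:
      reverse_proxy:
        aliases:
          - authentik.localhost     # containers on this network now resolve these names to Caddy
          - immich.localhost

networks:
  reverse_proxy:
    external: true

With something like that in place, http://authentik.localhost/application/o/immich/ should resolve both from the browser and from inside the Immich container.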

Or you can add the service name to your hosts file and publish the container’s port on the host under the same port number it uses internally.
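
A sketch of that, assuming Authentik’s compose file looks roughly like this (service name and port guessed from your Caddyfile): publish 9000 on the host and map the service name to 127.0.0.1 in /etc/hosts, so http://authentik-server:9000 works identically from the browser and from inside Immich.

# authentik/docker-compose.yml
services:
  authentik-server:
    ports:
      - "9000:9000"                 # same port on the host as inside the container
    networks:
      - reverse_proxy

networks:
  reverse_proxy:
    external: true

# and on the host, add to /etc/hosts:
#   127.0.0.1   authentik-server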

A third option is a public wildcard DNS service like https://nip.io/: you embed your host machine’s IP in the domain name, and the public DNS server resolves it back to that IP. All it really gives you is a domain name that resolves everywhere, which you can then set in your Caddy config.
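
For example (the IP below is only a placeholder for your host’s LAN address), you could point the DOMAIN variable your Caddyfile already reads at a nip.io name, e.g. via the compose environment block:

# caddy/docker-compose.yml
services:
  caddy:
    environment:
      DOMAIN: 192.168.1.10.nip.io   # nip.io resolves *.192.168.1.10.nip.io to 192.168.1.10
    ports:
      - "80:80"

Then authentik.192.168.1.10.nip.io reaches your host from the browser and from inside containers alike, as long as Caddy’s port is published.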

You can also assign static IPs to containers: https://docs.docker.com/reference/compose-file/services/#ipv4_address-ipv6_address

As long as you are using Docker CE on Linux (not Docker Desktop), the host can reach container IP addresses the same way other containers can, so you can point a domain at that static IP either in your hosts file or via the DNS service mentioned above.
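
A sketch of those last two points combined (the subnet and address are made up; an external network only allows a fixed address if it was created with a matching subnet, e.g. docker network create --subnet 172.30.0.0/16 reverse_proxy):

# authentik/docker-compose.yml
services:
  authentik-server:
    networks:
      reverse_proxy:
        ipv4_address: 172.30.0.10   # fixed address on the shared network

networks:
  reverse_proxy:
    external: true

# on Docker CE on Linux the host can reach that address directly, so adding to /etc/hosts:
#   172.30.0.10   authentik-server
# makes http://authentik-server:9000 work from the browser as well as from other containers.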