Caddy with Nuxt and internal API

[diagram of the intended setup]

I have deployed this setup with the goal of exposing the Nuxt build through the Caddy reverse proxy, and NOT exposing the API.

Both nuxt and the api are on a bridge network.

compose.yaml

networks:
  prod:
    name: prod
    driver: bridge
services:
  nuxt:
    container_name: nuxt
    ports: 
      - "3000:3000"
    build:
      dockerfile: Dockerfile.prod
      context: ../nuxt
    depends_on:
      - api
    networks:
      - prod
    environment:
      - FASTAPI_BASE_URL=api:8000
  api:
    container_name: fastapi
    build:
      context: ../fastapi
    networks:
      - prod
    ports:
      - "8000:8000"

api’s Dockerfile

FROM python:3.10

WORKDIR /app

COPY requirements.txt .

RUN pip install -r requirements.txt

COPY . /app

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

nuxt’s Dockerfile.prod

### STAGE 1: Build ###
FROM node:latest AS build
COPY . /app
WORKDIR /app
RUN npm install
RUN npm run generate

### STAGE 2: ENV ###
FROM build AS base
WORKDIR /app
COPY --from=build /app/.output/public /app
COPY .env /app/.env
EXPOSE 3000
CMD ["npx", "serve", "./"]

Caddyfile

{
        email info@example.com
}

app.example.com {
        reverse_proxy localhost:3000
}

The Nuxt build works, but the API calls it makes go to app.example.com.

I’m sure there are best practices for this somewhere, but I haven’t been able to find any.

Thank you for reading and here’s to hoping you know what’s missing!

I haven’t used Caddy yet, so can you explain why you think it should work? What do the configuration parameters do? All I see is a reference to localhost, and localhost is different in each container. And I see no reference to the API at all. So what is your expectation, and what is it based on?

The question boils down to:

Is it possible, on the same host but with two different containers, to have this setup:

  • One container is available externally (nuxt)
  • One container isn’t (API)
  • Yet both are accessible to each other

I assume there is a way, but there might not be since I didn’t find anything conclusive online.

And if there’s a way, there must be an optimal way to do that setup.

I might be wrong; that’s what I’m trying to confirm here :blush:

On user-defined networks, like those created by Docker Compose, DNS resolution works between containers, and you can use the Compose service name or the container name as the hostname.
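
For example, with the compose.yaml above, either of these values would resolve from the nuxt container (whether the app also needs an http:// scheme in front depends on how it builds the request URL):

    environment:
      # Both names resolve on the shared "prod" bridge network:
      - FASTAPI_BASE_URL=api:8000        # Compose service name
      # - FASTAPI_BASE_URL=fastapi:8000  # container_name also resolves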

Hello, I know this answer is late, but I believe that to keep your ports blocked externally you should express them differently in your Docker Compose file – instead of publishing the port to external network traffic with 8000:8000, you can allow access to the port only on the private Docker network by writing simply 8000.

I’m running Caddy + Nuxt + API (Rails) and I don’t allow Rails any external network access, since the only container that needs to reach Rails is my Caddy proxy container. However, from your diagram it looks like you may not be running Caddy as a Docker container like I am? In my setup the only ports I leave open externally are on the Caddy container, per their documentation.

Something similar to:

  caddy:
    image: caddy:2.9.1
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"

  nuxt:
    ports: 
      - 3000

  api:
    ports: 
      - 8000
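
For Caddy to resolve api and nuxt by name, the caddy service also has to join the same user-defined network as the other two and have the Caddyfile mounted in. A minimal sketch of those extra lines, assuming the prod network from the original compose.yaml and a Caddyfile sitting next to the compose file:

  caddy:
    # ... image and ports as in the snippet above, plus:
    networks:
      - prod
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro  # default config path in the official image
      - caddy_data:/data                     # persists TLS certificates across restarts

volumes:
  caddy_data: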

Below is my Caddyfile, if it helps:

{$DOMAIN:localhost} {
	handle /api/* {
		reverse_proxy api:8000
	}

	handle {
		reverse_proxy nuxt:3000
	}

	log {
		format console # default is json
		output stdout  # send to Docker logs
	}
}
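
With this layout the browser only ever talks to Caddy on ports 80/443: requests to /api/* are proxied over the internal Docker network to the API container, everything else goes to the Nuxt container, and neither of those two needs a port published on the host. The front end then calls the API with relative /api/... paths so the requests hit the same domain Caddy serves. One detail to be aware of: handle keeps the /api prefix when proxying (handle_path would strip it), so with this Caddyfile the API routes are expected to live under /api.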