net::ERR_NAME_NOT_RESOLVED when calling dockerized backend and frontend from browser

I have the following goal: develop an app with a React frontend and a Java (Spring Boot) backend, and use AWS EC2 to put the app on the internet.

To do so, I am using the following path:

  1. I have my code on my machine: one folder containing a folder for the frontend, one for the backend, and one for nginx. Nginx will be used as a reverse proxy. The frontend, backend, and nginx containers are on the same Docker network.
  2. I build Docker images and push these images to Docker Hub.
  3. I access my EC2 instance via SSH and pull these images.
  4. I run the images on my EC2 instance.

Nevertheless, before getting that far, I am first trying to make it work locally.
My goal is to run Docker and have the frontend able to communicate with the backend.
Since the frontend, backend, and nginx are on the same Docker network, I used:

  • http://frontend:3000
  • http://backend:8080

for the communication. I tested it by entering each container and pinging the other one: it works.
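
For reference, the check looked roughly like this (a sketch; it assumes a ping binary exists inside the images):

docker exec -it frontend ping -c 3 backend   # name resolved via Docker's internal DNS
docker exec -it backend ping -c 3 frontend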

The next step is using the browser. To do so, I access my website via http://localhost:3000.
I set up the reverse proxy to redirect my request to http://frontend:3000.
But now nothing works.

When trying in Postman, http://localhost:8080/api/auth/signin works, but http://localhost:3000 makes calls to http://backend:8080/api/auth/signin and is, of course, unable to reach it.

I know that when requesting from outside Docker we need to use localhost, and that inside the Docker network we can use the service names.

My issue is: how should I configure my reverse proxy so that the browser can communicate with the services inside my Docker containers?

I could have chosen to put localhost everywhere, but then, when I push everything to my EC2 instance, I will have the same issue: DNS will try to resolve backend without being able to.

For now my nginx setup is the following:

user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    upstream coaching-app-fe {
        server frontend:3000;
    }

    upstream coaching-app-be {
        server backend:8080;
    }

    server {
        listen 80;
        server_name localhost;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name localhost;

        ssl_certificate /etc/nginx/fullchain1.pem;
        ssl_certificate_key /etc/nginx/privkey1.pem;

        # Add TLS configuration as needed (SSL protocols, ciphers, etc.)

        location / {
            proxy_pass http://coaching-app-fe;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Backend application
        location /api {
            proxy_pass http://coaching-app-be;  # Docker service name and port
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_buffering off;
        }

        error_page 500 502 503 504 /50x.html;

        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
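
With this config everything goes through the proxy on ports 80/443, so a quick sanity check from the host looks like this (a sketch; -k skips certificate validation for my self-signed local certificate, and the signin path is the one I test in Postman):

curl -k https://localhost/                         # should return the React index.html
curl -k -X POST https://localhost/api/auth/signin  # should be forwarded to the backend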

Further details:

Inside my Java code, I am managing CORS by having on each controller: @CrossOrigin(origins = "http://frontend:3000")

In my frontend ApiService.js file, I have set up the authentication header:

import axios from "axios"; // import needed for axios.create below

const API_BASE_URL = "http://backend:8080";

console.log("API Base URL:", API_BASE_URL);

const apiService = axios.create({
  baseURL: API_BASE_URL,
  timeout: 50000000000, // effectively disables the timeout; adjust as needed
});

// Set the authorization token in headers
export const setAuthToken = (token) => {
  if (token) {
    apiService.defaults.headers.common["Authorization"] = `Bearer ${token}`;
  } else {
    delete apiService.defaults.headers.common["Authorization"];
  }
};
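
For context, I use this client roughly like this (a simplified sketch, not my exact code; the payload fields are made up):

import apiService, { setAuthToken } from "./ApiService"; // assumes apiService is the default export

async function signIn() {
  const response = await apiService.post("/api/auth/signin", {
    username: "demo",
    password: "secret",
  });
  setAuthToken(response.data.token); // attach the JWT to all later requests
}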

Plus, in package.json, I have set: "proxy": "http://backend:8080"

So setting up environment variables is an option, but I would prefer keeping host names inside my Docker network and having the nginx reverse proxy do all the routing.

I have been working on this for three weeks now, so I am not asking without any research.
My main issue here is getting http://localhost:3000 re-routed toward http://frontend:3000 inside the container.

If I inspect my network I have:

[
    {
        "Name": "ec2-user_coaching_network",
        "Id": "1547076d276b7b9917680a0bda99a78da2561d8675f6f20569901b3d4bcc6f46",
        "Created": "2024-07-11T07:51:21.940056384Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "69294934cd4a537ae5333edca2d17164430497f8f0598b1aa9e8991e1df89362": {
                "Name": "backend",
                "EndpointID": "a822df96f9dab5f396a0280586f41ad15fa9f2bfe6c104f0c9164e8a9912c1a6",      
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "7cc16fbe69c413138335cc5f8fc400e9927c50a68fcbb5fb61ce7bc1309fd32d": {
                "Name": "coaching-portal-proxy",
                "EndpointID": "969e42a9b7d8df355390e478b28d3859649850405e18304076aff86150c1aead",      
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "a29d74c427464fcce0c1f7db31c901cd64e5573a56cc38d5d9d021db5717355e": {
                "Name": "frontend",
                "EndpointID": "566dcecebb3a70a19487aacb918d853a150ad74dfa28f7a67c2a0f09b56a6d7c",      
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Thank you very much for any help,

Alexia


Show how you ran the container and how you forwarded host ports to the container port. If you don’t forward the port, localhost will not work.

Thank you very much for your reply!

To run the container, I am using the following commands:

docker-compose down
docker system prune -f    #To be sure everything is erased
docker-compose up --build

My docker-compose file is the following:

services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
    container_name: "backend"
    ports:
      - "8080:8080"
    environment:
      - SPRING_PROFILES_ACTIVE=production
      - DB_URL=*****
      - DB_USERNAME=*****
      - DB_PASSWORD=*****
      - AWS_ACCESS_KEY_ID=*****
      - AWS_SECRET_ACCESS_KEY=*****
      - AWS_REGION=*****
    networks:
      - ec2-user_coaching_network

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.prod
    container_name: "frontend"
    depends_on:
      - backend
    ports:
      - "3000:3000"
    networks:
      - ec2-user_coaching_network

  proxy:
    build: ./nginx
    container_name: "coaching-portal-proxy"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - frontend
      - backend
    networks:
      - ec2-user_coaching_network

networks:
  ec2-user_coaching_network:
    external: true
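
Note that because the network is declared external, Compose expects it to exist before docker-compose up; if it does not, it has to be created once beforehand:

docker network create ec2-user_coaching_network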

When running this configuration, I can access:

My frontend Dockerfile is the following:

####################################################################
# Stage 1: Build the React app
####################################################################

# Use an official Node.js runtime as a parent image
FROM node:18 AS build

# Set the working directory in the container
WORKDIR /app

# Copy the package.json and package-lock.json files
COPY package*.json ./

# Install dependencies
RUN npm install --legacy-peer-deps

# Copy the rest of the application code
COPY . /app

# Build the React app with the backend URL environment variable
#ARG REACT_APP_API_URL
#ENV REACT_APP_API_URL=$REACT_APP_API_URL
RUN npm run build && ls -la /app/build

####################################################################
# Stage 2: Serve the React app with Nginx

# move builds to nginx and run the front-end
####################################################################

# Use an Nginx image to serve the built app
FROM nginx:alpine

# Copy the built app from the previous stage
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx/nginx.conf /etc/nginx/conf.d

# Expose port 3000
EXPOSE 3000

# Start Nginx server
CMD ["nginx", "-g", "daemon off;"]

My backend Dockerfile is the following:

# Use a Maven image to build the project
FROM maven:3.8.5-openjdk-17 AS build

# Set the working directory
WORKDIR /app

# Copy the project files to the container
COPY pom.xml /app
COPY src /app/src

# Build the project
RUN mvn clean package


# Use a JDK image to run the application
FROM openjdk:17-jdk-slim

# Set the working directory
WORKDIR /app

# Copy the built jar from the previous stage
COPY --from=build /app/target/*.jar /app/app.jar

# Expose port 8080
EXPOSE 8080

# Run the application
CMD ["java", "-jar", "/app/app.jar"]

I tried replacing each URL with localhost instead of the Docker host names “backend” and “frontend”, and everything worked fine. But I would like to have more sustainable code and not have to change any URL inside the containers when deploying to EC2.

Correct me if I’m wrong, because I read the questions quickly, but it seems you want a single URL to work from the container and from a JavaScript API request on the client side. That will work only if you use the IP address or domain name of your host machine and the host port. In some cases that kind of request, coming from the container to the host IP, can be blocked by local firewalls. In that case, if the host has a domain name resolved by a DNS server, you can use extra_hosts in the compose file to add that domain name as an alias to the backend container; that modifies the hosts file of the frontend container. Or you can use network aliases, as in the sketch below.
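
A minimal sketch of those two options in a compose file (the domain, IP, and network name are placeholders, not values from your setup):

services:
  backend:
    networks:
      coaching_network:
        aliases:
          - api.example.com                 # extra DNS name on the container network

  frontend:
    extra_hosts:
      - "api.example.com:203.0.113.10"      # adds a hosts-file entry (hostname:IP, no port)

networks:
  coaching_network: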

But having an “external url” configuration parameter is not uncommon, and not only with Docker containers: you could also have a reverse proxy which requires another URL and not what you can use internally, since directing the traffic through the external proxy could be slower or insecure.

Indeed, I want the reverse proxy to redirect all requests made on localhost:3000 to backend:8080 or frontend:3000, like in the picture.
So let’s say that, for now, my host machine is my computer; I would need to update my docker-compose to this:

backend:
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
    container_name: "backend"
    ports:
      - "8080:8080"
    extra_hosts:
      - "localhost:3000:77.173.66.201"

Have I understood correctly?

you could also have a reverse proxy which requires another URL

Is it a better “coding” way to do it?

 extra_hosts:
      - "localhost:3000:77.173.66.201"

was not working: the command docker-compose up --build failed, so I tried:

 extra_hosts:
      - "localhost:77.173.66.201"

this time the command docker-compose up --build worked, but my issue remains the same: localhost:3000 is calling backend:8080, so nginx is not forwarding correctly.

I guess I still don’t fully understand the issue. My original ideas came from the thought that the backend is not available from the outside, only on the container network. But if your frontend sends requests from the client side using JavaScript, you will need to use the same domain, and the frontend has to be configured to use “localhost”, not “backend”. The service name can be used only when two containers have to communicate on the server side, not when a frontend sends requests from a web browser. This was in my previous question, but your answer is still not clear to me.

My previous extra_hosts idea was bad, by the way, for multiple reasons, so I won’t go into why; but in any case such an entry wouldn’t contain a port number and shouldn’t override localhost. It can be used for other hostnames.

First of all, sorry if my explanation wasn’t well formulated. I will try to explain it better.

To start from the beginning, I want to host my web application on AWS EC2. To do so I have a few options:

  • Copy/paste my code from my local machine to my EC2 server using scp
  • Use GitHub to push my local code and then pull the code on my EC2
  • Use Docker containers

I first used localhost everywhere and it worked very well on my machine: the frontend could send requests to the backend and the backend could reply to the frontend.
Then I wanted to put my code on EC2:

  • Using copy/paste is fine, but only once. If I have to do that each time I change a line of code, it quickly becomes annoying.
  • So Docker was a better option.

I am a complete beginner with Docker, so I had to understand the configurations, hence my lack of knowledge and my mistakes regarding re-routing.
Nevertheless, I still needed to update each URL: instead of using localhost, on EC2 I used the IP address of my EC2 instance. Everything was working as it should.

The next step was to change HTTP to HTTPS => the use of nginx.

But I was asking myself: “it would be very handy if the frontend and backend could communicate with each other without my having to change all the URLs. Imagine I could use a reverse proxy that could do all the job for me! What a wonderful thing!!! I would then just have to update one URL depending on whether I am in the development or the production phase”.

Unfortunately, it is not that easy, and I ended up with an application that is not working.

And now I am trying my best to reach a configuration that makes it very easy to switch from localhost to the EC2 IP address, while learning how to properly use Docker.

Thank you for the explanation.

So you are basically using Docker to avoid having to pull the code on the server. That is not what Docker is for. You still have to configure your application for the environment. With Docker, you push the Docker image to a registry and pull the image on the server from the registry, but the image has to support parameters so you can use different parameters in development mode and in production.

You would need a CI/CD (Continuous Integration / Continuous Deployment) process that works like this (a sketch follows after the list):

  • Push your code to Git
  • Configure a hook on the Git server, or configure the CI/CD service to periodically check for new code, so the CI/CD server can build your image, push it to a Docker registry, and start the deployment process on the server, which means replacing the image with the new one and, if necessary, changing some parameters
  • Wait some seconds / minutes and see your app deployed on the remote server.

This would not be very different without Docker.
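
Purely as an illustration, such a pipeline for the backend image could look like this with GitHub Actions (the workflow, image name, and secrets below are placeholders, not something from your project; any CI/CD service follows the same pattern):

name: build-and-push
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the code, log in to Docker Hub, then build and push the image
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: ./backend
          file: ./backend/Dockerfile.prod
          push: true
          tags: youruser/coaching-backend:latest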

Since you are using Docker Compose, instead of the above process you could have two different compose files. You would need to create those only once, as long as the parameters don’t change, and just change the image on the remote host when you feel it is ready to publish. Then run docker compose up -d so it downloads the new image and recreates the containers.
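
For example, a production override file could be as small as this (a sketch; the image names are placeholders, and the base docker-compose.yml keeps the build sections for development):

# docker-compose.prod.yml (hypothetical)
services:
  backend:
    image: youruser/coaching-backend:latest
  frontend:
    image: youruser/coaching-frontend:latest
  proxy:
    image: youruser/coaching-proxy:latest

Then, on the server: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d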

Let me show why I was confused.

So I basically ignored your nginx proxy config, as it seemed you didn’t use it, instead of telling you that you should. Your proxy listens on port 80 and port 443, so you don’t have to use port 3000 anymore, and you don’t need the ports sections in the compose file for the backend and the frontend.

Your frontend should be on http://localhost and your backend on http://localhost/api. So your frontend just has to send requests from the browser to the api path, which means /api, regardless of what the domain is. But you can also parse the URL in the frontend JS and get the protocol and domain from it if you want to.
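
In code, that could look roughly like this (a sketch; whether you hard-code /api or derive the origin from the page URL is up to you):

import axios from "axios";

// A relative base URL is resolved by the browser against the current page,
// so the same build works on localhost and on the EC2 domain.
const API_BASE_URL = "/api";
// Or, to build an absolute URL from the page itself:
// const API_BASE_URL = `${window.location.origin}/api`;

const apiService = axios.create({ baseURL: API_BASE_URL });

// A signin request then goes to https://<current-domain>/api/auth/signin
apiService.post("/auth/signin", { username: "demo", password: "secret" });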

Okay, I understand perfectly your solution.

So with this approach, I would need to create:

  • 2 docker-compose files: one for development and one for production. I already have created those.
  • Environment variable for the frontend

So I would have, in my docker-compose file:

  • REACT_APP_API_URL=https://51.21.116.236 # use the EC2 public IP => on EC2
  • REACT_APP_API_URL=http://localhost => on my machine

In my React code I would have const API_BASE_URL = process.env.REACT_APP_API_URL, and I would also need environment variables for the backend to replace @CrossOrigin(origins = "http://localhost").

@rimelek Do I understand correctly?

Yes, I think you do. In your case the API URL variable is not even necessary if the frontend always communicates with the backend API on /api, but it is indeed a good idea to parameterize the API URL: if you decide not to use a proxy, you can then simply change the variable to any value. Until then, you could make it optional and use “/api” as the default if the variable is not defined.

@alexiagross The net::ERR_NAME_NOT_RESOLVED error indicates that the domain name you’re trying to reach cannot be resolved to an IP address.

Ensure that the Docker containers can resolve the IP address of the backend service. Docker uses an internal DNS server by default, but network settings might affect this.

docker exec -it <container_id> ping 51.21.116.236

Using a domain name for your production setup, by mapping the IP address to a domain via Cloudflare or AWS Route 53 and then using that domain as the REACT_APP_API_URL in your React app, should work.

Go to the DNS settings on your DNS provider (Cloudflare, GoDaddy, Namecheap, Route 53, etc.):

  • Add an A record:
      • Name: yourdomain.com
      • IPv4 address: 51.21.116.236

Update the REACT_APP_API_URL with your domain.
REACT_APP_API_URL=https://yourdomain.com

Verify DNS propagation is successful.
nslookup yourdomain.com

Ensure your domain has a valid SSL/TLS certificate if using HTTPS.

Ensure your backend server allows requests from your domain.

const cors = require('cors');
app.use(cors({ origin: 'https://yourdomain.com' }));

@sekhar12322 and @rimelek thank you very much for your time. I really appreciate it!

I wanted a way to avoid using environment variables inside my code and to just have an nginx that redirects each request to the corresponding Docker container: frontend or backend.

I already have that in place:

const cors = require('cors');
app.use(cors({ origin: 'https://yourdomain.com' }));

I just need to create an env variable that will switch from:

@CrossOrigin(origins = "https://yourdomain.com")
@RestController
@RequestMapping("/api")

to

@CrossOrigin(origins = "http://localhost")
@RestController
@RequestMapping("/api")

when I am in development or in production.
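
A sketch of how that switch could look in Spring (ALLOWED_ORIGIN is a variable name I just made up; a global configuration like this would replace the per-controller @CrossOrigin annotations):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class CorsConfig implements WebMvcConfigurer {

    // Read the allowed origin from the environment, defaulting to local development
    @Value("${ALLOWED_ORIGIN:http://localhost}")
    private String allowedOrigin;

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/api/**")
                .allowedOrigins(allowedOrigin)
                .allowedMethods("GET", "POST", "PUT", "DELETE");
    }
}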

The way I wanted to code it is not the way to do it, as I now understand.