Increasing the RAM available to a container in docker-compose

I wrote a frontend application with React, but the application does not work in the container.

I found the reason for the crash (a stack overflow), and it seems to be related to RAM usage.

How can I apply the commands from the link below inside my Docker container? Would the approach described there solve the problem? Can you help me?

Node.js: what is ENOSPC error and how to solve?

Would increasing the container's RAM limit solve the problem?

Could you help?

Dockerfile.dev

 FROM node:alpine
 WORKDIR '/app'
 COPY package.json .
 RUN npm install
 COPY . . 
 EXPOSE 3000
 CMD ["npm", "run", "start"]

docker-compose.yml

 version: '3'
 services:
   nginxproxy:
     build:
       context: .
       dockerfile: Dockerfile.lets
     container_name: nginxproxy
     networks:
      - nginx_network
     restart: always
     expose:
       - 80
     ports:
       - "443:443"
       - "80:80"
     environment:
       DOMAIN: mySite.com
       EMAIL: mymailaddress@hotmail.com
       RENEW_INTERVAL: 12h
     volumes:
       - ./certificates:/usr/share/nginx/certificates
       - ./default.conf:/etc/nginx/conf.d/default.conf

   web:
     build:
       context: .
       dockerfile: Dockerfile.dev
     container_name: web
     networks:
      - nginx_network
     expose:
       - 3000
     ports:
       - "3000:3000"
     depends_on:
       - nginxproxy
     volumes:
       - /app/node_modules
       - .:/app
   tests:
     build:
       context: .
       dockerfile: Dockerfile.dev
     container_name: tests
     volumes:
       - /app/node_modules
       - .:/app
     command: ["npm","run","test"]


 networks:
   nginx_network:
     driver: bridge

result

 root@docker-frontend:~/frontend# docker-compose up
 Creating network "frontend_nginx_network" with driver "bridge"
 Creating network "frontend_default" with the default driver
 Creating nginxproxy ... done
 Creating tests      ... done
 Creating web        ... done
 Attaching to nginxproxy, web, tests
 nginxproxy    | Generating RSA private key, 4096 bit long modulus (2 primes)
 nginxproxy    | .................................++++
 nginxproxy    | .......................................++++
 nginxproxy    | e is 65537 (0x010001)
 nginxproxy    | ./nginx-letsencrypt: line 13: sl: not found
 nginxproxy    | req: Skipping unknown attribute "EMAIL"
 nginxproxy    | Signature ok
 nginxproxy    | subject=C = PT, ST = World, L = World, O = mySite.com, OU = mySite, CN = mySite.com
 nginxproxy    | Getting Private key
 nginxproxy    | Setting up watches.
 nginxproxy    | Watches established.
 nginxproxy    | 2020/05/02 20:06:16 [emerg] 11#11: host not found in upstream "web:3000" in /etc/nginx/conf.d/default.conf:2
 nginxproxy    | nginx: [emerg] host not found in upstream "web:3000" in /etc/nginx/conf.d/default.conf:2
 web           | 
 web           | > customerfollow@1.0.0 start /app
 web           | > react-scripts start
 web           | 
 tests         | 
 tests         | > customerfollow@1.0.0 test /app
 tests         | > react-scripts test
 tests         | 
 tests         | No tests found, exiting with code 0
 tests         | 
 web           | [HPM] Proxy created: /api/token/  -> http://localhost:8000/
 web           | ℹ 「wds」: Project is running at http://172.20.0.3/
 web           | ℹ 「wds」: webpack output is served from 
 web           | ℹ 「wds」: Content not from webpack is served from /app/public
 web           | ℹ 「wds」: 404s will fallback to /
 web           | Starting the development server...
 web           | 
 web exited with code 0
root@docker-frontend:~/frontend# docker container ps -a
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS                      PORTS                                      NAMES
f6cc62675057        frontend_web          "docker-entrypoint.s…"   22 minutes ago      Exited (0) 22 minutes ago                                              web
ff958840aa9f        frontend_tests        "docker-entrypoint.s…"   22 minutes ago      Up 22 minutes               3000/tcp                                   tests
05a292cc63e3        frontend_nginxproxy   "./nginx-letsencrypt…"   22 minutes ago      Up 22 minutes               0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginxproxy

Your compose.yml does not have any resource constraints for CPU/RAM usage (for compose deployments; for swarm deployments), so the containers use all CPU/RAM resources available to the host.

In production, people usually define CPU and RAM resource constraints to achieve predictable resource usage.
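
For example, with the v3 compose file format a limit looks roughly like this (under deploy.resources it is honored by swarm, or by docker-compose when started with --compatibility; the values are placeholders):

services:
  web:
    # ...
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 512M

With the v2 file format the same is expressed as mem_limit and cpus directly on the service.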

Make sure that your upload folder uses a volume to store the uploaded data. If necessary, apply the suggested inotify-related changes directly on your host OS…
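
If the root cause turns out to be the inotify watcher limit from the linked ENOSPC question, the usual fix is applied on the host rather than inside the container; a sketch (524288 is just the commonly suggested value):

# run on the Docker host, not inside a container
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p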

Hello,

How can I add the following command to the docker-compose.yml file?

The Docker container worked with ulimit nofile, but I need to add it to the compose file.

docker run -it --ulimit nofile=1024 frontend_web

Are you aware of the Compose file version 3 reference? You should make it a habit to look up configuration elements there.

According to the reference, the following is available even in the v3 spec:

version: '3'
services:
  your_service:
    ...
    ulimits:
      nproc: 65535
      nofile:
        soft: 20000
        hard: 40000
    ...
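
Applied to the web service from your compose file, the equivalent of docker run --ulimit nofile=1024 would look something like this (same soft and hard limit assumed):

  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ulimits:
      nofile:
        soft: 1024
        hard: 1024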

@meyay

I am getting a “bad gateway” error again.
Why does the React application not come up?

Where am I making a mistake?

Please share your default.conf. Sidenote: people usually map an nginx.conf from the host to /etc/nginx/nginx.conf inside the container.
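
Such a mapping is just another bind mount on the proxy service, for example (assuming an nginx.conf next to the compose file):

  nginxproxy:
    # ...
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro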

upstream frontend_server{
      server web:3000;
}

server{
  listen 80;
  resolver 127.0.0.11;

  location /.well-known/acme-challenge/ {
      root /var/www/certbot;
  }
  location / {
      return 301 https://$host$request_uri;
  }
}

# SSL
server {
    listen 443 ssl;
    resolver 127.0.0.11;

    ssl_certificate /usr/share/nginx/certificates/fullchain.pem;
    ssl_certificate_key /usr/share/nginx/certificates/privkey.pem;
    include /etc/ssl-options/options-nginx-ssl.conf;
    ssl_dhparam /etc/ssl-options/ssl-dhparams.pem;

  location / {
       proxy_pass http://frontend_server;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header Host $host;
       proxy_redirect off;

       #Websocket support
       proxy_http_version 1.1;
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "upgrade";
  }
}

Hmm, your nginx.conf looks about right. You even got the resolver for the Docker network's internal DNS server covered.

I don’t remember ever using proxy_redirect off; and I am not sure if the missing space in upstream frontend_server{ has any negative effect. Apart from that, everything looks fine.

This is how I used to make the reverse proxy rule:

Used to make, because I switched to Traefik and its container/service label based rules…
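
One common variant of such a rule resolves the upstream at request time, so nginx can start even while the web container is still down (avoiding the “host not found in upstream” error from the log above). A minimal sketch, not necessarily the exact rule referred to here, with service name web and port 3000 assumed:

location / {
    resolver 127.0.0.11 valid=10s;
    set $frontend http://web:3000;
    proxy_pass $frontend;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}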

Yes, everything is correct, but when I run it with “docker-compose up -d”, one of the containers shuts down.

It runs at first, then the container exits.

root@docker-frontend:~/frontend# docker container ps -a
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                      NAMES
beaa5cf3bb63        frontend_web          "docker-entrypoint.s…"   47 seconds ago      Up 4 seconds        0.0.0.0:3000->3000/tcp                     web
c8edcf339cc0        frontend_tests        "docker-entrypoint.s…"   54 seconds ago      Up 15 seconds       3000/tcp                                   tests
a5183affebea        frontend_nginxproxy   "./nginx-letsencrypt…"   54 seconds ago      Up 1 second         0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginxproxy
root@docker-frontend:~/frontend# docker container ps -a
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS                     PORTS                                      NAMES
beaa5cf3bb63        frontend_web          "docker-entrypoint.s…"   52 seconds ago      Exited (0) 3 seconds ago                                              web
c8edcf339cc0        frontend_tests        "docker-entrypoint.s…"   59 seconds ago      Up 19 seconds              3000/tcp                                   tests
a5183affebea        frontend_nginxproxy   "./nginx-letsencrypt…"   59 seconds ago      Up 6 seconds               0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginxproxy

This explains why the reverse proxy has a problem with the upstream 🙂 So there was nothing wrong with it in the first place.

I could have helped you with the reverse proxy part, but I am not able to help you with the React part. You need to check the logs of the web container and figure out what leads to the container exiting.
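
To inspect those logs, something like the following should do (container name web taken from your compose file):

docker logs web
# or follow the output live via compose:
docker-compose logs -f web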

I have now gone back to the previous version, but it gives the same error there as well.
I don't think it is a React error; as far as I understand, the container does not work.