How do I solve the rewrite problem?

Hello,
The Dockerfile is as follows:

FROM php:8.3-fpm


RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    libonig-dev \
    libzip-dev \
    zip \
    unzip \
    git \
    curl \
    net-tools \
    ncat \
    && apt-get clean && rm -rf /var/lib/apt/lists/*


RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install gd


RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer


RUN groupadd -g 1000 www && \
    useradd -u 1000 -ms /bin/bash -g www www


WORKDIR /var/www


COPY --chown=www:www . .


ENV APP_ENV=local
ENV APP_DEBUG=true


RUN composer install --no-dev --no-interaction --optimize-autoloader --no-scripts


USER www


RUN php artisan package:discover || true

EXPOSE 9000
CMD ["php-fpm"]

The Dockerfile installs the dependencies and copies all the files into the /var/www directory of the image.

The docker-compose.yml is as follows:

services:
  # PHP-FPM Service
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: backend
    restart: unless-stopped
    environment:
      APP_ENV: local
      APP_DEBUG: "true"
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: laravel_db
      DB_USERNAME: laravel_user
      DB_PASSWORD: secret
#    volumes:
#      - ./:/var/www
    ports:
      - 9000:9000
    networks:
      - app_network

  # Web Server
  webserver:
    image: nginx:latest
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d:/etc/nginx/conf.d
    networks:
      - app_network
    depends_on:
      - backend

  # Database
  db:
    image: mariadb:latest
    container_name: db
    restart: unless-stopped
    environment:
      MARIADB_ROOT_PASSWORD: rootsecret
      MARIADB_DATABASE: laravel_db
      MARIADB_USER: laravel_user
      MARIADB_PASSWORD: secret
    volumes:
      - mariadb_data:/var/lib/mysql
      - ./mysql/my.cnf:/etc/mysql/my.cnf
    ports:
      - "3306:3306"
    networks:
      - app_network


networks:
  app_network:
    driver: bridge

volumes:
  mariadb_data:
    driver: local

Now Nginx is serving the unchanged files from the current host directory (via the bind mount), not the ones built into the image. Am I right?

Thank you.

Why should nginx “copy” anything? It seems you want to use nginx as the web server, which should talk to a “FastCGI Process Manager” (FPM) container to serve a PHP application. The behaviour probably depends on the settings in ./nginx/conf.d.

Personally, I would just use php:8.3-apache to avoid needing an additional web server container like nginx to serve PHP.

You might need an additional reverse proxy like nginx or Traefik if you want to run multiple services on the same host with multiple domains/paths.
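For the single-container route, here is a minimal sketch of a php:8.3-apache Dockerfile (the extension list and the public/ document root are assumptions based on a standard Laravel layout):

```dockerfile
# Sketch: serve a Laravel app with Apache instead of nginx + FPM.
# Assumes the usual Laravel layout with the web root in public/.
FROM php:8.3-apache

# Extensions comparable to the FPM setup above; adjust to your needs
RUN apt-get update && apt-get install -y libzip-dev \
    && docker-php-ext-install pdo_mysql zip \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Point Apache's document root at Laravel's public/ directory
ENV APACHE_DOCUMENT_ROOT=/var/www/html/public
RUN sed -ri 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' \
    /etc/apache2/sites-available/*.conf /etc/apache2/apache2.conf

# Enable mod_rewrite for Laravel's routing
RUN a2enmod rewrite

COPY . /var/www/html
```

With this, one container both interprets PHP and serves static files, so no fastcgi_pass and no second web server container are needed.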


Hello,
Thank you so much for your reply.
The Nginx configuration is:

server {
    listen 80;
    index index.php index.html;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass backend:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}

If I don’t mount the files into the Nginx container (./:/var/www), then nothing is shown.

The webserver container can see the backend container:

# nc backend -v 9000
Ncat: Version 7.93 ( https://nmap.org/ncat )
Ncat: Connected to 172.19.0.2:9000.
#
# ping backend
PING backend (172.19.0.2) 56(84) bytes of data.
64 bytes from backend.laravel_app_network (172.19.0.2): icmp_seq=1 ttl=64 time=0.123 ms
64 bytes from backend.laravel_app_network (172.19.0.2): icmp_seq=2 ttl=64 time=0.096 ms
64 bytes from backend.laravel_app_network (172.19.0.2): icmp_seq=3 ttl=64 time=0.090 ms
^C
--- backend ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2037ms
rtt min/avg/max/mdev = 0.090/0.103/0.123/0.014 ms

Maybe you should read about FPM. It’s a server with a special protocol (FastCGI) that nginx connects to. No files are “copied” to nginx.
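If you want nginx and FPM to see the files that were built into the backend image (rather than the raw host files), one common pattern is to share them through a named volume. A sketch against the compose file above; note the caveat that a named volume is only seeded from the image when it is first created:

```yaml
# Sketch: share the app files baked into the backend image with nginx.
# Caveat: a named volume is populated from the image only on first
# creation; remove the volume to pick up a rebuilt image.
services:
  backend:
    build: .
    volumes:
      - app_code:/var/www

  webserver:
    image: nginx:latest
    volumes:
      - app_code:/var/www:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - backend

volumes:
  app_code:
```

This way both containers see the same /var/www: nginx serves the static files from it, and FPM executes the PHP files in it.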

Hi,
Thanks again.
Does the Dockerfile copy the files back to the project folder after compilation, or into the image it creates? If it copies them into its own image, then enabling the volumes section would expose the raw host files to Nginx instead.

If I mount the files into the Nginx container, then why is fastcgi_pass backend:9000; still needed?

A Dockerfile is used to build container images; it writes into the container image, not to the host. You then run the container image, which includes the files.

If you want to “compile” files and make them available, you would usually handle that in a running container, not in a Dockerfile.
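For example, instead of baking composer install into the image, the compose service could run it when the container starts. A sketch reusing the backend service from above (the exact command line is an assumption):

```yaml
services:
  backend:
    build: .
    # Hypothetical: install dependencies at container start,
    # then hand off to the normal php-fpm process
    command: sh -c "composer install --no-interaction && exec php-fpm"
```

The trade-off is slower container startup in exchange for not rebuilding the image on every dependency change.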

It seems this is about a PHP application. The plain nginx Docker image can’t run PHP files; it will just serve static HTML, JS, CSS, and image files. So you either need FPM, need to customize nginx (add an interpreter), or need to use a different image.

Hi,
Thanks again.
1- As you can see, I’m using FPM.

2- Which Nginx image does this?

3- This is a Laravel project. How would you build this project with Docker?

You did not state what you want to achieve. You have PHP, Laravel, FPM, and nginx. How did you arrive at this combination of components? Why insist on FPM and nginx if easier solutions are available?

We don’t really do 1:1 tutoring here; we help solve problems. I recommend starting by reading or watching some tutorials to understand the basics (for example 1, 2, 3).

Hello,
Thanks again.
I want to launch a website (portal) with Docker. This website uses React as the front end and Laravel as the back end. Its database is MariaDB. Since each of these components runs in a separate container, I want to use Nginx to serve them.

React is static, PHP is dynamic and needs a special server.

My approach would be:

  • reverse proxy
    use nginx-proxy or Traefik to handle multiple services
  • nginx or Apache as a static web server for the React files
    maybe use the domain www.example.com
  • php-apache as a dynamic web server for the PHP files, which need to be interpreted
    maybe use the domain api.example.com
  • database
    do not expose ports to the outside; use the internal Docker network only

Optionally, you could let the PHP server also serve the React files out of a single container.
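A multi-stage sketch of that last option, assuming the React app lives in frontend/ and its build emits static files into dist/ (both paths are assumptions):

```dockerfile
# Stage 1: build the React front end
FROM node:20 AS frontend
WORKDIR /app
COPY frontend/ .
RUN npm ci && npm run build    # assumed to emit static files into dist/

# Stage 2: PHP + Apache serving both Laravel and the built React files
FROM php:8.3-apache
RUN docker-php-ext-install pdo_mysql
ENV APACHE_DOCUMENT_ROOT=/var/www/html/public
RUN sed -ri 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' \
    /etc/apache2/sites-available/*.conf /etc/apache2/apache2.conf \
    && a2enmod rewrite
COPY . /var/www/html
# Drop the React build into Laravel's public/ directory
COPY --from=frontend /app/dist/ /var/www/html/public/
```

This keeps the reverse proxy and the database as separate containers while everything web-facing lives in one image.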
