How to set up Laravel in Docker on a production server

My docker-compose.yml file:

version: '3'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        user: testuser
        uid: 1000
    container_name: test-app
    restart: unless-stopped
    volumes:
      - ./:/var/www
    networks:
      - default

  nginx:
    image: nginx:alpine
    container_name: test-nginx
    restart: unless-stopped
    volumes:
      - ./:/var/www
      - ./docker-compose/nginx:/etc/nginx/conf.d/
    ports:
      - 8000:80
    networks:
      - default

networks:
  default:
    driver: bridge

1. I already have a database on DigitalOcean, so I want to connect this application to that database.
2. How do I set up this application on a production server?

2 Likes

Laravel is a PHP framework, so you will also need a container with PHP:
https://hub.docker.com/_/php

This may help to guide you:

Alternatively, you can also use NGINX Unit.

DigitalOcean has several products with different databases. Which product and which database do you mean? A managed MySQL Database Cluster?

If you have more than one project or several domains, then you should also plan another container with NGINX as a reverse proxy, or alternatively HAProxy:

https://hub.docker.com/_/haproxy/
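
For example, a minimal sketch of an NGINX reverse-proxy configuration could look like this (the domain is a placeholder; it assumes the project containers share a Docker network with the proxy, and "test-nginx" is the web container from your compose file):

# /etc/nginx/conf.d/default.conf in the proxy container (hypothetical example)
server {
    listen 80;
    server_name test.example.com;

    location / {
        # "test-nginx" is the project's own web container on the shared network
        proxy_pass http://test-nginx:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

A second server block with another server_name would then map a second domain to another container.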

1 Like

No one knows what your mysterious app service/container does; you should probably share the Dockerfile.

My main question is whether it serves web pages by itself or needs a separate web server.

1 Like

Thanks for your reply, sir.

This is my Dockerfile:

# Start from a PHP image with Apache
FROM php:8.2-fpm

# Arguments defined in docker-compose.yml
ARG user
ARG uid

# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    zip \
    unzip \
    default-mysql-client  # Install MySQL client

# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd

# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
    chown -R $user:$user /home/$user

# Set working directory
WORKDIR /var/www

USER $user

So I want to set this up on the production server.
How is this possible?

1 Like

The Dockerfile seems to create an image with a web server included, so it should be a self-sufficient container. You can just open the port.

Why do you want the proxy? The usual two reasons are use of multiple domains/services and TLS management. For those two I think nginx-proxy and its Let’s Encrypt companion are a really easy solution.
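
A rough, untested compose sketch of that setup (image names and volume layout along the lines of the nginx-proxy/acme-companion docs; domain and e-mail are placeholders, so check the projects’ documentation for the exact wiring):

services:
  proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy discover containers
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html

  acme:
    image: nginxproxy/acme-companion
    environment:
      - DEFAULT_EMAIL=admin@example.com            # placeholder
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh

  nginx:
    # your existing Laravel web server service, now without a published port,
    # just telling the proxy which domain it serves
    environment:
      - VIRTUAL_HOST=test.com                      # placeholder domain
      - LETSENCRYPT_HOST=test.com

volumes:
  certs:
  vhost:
  html:
  acme: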

1 Like

I have already set this up locally, and it worked perfectly for me.
I use the DB host in .env:

DB_HOST=host.docker.internal

And this is my local server address: 127.0.0.1:8010/

Port 8010 is defined for the nginx service in the docker-compose.yml file.

This all worked for me locally.


Now what do I do for our production server?

Looks like you are using Docker Desktop on Windows. Right?

Which DigitalOcean product are you already using? A Virtual machine?

1 Like

I’m using Docker Desktop on Linux.

I’m already using an Apache server on DigitalOcean, with MySQL Workbench and PHP (LAMPP).

Okay, I assume you have created the LAMP stack on a virtual machine.

Then you can install Docker first

Next, I would try everything out first and keep the LAMP server running. Note that you first need to make sure that there are no port conflicts with the running LAMP stack, so adjust the port of the web server in the docker-compose.yml file. Then transfer the required files to the server, for example with SFTP.
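
For example, while Apache still occupies port 80, the published port of the nginx service must stay on a free one, as it already does in your file; only after Apache is stopped would it be switched to 80:

  nginx:
    ports:
      - "8000:80"   # free host port while Apache still listens on 80
      # later, once Apache is stopped: - "80:80"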

Execute the “docker compose up” command where the transferred files are located. Test everything, and if it is OK, you can stop Apache, change the port in docker-compose.yml, and then execute “docker compose up --build” again. Check it again.
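
Roughly, the steps on the server could look like this (the path and host are placeholders):

# copy the project to the server, e.g. with SFTP or rsync
rsync -av ./ user@your-server:/srv/test-app/

# on the server, in the directory with docker-compose.yml
cd /srv/test-app
docker compose up -d --build
docker compose ps        # check that the containers are running
docker compose logs -f   # watch the logs while testing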

If something is not clear or errors occur, just ask here in this context.

However, a TLS/SSL certificate would certainly be useful. Have you bought one, or do you use Let’s Encrypt?

If you have another project later, for example a WordPress or Symfony project, then you can add additional containers/services, and it would probably make sense to use a reverse proxy on ports 80 and 443 to map domain names or paths to the containers.

In the future you could also add CI/CD automation to manage test containers and production containers. And you can also make your service highly available with Docker Swarm. But you should build this up step by step.

2 Likes

Thank you for your help, sir.

I will, sir.

You are welcome.

I also found some useful information on how to use Compose in production under:

I see only PHP-FPM, which doesn’t have a web server, despite the comment at the beginning of the file.

It is not really useful to delete the cache in a new RUN instruction, as that will not physically remove it from the image, just hide it in the next layer. Run the cleanup in the same instruction in which apt-get update and install were executed, as sketched below.
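
For example, the two RUN instructions could be merged into one, so the apt cache never ends up in a layer (same packages as in your Dockerfile):

# Install system dependencies and clean up in the same layer
RUN apt-get update && apt-get install -y \
        git \
        curl \
        libpng-dev \
        libonig-dev \
        libxml2-dev \
        zip \
        unzip \
        default-mysql-client \
    && apt-get clean && rm -rf /var/lib/apt/lists/*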

You already got useful tips, so I just want to point out that “production” could mean many things, so it is hard to tell you what you need. You will rather get suggestions based on what others mean by “production”, without knowing anything about your environment and requirements.

I have to point out that the first paragraph of the readme contains

It is not recommended to depend on this script for deployment to production systems

So this would be a better link:

1 Like

There is an NGINX web server as a separate container. We cannot see the configuration of NGINX. But chithirakumarm said it all works. Therefore, we can assume that the requests for PHP are forwarded from NGINX to the PHP container.
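
The NGINX site config for that typically contains a fastcgi_pass to the PHP container. A minimal sketch, assuming the PHP-FPM service is called app as in the compose file and Laravel lives in /var/www, could look like this:

server {
    listen 80;
    root /var/www/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # "app" is the PHP-FPM service from docker-compose.yml, listening on port 9000
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}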

I heard about the script in a tutorial by Bret Fisher, and he recommended the script and, if I remember correctly, advised against installing it via the distribution because it is not up to date and therefore some features are not available.

Why is the script not intended for production use? What is meant by that?

I could imagine that it is not suitable for installing Docker in automated environments, because the script can change in the future and things can happen that you don’t expect.

Or is it simply that the Docker versions used in the respective distributions are tested for compatibility with the distribution?

I would even go so far as to say that it is better to use the script because it ensures that you use the latest version locally for testing and the same version in the production environment. At least if the Linux distribution in the test environment is not the same as on the production system.

1 Like

I only replied to the statement that the image contains the web server, which is not the case. I had no intention to disprove anything else.

The script can be used in a tutorial, when all you want is Docker installed on your machine. I’m sure Bret Fisher would not recommend using the script in production. Regarding “installing it via distribution”, he probably meant that Ubuntu has its own apt package for Docker and also a Snap package. Neither of those is recommended, and the official documentation starts with the instruction to uninstall docker.io, which is Docker CE from Ubuntu’s own repository. The script would use the official repository provided by Docker Inc, so it could be better than using the one recommended by Ubuntu, which has already caused a lot of trouble for users.
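
Roughly, the documented way on Ubuntu is to remove the distribution packages and install from Docker’s own apt repository; this is a condensed sketch, so check the current official instructions before running it:

# remove Ubuntu's own packages first
sudo apt-get remove docker.io docker-doc docker-compose podman-docker containerd runc

# add Docker's official repository
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# install Docker Engine and the Compose plugin
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin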

I wouldn’t use a script in production unless it is well documented and officially announced to be supported. Otherwise it is a kind of black box. You don’t know what it does and why (yes, you can check the content and try to understand it); you just hope it will install a usable version and will not break anything. There are some instructions at the beginning of the script file to install a specific version, for example, but it also explains the risks:

https://github.com/docker/docker-install/blob/3ea1bdc980d1bc31e0297f5225b4bda4cbdbd07e/install.sh#L5

# The script:
#
# - Requires `root` or `sudo` privileges to run.
# - Attempts to detect your Linux distribution and version and configure your
#   package management system for you.
# - Doesn't allow you to customize most installation parameters.
# - Installs dependencies and recommendations without asking for confirmation.
# - Installs the latest stable release (by default) of Docker CLI, Docker Engine,
#   Docker Buildx, Docker Compose, containerd, and runc. When using this script
#   to provision a machine, this may result in unexpected major version upgrades
#   of these packages. Always test upgrades in a test environment before
#   deploying to your production systems.
# - Isn't designed to upgrade an existing Docker installation. When using the
#   script to update an existing installation, dependencies may not be updated
#   to the expected version, resulting in outdated versions.

Trust me, you don’t always want to install the latest version in a production environment as long as multiple versions are supported. :slight_smile: But that could be another discussion. I would rather install a version known to be stable, install the same one in the dev environment, and also have some kind of sandbox where I can test the latest features.
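
For example, with the apt repository you can list the versions Docker provides and pin one instead of always taking the latest (the version string below is only an example):

# list the versions available in Docker's repository
apt-cache madison docker-ce | awk '{ print $3 }'

# install the same, known-good version in every environment
VERSION_STRING=5:27.3.1-1~ubuntu.24.04~noble   # example value, pick your own
sudo apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING \
  containerd.io docker-buildx-plugin docker-compose-plugin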

2 Likes

Thank you for all the replies. I hope I’m not confusing anything, seniors.


Local - Locally, I open 127.0.0.1:8000 as the host, with DB_HOST=host.docker.internal.
Why am I using host.docker.internal? Because I connect to the existing database with MySQL Workbench locally, so I don’t write anything about the DB connection in my docker-compose.yml file, only in my .env file. Locally, everything is working fine.

Now, how do I set up the same thing on my production server?

Production - I have one domain; an example name is test.com. But the nginx service has one port mapping, 8000:80. And all the other files are the same.

What should I do?

host.docker.internal only works with Docker Desktop.

Usually you would use the service name (or container internal hostname) in production, when connecting to a different container. Docker provides an internal DNS service to enable this.

Best practice is to use a Docker network and not expose ports to the outside world, except for the proxy or the application.

I run MySQL in a container in production, so I can use the service name to access it on a common Docker network. If you want to use something like host.docker.internal with Docker CE, there is a way to implement something similar.
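
A small sketch in docker-compose.yml (service names and credentials are just examples):

services:
  app:
    # with a database container on the same network, Laravel's .env would
    # simply use the service name, e.g. DB_HOST=db
    # if you really need host.docker.internal on Docker Engine (CE) 20.10+,
    # it can be mapped to the host gateway like this:
    extra_hosts:
      - "host.docker.internal:host-gateway"

  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: laravel        # example values
      MYSQL_USER: laravel
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - dbdata:/var/lib/mysql

volumes:
  dbdata: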

Sorry, it was very late and I didn’t read carefully.

I looked it up again because it was a long time ago. Docker Desktop is of course recommended for the tutorial, not Docker Engine. In a separate video, he talks about production servers and says that the first time you should install Docker manually using the official instructions, but after that you can use the script. He says he uses the script himself a lot.

https://www.udemy.com/course/docker-mastery/learn/lecture/33241966#overview

As far as I can tell, the script does almost the same thing in my case as the official installation instructions do. I understand that there are reasons not to use the script, but in this case it should be fine. The repository of the script is maintained solely by Docker, Inc., and apparently regularly (last update 2 days ago). So the risk should be low at this time and in this case. He can test everything before taking down the LAMP stack.

Yes, of course you’re absolutely right!

There are some cases where that makes sense, like a MySQL Database Cluster, Amazon RDS, Google Cloud SQL, Azure SQL Database, or another managed cloud database service.

But connecting to a database instance on the local server on which Docker is running makes no sense. An isolated container with a database instance should be preferred.

I’m just not quite sure which database @chithirakumarm means in the production system. Would you like to continue using your old database from the LAMP stack, or is it a separate DigitalOcean product?

I would recommend moving the old database from the LAMP stack to a container with a new database. You have to create the users and passwords in the new database. Then create a backup of the old database with mysqldump and import this backup into the new database server.
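
Roughly (container, user and database names are placeholders):

# dump the old database from the LAMP stack
mysqldump -u old_user -p old_database > backup.sql

# import the dump into the new MySQL container
# ("db" would be the service/container name of the new database)
docker exec -i db mysql -u new_user --password=new_password new_database < backup.sql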

He is a Docker Captain and knows exactly what he is doing. You can always do things that are not recommended when you know the risks and how to handle anything that happens. If beginners start to do things that are not recommended even by the authors, they won’t be able to decide when they should and shouldn’t do that. Your link required authentication, but even after logging in I don’t know where I should look for the statement you are referring to; I don’t have a subscription. Without knowing exactly what the statement was and what the context was, I can’t say much. All I can say is that in production we should recommend tools made for production environments. A script can also have bugs. Let’s say a simple error in a condition was not found when the developers tested it. Since the script wants to do many things, there is a bigger chance that it fails on some systems. I could continue finding other reasons, but I don’t think it is a really useful discussion in this topic to explain why we should not use ways that are not officially recommended :slight_smile: Especially in production.

Databases can be exceptions, of course, and even other kinds of applications, especially when the application’s protocol is not supported behind a reverse proxy (for example when an old app can’t check the remote IP properly). I believe @bluepuma77 would agree with that too, but using a reverse proxy in production, instead of running everything on different ports, is indeed the best practice.

But how do I connect to the DigitalOcean managed database? My DigitalOcean server has LAMP.

DB_CONNECTION=mysql
DB_HOST=xxx.xxx.xxx.xxx #secret_host
DB_PORT=3306
DB_DATABASE=secret_database
DB_USERNAME=secret_username
DB_PASSWORD=secret_password
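
For a DigitalOcean managed MySQL database, the values come from the cluster’s “Connection details” panel; typically the port is 25060 rather than 3306 and TLS is required, so a sketch of the .env could look like this (host, credentials and CA path are placeholders):

DB_CONNECTION=mysql
DB_HOST=your-cluster-do-user-000000-0.b.db.ondigitalocean.com
DB_PORT=25060
DB_DATABASE=secret_database
DB_USERNAME=secret_username
DB_PASSWORD=secret_password
MYSQL_ATTR_SSL_CA=/var/www/ca-certificate.crt

Recent Laravel versions already pass MYSQL_ATTR_SSL_CA from .env into the PDO options of the mysql connection in config/database.php, so downloading the cluster’s CA certificate into the container and pointing that variable at it should be enough; double-check the exact host, port and SSL requirements in the DigitalOcean control panel.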