DigitalOcean has several products with different databases. Which product and which database do you mean? A MySQL Database Cluster?
If you have more than one project or several domains, then you should also plan another container with NGINX as a reverse proxy, or alternatively HAProxy.
# Start from a PHP image with Apache
FROM php:8.2-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip \
default-mysql-client # Install MySQL client
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www
USER $user
So I want to set up this production server… how is it possible?
The Dockerfile seems to create an image with a web server included, so it should be a self-sufficient container. You can just open the port.
Why do you want the proxy? The usual two reasons are use of multiple domains/services and TLS management. For those two, I think nginx-proxy and its Let's Encrypt companion are a really easy solution.
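To sketch the idea (the service names, the app image and the domain are just examples, not taken from your setup; check the nginx-proxy and acme-companion documentation for the exact volume and label requirements), a docker-compose.yml could look roughly like this:
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy discover containers
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
  acme-companion:
    image: nginxproxy/acme-companion
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy          # tells the companion which proxy to manage
      - DEFAULT_EMAIL=admin@test.com               # example address
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
  app:
    image: my-app                                  # placeholder for your application image
    environment:
      - VIRTUAL_HOST=test.com                      # domain routed by nginx-proxy
      - LETSENCRYPT_HOST=test.com                  # domain the companion requests a certificate for
volumes:
  certs:
  vhost:
  html:
  acme:
The proxy then routes requests for each domain to the matching container, and the companion handles certificate issuance and renewal automatically.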
Okay, I assume you have created the LAMP stack on a virtual machine.
Then you can install Docker first.
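For example, on a fresh Debian/Ubuntu server this could look like the following (this is the convenience script that is discussed further down in this thread; the manual instructions at docs.docker.com work just as well):
curl -fsSL https://get.docker.com -o get-docker.sh   # download the convenience script
sudo sh get-docker.sh                                # install Docker Engine, CLI and Compose plugin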
Next I would try everything out first and keep the LAMP server running. Note that you first need to make sure that there are no port conflicts with the running LAMP stack. So adjust the port of the web server in the docker-compose.yml file. Then transfer the required files to the server, for example with SFTP.
Execute the "docker compose up" command where the transferred files are located. Test everything, and if it is OK, you can stop Apache, change the port in docker-compose.yml and then execute "docker compose up --build". Check it again.
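The whole sequence on the server could look roughly like this (assuming the project was uploaded to ~/myproject and the distribution uses the apache2 service name, as Debian/Ubuntu do):
cd ~/myproject
docker compose up -d            # start on the alternative port first and test
docker compose logs -f          # watch the logs while testing
sudo systemctl stop apache2     # once everything works, stop the old LAMP Apache
# change the published port back to 80 in docker-compose.yml, then:
docker compose up -d --build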
If something is not clear or errors occur, just ask here in this context.
However, a TLS/SSL certificate would certainly be useful. Have you bought one? Or do you use Let's Encrypt?
If you have another project later, for example a WordPress or Symfony project, then you can add additional containers/services and it would probably make sense to use a reverse proxy on ports 80 and 443 to map Domain Names or paths to the containers.
In the future you could also add CI/CD automation to manage test containers and production containers. And you can also make your service highly available with Docker Swarm. But you should build this up step by step.
I see only PHP-FPM, which doesn't have a web server, despite the comment at the beginning of the file.
It is not really useful to delete the cache in a new RUN instruction, as that will not physically remove it, just hide it in the next layer. Run the cleanup in the same instruction in which apt-get update and apt-get install were executed.
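With the package list from the Dockerfile above, the combined instruction could look like this:
RUN apt-get update && apt-get install -y \
    git curl libpng-dev libonig-dev libxml2-dev zip unzip default-mysql-client \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
Because the download cache is removed within the same layer, the image actually gets smaller, instead of carrying the cache along in a hidden lower layer.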
You already got useful tips, so I just want to point out that "production" could mean many things. So it is hard to tell you what you need. You will rather get suggestions depending on what others mean by "production", without knowing anything about your environment and requirements.
I have to point out that the first paragraph of the readme contains:
It is not recommended to depend on this script for deployment to production systems
There is an NGINX web server as a separate container. We cannot see the configuration of NGINX. But chithirakumarm said it all works. Therefore, we can assume that the requests for PHP are forwarded from NGINX to the PHP container.
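For illustration only, a typical NGINX server block for such a Laravel-style setup could look like the sketch below ("app" as the PHP service name and the /var/www/public document root are assumptions, not taken from the actual configuration):
server {
    listen 80;
    root /var/www/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # forward PHP requests to the PHP-FPM container on port 9000
        fastcgi_pass app:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}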
I heard about the script in a tutorial by Bret Fisher; he recommended the script and, if I remember correctly, advised against installing Docker via the distribution's packages because they are not up to date and therefore some features are not available.
Why is the script not intended for production use? What is meant by that?
I could imagine that it is not suitable for installing Docker in automated environments, because the script can change in the future and things can happen that you don't expect.
Or is it simply that the Docker versions used in the respective distributions are tested for compatibility with the distribution?
I would even go so far as to say that it is better to use the script because it ensures that you use the latest version locally for testing and the same version in the production environment. At least if the Linux distribution in the test environment is not the same as on the production system.
I only replied to the statement that the image contains the webserver, which is not the case. I had no intention to disprove anything else.
The script can be used in a tutorial, when all you want is Docker installed on your machine. I'm sure Bret Fisher would not recommend using the script in production. Regarding "installing it via the distribution", he probably meant that Ubuntu has its own apt package for Docker and also a Snap package. Neither of those is recommended, and the official documentation starts with the instructions to uninstall docker.io, which is Docker CE from Ubuntu's own repository. The script would use the official repository provided by Docker Inc, so it could be better than using the one recommended by Ubuntu, which has already caused a lot of trouble for users.
I wouldn't use a script in production unless it is well documented and officially announced to be supported. Otherwise it is a kind of black box. You don't know what it does and why (yes, you can check the content and try to understand it); you just hope it will install a usable version and will not break anything. There are some instructions at the beginning of the script file, for example to install a specific version, but it also explains the risks:
# The script:
#
# - Requires `root` or `sudo` privileges to run.
# - Attempts to detect your Linux distribution and version and configure your
# package management system for you.
# - Doesn't allow you to customize most installation parameters.
# - Installs dependencies and recommendations without asking for confirmation.
# - Installs the latest stable release (by default) of Docker CLI, Docker Engine,
# Docker Buildx, Docker Compose, containerd, and runc. When using this script
# to provision a machine, this may result in unexpected major version upgrades
# of these packages. Always test upgrades in a test environment before
# deploying to your production systems.
# - Isn't designed to upgrade an existing Docker installation. When using the
# script to update an existing installation, dependencies may not be updated
# to the expected version, resulting in outdated versions.
Trust me, you don't always want to install the latest version in a production environment as long as multiple versions are supported. But that could be another discussion. I would rather install a version known to be stable, install the same one in the dev environment, and also have some kind of sandbox where I can test the latest features.
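On Debian/Ubuntu with Docker's apt repository already configured, pinning a specific version looks roughly like this (the version string below is a placeholder, not a recommendation):
apt-cache madison docker-ce            # list the versions available in the repository
sudo apt-get install docker-ce=<version-string> docker-ce-cli=<version-string>
That way the dev and production machines can be kept on exactly the same, known-good release.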
Thank you for all the replies. Don't be confused, seniors…
Local - Locally, I use 127.0.0.1:8000 as the host on the server, with DB_HOST=host.docker.internal.
Why am I using host.docker.internal? Because I connect to the existing DB with MySQL Workbench locally, so I don't write anything about the DB connection in my docker-compose.yml file, only in my .env file. Locally, everything is working fine.
How can I set up the same thing on my production server?
Production - I have one domain; an example name is test.com. But the Nginx service has one port mapping, 8000:80. All other files are the same.
host.docker.internal only works with Docker Desktop.
Usually you would use the service name (or container internal hostname) in production, when connecting to a different container. Docker provides an internal DNS service to enable this.
Best practice is to use a Docker network, and not to expose ports to the outside world except for the proxy or the application.
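A minimal sketch of that pattern (service names and the credential value are examples): the app reaches the database through the service name on the default Compose network, and only NGINX publishes a port:
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"            # the only published port
  app:
    build: .
    environment:
      - DB_HOST=db         # service name, resolved by Docker's internal DNS
  db:
    image: mysql:8.0
    environment:
      - MYSQL_ROOT_PASSWORD=change-me   # example value only
    # no "ports:" entry, so MySQL is only reachable inside the Docker network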
I run MySQL in a container in production, so I can use the service name to access it on a common Docker network. If you want to use something like host.docker.internal with Docker CE, there is a way to implement something similar.
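For example, with Docker Engine 20.10 or later you can map the name to the host gateway yourself in docker-compose.yml:
services:
  app:
    extra_hosts:
      - "host.docker.internal:host-gateway"   # maps the name to the host's gateway IP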
Sorry, it was very late and I didn't read carefully.
I looked it up again because it was a long time ago. Docker Desktop is of course recommended for the tutorial, not Docker Engine. In a separate video, he talks about production servers and says that you should install Docker manually the first time using the official instructions, but then you can use the script. He says he uses the script himself a lot.
As far as I can tell, the script does almost the same thing in my case as the official installation instructions. I understand that there are reasons not to use the script, but in this case it should be fine. The repository of the script is maintained solely, and apparently regularly (last update 2 days ago), by Docker, Inc. So the risk should be low at this time and in this case. He can test everything before taking down the LAMP stack.
Yes, of course you're absolutely right!
There are some cases where that makes sense, like a MySQL Database Cluster, Amazon RDS, Google Cloud SQL, Azure SQL Database, or another managed cloud database service.
But connecting to a database instance on the local server on which Docker is running makes no sense. An isolated container with a database instance should be preferred.
I'm just not quite sure which database @chithirakumarm means in the production system. Would you like to continue using your old database from the LAMP stack, or is it a separate DigitalOcean product?
I would recommend moving the old database from the LAMP stack to a container with a new database. You have to create the users and passwords in the new database. Then create a backup of the old database with mysqldump and import this backup into the new database server.
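Roughly like this, assuming the database is called mydb and the new MySQL container is named mysql (both names are examples):
# dump from the old LAMP MySQL, prompting for the password
mysqldump -u root -p mydb > mydb-backup.sql
# create the database and users in the new container first, then import;
# MYSQL_ROOT_PASSWORD is the env variable set in the official mysql image
docker exec -i mysql sh -c 'mysql -u root -p"$MYSQL_ROOT_PASSWORD" mydb' < mydb-backup.sql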
He is a Docker Captain and knows exactly what he is doing. You can always do things that are not recommended when you know the risks and you know how to handle anything that happens. If beginners start to do things that are not recommended even by the authors, they won't be able to decide when they should and shouldn't do that. Your link required authentication, but even after login I don't know where I should look for the statement you are referring to; I don't have a subscription. Without knowing exactly what the statement was and what the context was, I can't say much. All I can say is that in production we should recommend tools made for production environments. A script can also have bugs. Let's say a simple error in a condition was not found when the developers tested it. Since the script wants to do many things, there is a bigger chance that it fails on some systems. I could continue finding other reasons, but I don't think it is really useful in this topic to explain why we should not use ways that are not officially recommended, especially in production.
Databases can be exceptions, of course, and even other kinds of applications, especially when the application's protocol is not supported behind a reverse proxy (for example, when an old app can't check the remote IP properly). I believe @bluepuma77 would agree with that too, but using a reverse proxy in production, instead of running everything on different ports, is indeed the best practice.