I’m planning out a server upgrade for an organization that has typically run all apps/services natively, but wants to take advantage of Docker containers. I’m developing this plan on a test server before putting it into production. My first step is to set up an Nginx container as a reverse proxy for several subdomains. All communication should happen over SSL, so I’m using this guide to get the certs:
When I start this guide I have no containers running. The guide was written in September of 2023, so it isn’t too old (though it still uses Compose v1). I’m not sure if the guide is missing steps, or if it was written for an audience with more Docker experience who can “read between the lines” and infer the steps that aren’t written down, but things aren’t working the way I would expect them to, which I shall now enumerate:
1.) This is the compose file posted in the guide:
version: '3'
services:
  webserver:
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - ./nginx/conf/:/etc/nginx/conf.d/:ro
      - ./certbot/www/:/var/www/certbot/:ro
  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./certbot/www/:/var/www/certbot/:rw
      - ./certbot/conf/:/etc/letsencrypt/:rw
And this is the command being used to run the compose file:
docker-compose run --rm certbot certonly --webroot --webroot-path /var/www/certbot/ --dry-run -d [domain-name]
I realize the docker compose run command operates on a single service, as suggested by the singular SERVICE in the command’s help output:
typoknig@test-server:~$ docker compose run --help
Usage: docker compose run [OPTIONS] SERVICE [COMMAND] [ARGS...]
But I would expect the guide to give instructions that lead the reader to start all of the containers required to achieve its goal, which makes me feel like I’m missing something. As it is, only the Certbot container is started when following the guide. To make the guide work, I have to run this command first (I’m using Docker Compose v2):
docker compose run --rm --service-ports -d webserver
Note the addition of the --service-ports option.
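For what it’s worth, I assume simply bringing the webserver service up would accomplish the same thing, though the guide never mentions that step either:

docker compose up -d webserver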
Q1.) Is the writer of the guide expecting that I already have an Nginx container (my “real” Nginx container) running?
Q2.) If “Yes” to Q1, is the writer of the guide intending that my “real” Nginx container have a volume configuration similar to what is shown in the compose file above to facilitate communication with Certbot? (I sketch what I think that would look like just after these questions.)
Q3.) If “Yes” to Q2, does this volume configuration pose any security risk in a production environment?
Q4.) Let’s Encrypt certs expire after 90 days, so if I make a script to renew them (roughly sketched below), and I have my “real” Nginx container running, am I going to have to stop my “real” Nginx container first so that the Nginx container defined in the compose file above can use ports 80 and 443?
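To make Q2 concrete, this is roughly the volume configuration I’m assuming my “real” Nginx service would need, so it can serve the ACME challenge files Certbot writes and read the issued certs (mounting ./certbot/conf/ at /etc/nginx/ssl/ is my guess, based on the ssl_certificate paths in the config further down):

  webserver:
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - ./nginx/conf/:/etc/nginx/conf.d/:ro
      - ./certbot/www/:/var/www/certbot/:ro   # webroot Certbot writes ACME challenges into
      - ./certbot/conf/:/etc/nginx/ssl/:ro    # directory Certbot writes certs to (/etc/letsencrypt on its side)

And for Q4, the renewal script I have in mind is roughly this (assuming the same compose file and directory layout as above):

#!/bin/bash
# Renew any certs that are close to expiry, using the same webroot as the initial request
docker compose run --rm certbot renew --webroot --webroot-path /var/www/certbot/
# Tell the running Nginx container to pick up the renewed certs
# (assuming the webserver service from the compose file is the one serving traffic)
docker compose exec webserver nginx -s reload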
2.) The guide starts with this Nginx config:
server {
    listen 80;
    listen [::]:80;
    server_name [domain-name] www.[domain-name];
    server_tokens off;
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    location / {
        return 301 https://[domain-name]$request_uri;
    }
}
Then near the end, after the certs have been obtained, this is added to the config:
server {
    listen 443 default_server ssl http2;
    listen [::]:443 ssl http2;
    server_name [domain-name];
    ssl_certificate /etc/nginx/ssl/live/[domain-name]/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/[domain-name]/privkey.pem;
    location / {
        proxy_pass http://[domain-name];
    }
}
Resulting in:
server {
    listen 80;
    listen [::]:80;
    server_name [domain-name] www.[domain-name];
    server_tokens off;
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    location / {
        return 301 https://[domain-name]$request_uri;
    }
}

server {
    listen 443 default_server ssl http2;
    listen [::]:443 ssl http2;
    server_name [domain-name];
    ssl_certificate /etc/nginx/ssl/live/[domain-name]/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/[domain-name]/privkey.pem;
    location / {
        proxy_pass http://[domain-name];
    }
}
Q5.) Is the writer of the guide intending this to be the config used by my “real” Nginx container?
Q6.) If “Yes” to Q5, does this Nginx configuration pose any security risk in a production environment? I’m specifically wondering about the location block for /.well-known/acme-challenge/.