I’m still relatively new to Docker and I know some of the basic concepts but obviously I want to learn more and get some answers to questions and confusions I have with it. I may answer my own questions in some places but I want to clarify what I’m doing is the correct way. I was initially going to create separate forum posts but I think that would confuse matters with different posts spread everywhere and it would leave people not knowing the full picture and setup.
I will really appreciate it if anyone reads the whole post. I’m not looking for whole answers to every point but if it makes it easier, highlight individual questions/points I’ve made, quote it and reply back.
I have a Rock64 SoC PC running Debian 10 Buster with Open Media Vault (OMV) 5 installed on top. Included with OMV 5 are Docker and Portainer, which offers a web-GUI to manage it. OMV's web-GUI runs on NGINX on port 80, so if I wish to run Docker containers alongside OMV whose built-in web servers also listen on ports 80/443, I will need to map them to different ports within Docker.
Of course this makes it complicated to remember port numbers for multiple containers that each run a web server, therefore I would like to use a reverse proxy with sub-domains to access everything.
My main network is managed by OpenWrt and I have several VLANs set up to segment the various devices. These VLANs include a private LAN, a guest LAN, an IoT LAN, a Servers LAN and a management LAN.
As a side note, my Rock64's hostname is omv-server.lan and it will be used throughout this post when referring to domains. Additionally, when I refer to application/container stack(s) I mean the group of containers that are defined together in a docker-compose file.
My background with computers used to be predominantly Windows but I’ve ventured into Linux, macOS and Unix, configuring software stacks to host WordPress sites, host Samba and FTP shares, write scripts to automate everything and much more. I am completely self-taught in Unix and this began with OpenWrt router firmware, editing files with the Nano editor and restarting services to pick up the new changes. I then discovered Docker… All I can say is how amazing it is and what an awesome way of doing virtualization.
Docker Images I’m Hoping to Setup
Below is a list of Docker images I am hoping to install, and I want to set up an NGINX reverse proxy in Docker to map the various Docker containers to easily memorable sub-domains.
– Certbot (Let’s Encrypt)
– Home Assistant
– PHPMyAdmin (access databases across various container stacks)
– NGINX (to be used as the reverse proxy)
– Portainer (web-GUI to manage Docker)
Docker Network Modes
I have been looking at the various network modes and I can say that host mode definitely wouldn’t work, as the number of containers I have running would all clash on port 80. The default bridge mode was the mode I was initially going to go with, as it would allow me to map different ports on the host side to the ones in the container. That leaves me with macvlan. From the reading I’ve been doing, macvlan allows me to treat the containers as though they are connected directly to the router/switch where they will receive their own IP address from the DHCP server. This mode appeals to me the most as I can treat each container like a physical device, so I’m likely to go with this mode.
Even though I’m going with macvlan mode, I may use bridge mode in the future for other projects, so correct me if I’m wrong: with every container listening on its own port on the Docker host, the whole idea of a reverse proxy is to eliminate the need to access each application directly on its backend port?
If I connect to the Nextcloud container via its mapped port using the local domain mentioned above, it would be accessible from omv-server.lan:11080 (per the port list below). With the reverse proxy up and working I should also be able to access it from cloud.omv-server.lan as well?
What I’m trying to get my head around is: if I need to forward traffic from outside of the LAN (WAN or another VLAN) to these containers, I should only need to open 80 and 443? Any device on the same subnet/VLAN could access both the backend ports and the sub-domains, but anything outside this network couldn’t reach the backends directly because those ports are closed?
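To make that concrete, here is my rough idea of what one server block on the reverse proxy would look like in bridge mode. The Nextcloud name and port are taken from my lists below; this is an untested sketch, not a finished config:

```nginx
# Sketch: forward cloud.omv-server.lan to the Nextcloud backend.
server {
    listen 80;
    server_name cloud.omv-server.lan;

    location / {
        # In bridge mode this points at the Docker host's published port;
        # in macvlan mode it would be the container's own IP on port 80.
        proxy_pass http://omv-server.lan:11080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```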
Creating Separate Docker Networks for Each App Stack
Trying not to complicate things any further here, but I would like to create separate Docker networks for each container stack so they are isolated from one another. As I’m hoping to place all of the containers into the same VLAN using macvlan mode, this isolation would be largely redundant there, but I would still be interested in how this would be done with a reverse proxy whilst working in bridge mode. The one problem that would occur is getting the NGINX reverse proxy container to communicate with all of the other container stacks. For example, Home Assistant would be in a network called home_assistant-network, Nextcloud in a network called nextcloud-network and the reverse proxy in a network called nginx_reverse_proxy-network. How do I achieve this?
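From what I’ve read, the usual answer is to attach the reverse proxy container to every app stack’s network and declare those networks as external in its compose file, something like the sketch below. The names are just my examples, and one caveat is that compose normally prefixes network names with the project name, so the external names may need adjusting:

```yaml
# Sketch: the reverse proxy joins every app stack's network,
# while each stack only knows its own network.
services:
  nginx_reverse_proxy:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    networks:
      - nginx_reverse_proxy-network
      - home_assistant-network
      - nextcloud-network

networks:
  nginx_reverse_proxy-network:
    driver: bridge
  home_assistant-network:
    external: true   # created by the Home Assistant stack
  nextcloud-network:
    external: true   # created by the Nextcloud stack
```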
Docker Images and their Sub-domains/Ports
The list below details the sub-domains and the ports each container will listen on. In the case of using macvlan mode the ports can be ignored here but I’ve listed them anyway in case I use bridge mode somewhere:
– Certbot - N/A
– Home Assistant - home.omv-server.lan (8123)
– Pi-hole - dns.omv-server.lan (10053)
– PHPMyAdmin - pma.omv-server.lan (13306)
– Plex - media.omv-server.lan (13240)
– Nextcloud - cloud.omv-server.lan (11080)
– NGINX (10080)
– Portainer - portainer.omv-server.lan (12080)
Setting up Reverse Proxy
The real question I should be asking is: how do I set up an NGINX reverse proxy with macvlans? OMV (the Docker host) and all containers will be placed into the same Servers VLAN, with some also being accessible over the WAN, such as Nextcloud, Plex for shared libraries, Pi-hole for friends and family to benefit from the ad-blocking, and remote access to Home Assistant to control my devices in the IoT VLAN. For my devices on the private LAN to access the resources in these containers I will be setting up inter-VLAN firewall rules. In the few examples of Docker macvlan I have found, the guides use static IP addresses to differentiate the containers on the network.
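For reference, this is the sort of macvlan setup those guides show, with placeholder addressing for my Servers VLAN. One thing I have noticed in the docs is that Docker's macvlan driver hands out addresses from its own IPAM pool rather than asking the network's DHCP server:

```shell
# Sketch: a macvlan network on the Servers VLAN. The subnet, gateway and
# parent VLAN interface (eth0.30) are placeholders for my actual values.
docker network create -d macvlan \
  --subnet=192.168.30.0/24 \
  --gateway=192.168.30.1 \
  -o parent=eth0.30 \
  servers_macvlan

# Attach a container with a fixed address on that network:
docker run -d --network servers_macvlan --ip 192.168.30.10 \
  --name nextcloud nextcloud:latest
```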
If I was to set a sub-domain for the OMV web-GUI, for example, and I’m using an NGINX Docker container to handle the reverse proxy with the OMV sub-domain included, if Docker crashes and the container is not running would I still be able to connect to the OMV web-GUI via its port?
In OpenWrt I make good use of DHCP reservations using the MAC address of the device to statically assign an IP address. From the guides I have been following on the internet, I did see a Docker inspect command which allows you to see the MAC address that has been assigned to the container. Is there a way of pre-defining the MAC address for each container so that when the container communicates with the DHCP server matching the MAC address it can pull the statically assigned IP address?
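It looks like both docker run --mac-address and the mac_address service key in compose exist for this, though given the IPAM point above the DHCP reservation may never actually be consulted; a static IP can be set on the Docker side instead. All the values below are examples only, and compose support for mac_address varies by version:

```yaml
# Sketch: pinning a MAC address and static IP per container.
services:
  home_assistant:
    image: homeassistant/home-assistant:latest   # image name is illustrative
    mac_address: "02:42:c0:a8:1e:0a"
    networks:
      servers_macvlan:
        ipv4_address: 192.168.30.10   # assigned by Docker's IPAM, not DHCP

networks:
  servers_macvlan:
    external: true   # the macvlan network created beforehand
```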
Setting up SSL Certificates
The important part of using a reverse proxy that I have not mentioned yet is using it with SSL certificates. I can secure individual containers, such as Portainer, with an SSL certificate, but I was unsure how to implement this behind a reverse proxy. I made a post on Portainer’s forums and, to cut a long story short, they recommended only using SSL certificates on the reverse proxy and not on the containers too.
With that said, I would like to encrypt all of my applications on the reverse proxy, therefore I will need an easy way to create multiple certificates. The only free methods for creating certificates are self-signed OpenSSL certificates or Let’s Encrypt’s free CA. Creating OpenSSL certificates in bulk isn’t much of a problem for me, as I’ve already written a script that creates a CA and all of the certificates for each application/site. In this case I would generate the certificates in a safe location on the host machine, then copy/move the files to /srv/docker/appdata/nginx_reverse_proxy/openssl and finally point towards those paths in each server block of the nginx.conf file in the nginx_reverse_proxy container.
However, it would be nice to have an authorised certificate authority like Let’s Encrypt enabled across all of my apps, but generating them all in bulk doesn’t look to be so easy. From my reading, each application would need to be listening on its respective port. The first hurdle is with the reverse proxy. If I was to include the file paths to the SSL certificates in the nginx.conf file, then as the certificates don’t exist yet, NGINX will produce errors and fail to start. On the flip side, I need the reverse proxy to redirect to the various applications so that I can get Certbot to work. As one of the guides I have been following states (linked below), is it the chicken or the egg? My solution…
1). Have a docker-compose file for each application stack so they are ready to be run
2). Ensure each application in the docker-compose file has a volume pointing towards the certificates directory located at /srv/docker/appdata/nginx_reverse_proxy/letsencrypt
3). Include both the plain-HTTP and secure-HTTPS configuration in each server block, commenting out the HTTPS/SSL section and any HTTP-to-HTTPS redirect rules
4). Start up each docker-compose
5). Let Certbot validate each and every application over plain HTTP so that it gets a response and generates a certificate
6). Stop all docker-compose files
7). Remove the comments for the HTTPS/SSL sections in nginx.conf file so the paths to the SSL certificates are now active
8). Start the docker-compose files once again
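Step 5 would then be something like the following, run once per sub-domain. The email and domain are placeholders and the paths match the certbot volumes in my compose example below. One caveat I have spotted is that Let’s Encrypt can only validate publicly resolvable names, so the .lan sub-domains used in this post would need to be replaced with a real domain for this to work:

```shell
# Sketch: issue a certificate for one sub-domain over plain HTTP (step 5).
docker-compose -f /srv/docker/docker-compose_files/nginx_reverse_proxy-docker-compose.yml \
  run --rm letsencrypt certonly \
    --webroot --webroot-path /var/www/certbot \
    --email admin@example.com --agree-tos --no-eff-email \
    -d cloud.omv-server.lan
```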
Would this work? Whichever method I choose, I have read that it is recommended to create a Diffie-Hellman parameters file to increase the SSL score. Would something like this on the host be sufficient?
openssl dhparam -out dhparams.pem 4096
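The generated file would then be referenced from the HTTPS server blocks, along these lines (the paths assume the file and certificates are mounted into the container; untested sketch):

```nginx
# Sketch: TLS server block using the generated DH parameters.
server {
    listen 443 ssl;
    server_name cloud.omv-server.lan;

    ssl_certificate     /etc/letsencrypt/live/cloud.omv-server.lan/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.omv-server.lan/privkey.pem;
    ssl_dhparam         /etc/nginx/dhparams.pem;   # the file generated above

    location / {
        proxy_pass http://omv-server.lan:11080;
    }
}
```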
Links to the guides I have been following:
- How to Set Up Free SSL Certificates from Let’s Encrypt using Docker and Nginx
- Nginx and Let’s Encrypt with Docker in Less Than 5 Minutes
I have some questions focused around organising Docker. I watch a YouTuber called Techno Dad Life who covers a lot of Docker tutorials through Open Media Vault (OMV), and in a lot of his tutorials he utilises a folder called AppData on the root of a HDD to store the volumes for all of the Docker containers. He sets up an SMB share on that directory with administrator privileges so he can edit the files from a Windows PC on the same LAN.
I want to take a similar approach with the addition of keeping custom docker-compose and docker files in their own directories and storing volumes under their group name within the AppData folder to keep everything organised. I can also utilise VSFTPD, SFTP or SMB on that host-side directory to edit the files from a remote PC. For example, this is what the directory tree would look like if I had Home Assistant, Nextcloud, NGINX reverse proxy, Pi-Hole, PHPMyAdmin and Portainer container stacks:
/srv
    /dev-disk-by-uuid-f53202ce-3eb5-4a88-bbdd-7e3490572f17
        /docker
            /appdata
                /home_assistant
                    /config
                /nextcloud
                    /apps
                    /config
                    /data
                    /html
                    /theme
                /nginx_reverse_proxy
                    /certbot
                        /etc
                            /letsencrypt
                        /var
                            /lib
                                /letsencrypt
                    /nginx
                        /data
                    /openssl
                /phpmyadmin
                /pi-hole
                    /etc
                        /dnsmasq.d
                        /pihole
                /portainer
                    /data
            /data
            /docker-compose_files
                home_assistant-docker-compose.yml
                nextcloud-docker-compose.yml
                nginx_reverse_proxy-docker-compose.yml
                pihole-docker-compose.yml
            /docker_files
Due to Linux handling disks with UUIDs, writing out /dev-disk-by-uuid-f53202ce-3eb5-4a88-bbdd-7e3490572f17 all the time would be frustrating and long-winded. To get around this I have created a symbolic link from this directory to /srv/docker, so this post will refer to the symbolic link rather than the path with the disk UUID.
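The idea can be sketched like this; a scratch directory stands in for /srv so the commands are safe to run anywhere, but on the real host the base would be /srv and root would be needed:

```shell
# Sketch: recreate the symlink in a scratch directory.
base=$(mktemp -d)
mkdir -p "$base/dev-disk-by-uuid-f53202ce-3eb5-4a88-bbdd-7e3490572f17/docker"
ln -s "$base/dev-disk-by-uuid-f53202ce-3eb5-4a88-bbdd-7e3490572f17/docker" "$base/docker"

# Anything written through the short path lands on the data disk:
touch "$base/docker/test-file"
ls -l "$base/docker"
```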
root@OMV-Server:~# ls -la /srv
total 28
drwxr-xr-x  6 root root  4096 Apr 11 19:38 .
drwxr-xr-x 23 root root  4096 Apr 11 19:12 ..
drwxr-sr-x  3 root users 4096 Apr 11 16:50 dev-disk-by-uuid-f53202ce-3eb5-4a88-bbdd-7e3490572f17
lrwxrwxrwx  1 root root    65 Apr 11 19:38 docker -> /srv/dev-disk-by-uuid-f53202ce-3eb5-4a88-bbdd-7e3490572f17/docker
One of Techno Dad Life’s popular guides is on Nextcloud, where he follows a written guide on the OMV forums that details a docker-compose setup. One advantage I can see of keeping volumes outside of the /var/lib/docker/volumes directory is that some SoC PCs have small system partitions, so placing them on a secondary drive gives the volumes room to expand.
Specifying Docker Data Root With ‘daemon.json’ File
The way I approach this on OMV is by relocating the Docker root directory using the data-root parameter in the daemon.json file at /etc/docker. Would anyone object to moving the data-root outside of the default location, or to using custom volume mounts like my directory tree above? If Docker hasn’t been installed on the system before, then /etc/docker/daemon.json obviously won’t exist. Without this file, if I wanted Docker to acknowledge the data-root change on a fresh install, would it be as simple as creating /etc/docker/daemon.json with the parameter inside and then installing Docker?
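For reference, the file itself would only need the one key, with the path matching my layout above:

```json
{
    "data-root": "/srv/docker/data"
}
```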
When mapping host directories to those in the container, some image developers list all of the volumes you can map from the container to the host on their GitHub, Docker Hub or their own documentation. If I wasn’t sure what container mounts are available in an image, is there a way I can find out exactly what volumes I can mount from the container to the host? Obviously I could enter the container interactively and list the contents with the ls command, but I was wondering if there is another way of doing it?
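One partial answer I have found is docker image inspect, which shows the volumes an image declares in its Dockerfile without starting a container. Only declared volumes show up, so documentation is still needed for optional bind mounts:

```shell
# Sketch: list the volumes an image declares, without starting it.
docker image inspect --format '{{ json .Config.Volumes }}' nextcloud:latest

# The wider config (env vars, exposed ports, entrypoint) is also visible:
docker image inspect --format '{{ json .Config }}' nextcloud:latest
```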
As stupid as it sounds, with my bind mount in place for the NGINX reverse proxy, how do I pull the nginx.conf file from the container? I started the NGINX image with
docker run -d -p 10080:80 --name nginx_reverse_proxy --restart unless-stopped nginx
and the container stayed up and worked. It creates a volume mount that is stored at /srv/docker/data (in my case I changed the data-root path, so it is no longer stored at /var/lib/docker). The problem with this is that, to be able to edit files/directories, I have to open a shell in the running container and use a CLI editor like Vim or Nano, as I don’t have direct access to the volume and its files/directories on the host.
On the other hand, creating a bind mount allows me to access the file/directory through the host in the various ways previously mentioned. Unfortunately, as soon as I add the volume parameter to the docker run command, the container exits with an error as though it’s waiting for me to supply the nginx.conf file. I was hoping that I could mount the container paths to those on the host and access the files inside the container from the host, but that doesn’t seem to be the case; the bind mount hides the image’s own files behind the (empty) host directory. I could go onto the internet and grab an nginx.conf file but that seems counterintuitive. Is there any way for the container to dump its contents into the bind mount so that I can access them whether the container is running or not?
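One approach I have come across is docker cp, which can copy files out of a container (running or stopped), so the bind-mount directory can be seeded before the container is restarted with the volume parameter:

```shell
# Sketch: copy the stock config out of the container started above,
# then restart the container with the bind mount pointing at that copy.
docker cp nginx_reverse_proxy:/etc/nginx/nginx.conf \
  /srv/docker/appdata/nginx_reverse_proxy/nginx/nginx.conf
```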
For quick edits, could I open a shell inside the running container like this?
docker exec -it nginx_reverse_proxy /bin/bash
Automating Docker with ‘docker-compose’ Files
Once I have my NGINX configuration file the way I want it and all of the parameters set for the container, would it then be a good idea to move everything into a docker-compose file, keeping files such as nginx.conf in my persistent volumes safe somewhere on my host machine to recall later?
As I often write scripts using heredocs (cat/echo <<EOF…EOF) to copy bulk text into files and start services, I was thinking of using shell scripts to create my directories, create the NGINX conf file, write out the docker-compose file and then run docker-compose up to start them all together, e.g.
docker-compose -f /srv/docker/docker-compose_files/nginx_reverse_proxy-docker-compose.yml up
If I wanted to stop a specific container within the docker-compose file, for example stopping letsencrypt (Certbot) without stopping the NGINX reverse proxy container, would I do it with the docker-compose stop command as shown after the example below?
Example docker-compose File
version: "3.9"

services:
  letsencrypt:
    image: certbot/certbot:latest
    volumes:
      - "/srv/docker/appdata/nginx_reverse_proxy/certbot/conf:/etc/letsencrypt"
      - "/srv/docker/appdata/nginx_reverse_proxy/certbot/www:/var/www/certbot"
    restart: unless-stopped
    networks:
      - nginx_reverse_proxy-network

  nginx_reverse_proxy:
    depends_on:
      - letsencrypt
    image: nginx:latest
    volumes:
      - "/srv/docker/appdata/nginx_reverse_proxy/nginx/nginx.conf:/etc/nginx/nginx.conf:ro"
      - "/srv/docker/appdata/nginx_reverse_proxy/nginx/html:/usr/share/nginx/html"
      - "/srv/docker/appdata/nginx_reverse_proxy/certbot/conf:/etc/letsencrypt"
      - "/srv/docker/appdata/nginx_reverse_proxy/certbot/www:/var/www/certbot"
    ports:
      - "10080:80"
    restart: unless-stopped
    networks:
      - nginx_reverse_proxy-network

networks:
  nginx_reverse_proxy-network:
    driver: bridge
docker-compose -f /srv/docker/docker-compose_files/nginx_reverse_proxy-docker-compose.yml stop letsencrypt
Backing Up and Restoring Docker and Its Persistent Data on a Live System
My final question is: how do I backup and restore Docker volumes? I have had past success archiving a running WordPress site’s web-root directory and a dumped MySQL database with tar and gzip, then using Rclone to upload the archive to a cloud server. Could I achieve the same results by archiving the persistent data volumes located at /srv/docker/appdata? To restore them, I assume extracting the data back to the persistent volume location and running the original docker-compose file would be enough? Of course this would be on a live system, without having to disable any of the services unless one is being restored.
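That approach can be sketched like this; a scratch directory stands in for /srv/docker/appdata so the commands are safe to run as-is:

```shell
# Sketch: archive the persistent data and restore it elsewhere.
appdata=$(mktemp -d)
mkdir -p "$appdata/nextcloud/config"
echo "example" > "$appdata/nextcloud/config/config.php"

# Back up: tar + gzip the whole appdata tree (rclone would then upload this).
backup=$(mktemp -u).tar.gz
tar -czf "$backup" -C "$appdata" .

# Restore: extract back into the persistent-volume location,
# then re-run the original docker-compose file.
restore=$(mktemp -d)
tar -xzf "$backup" -C "$restore"
```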