Recommended approach for accessing multiple containers via http(s)

Hello,

I’m new to Docker. I’m planning to set up a server for monitoring and documenting my network, with applications like NetBox, Zabbix, Grafana, etc. Each application will run as a separate container, and each should be accessible via its own subdomain, e.g. https://zabbix.mydomain.com, https://grafana.mydomain.com, etc.

I would like to put an nginx proxy in front of them and access everything through it. My question is: should I run the proxy as another container (with nginx), or should I install nginx directly on the base operating system (the system that hosts the containers)?

Best regards,
Aleksander

Both would work, but since you already work with Docker, creating another container for the proxy seems like the better choice, both for flexibility and for the Docker integration.
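For example, something like this could be a starting point; the service names, the `web` network, and the nginx config path are placeholders, not a finished setup:

```yaml
# docker-compose.yml - minimal sketch of nginx as a reverse proxy container.
# Service names, ports and the "web" network are placeholders.
services:
  proxy:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    networks:
      - web

  grafana:
    image: grafana/grafana
    networks:
      - web   # no published ports needed; only the proxy is exposed

networks:
  web:
```

The mounted nginx.conf would then contain one server block per subdomain, roughly like this:

```nginx
# Proxies grafana.mydomain.com to the grafana container;
# "grafana" resolves through Docker's embedded DNS on the shared network.
server {
    listen 80;
    server_name grafana.mydomain.com;

    location / {
        proxy_pass http://grafana:3000;
    }
}
```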

Check out nginx-proxy and its companion (simple, Docker only) and Traefik (which also works with Docker Swarm); both support automatic configuration via environment variables or labels.
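To illustrate the label-based approach with Traefik, a minimal sketch (the domain, ports and Traefik version are assumptions):

```yaml
# docker-compose.yml - sketch of Traefik configured purely via container labels.
services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # lets Traefik watch containers

  grafana:
    image: grafana/grafana
    labels:
      - traefik.enable=true
      - traefik.http.routers.grafana.rule=Host(`grafana.mydomain.com`)
      - traefik.http.services.grafana.loadbalancer.server.port=3000
```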

Not to mention that container-based reverse proxies are built for other containers, and their configuration can be automated instead of setting everything manually and making mistakes.
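With nginx-proxy, for instance, the routing comes from an environment variable on the target container; a rough sketch (the domain is of course a placeholder):

```yaml
# docker-compose.yml - sketch of nginx-proxy generating its config automatically.
services:
  proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro  # nginx-proxy watches container events

  grafana:
    image: grafana/grafana
    environment:
      - VIRTUAL_HOST=grafana.mydomain.com  # nginx-proxy creates the vhost from this
      - VIRTUAL_PORT=3000
```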

Thank you all for the answers. One related question: currently on my bare metal I have a Debian installation along with Proxmox for virtual machines. I was planning to create a virtual machine with a Linux distro as the base for my containers, but maybe that is too many “layers”? Maybe the better approach is to just install Docker inside my Debian installation, next to Proxmox. What do you think?

I wouldn’t call it a “base”, because that can be confused with a “base image”. But if that is not what you meant, and you are running virtual machines on the physical host, it is sometimes better to run containers in a virtual machine, depending on the virtual machine manager and the container engine, as sometimes the network of one can be broken by the other.

For example, LXD virtual machines and containers on the same machine are okay, and probably KVM virtual machines with LXD containers too (some software already does that), but I would not run Docker next to LXD VMs or containers: you will get different kinds of network issues. There are workarounds, but it is better to run Docker either in a virtual machine or in an LXD container. The latter requires extra steps, since it is a kind of nested containerization, but running Docker in a VM should be easy, assuming of course that the operating system and the kernel are supported.

Of course, if you need for example a GPU in Docker containers, then making it available in a virtual machine or LXD container, and then in a Docker container running inside either of them, is another challenge. But normally I would run Docker containers in a virtual machine. Resources would be limited, but that is actually a good thing, as your containers will not steal resources from other virtual machines even if you set no CPU and memory limits on the containers.
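And if you want limits on top of what the VM gives you, Compose can set them per container too; a minimal sketch with made-up values:

```yaml
# Per-container resource limits in a compose file; the numbers are examples only.
services:
  grafana:
    image: grafana/grafana
    cpus: "1.5"       # at most 1.5 CPU cores
    mem_limit: 512m   # hard memory cap for the container
```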

Depending on the payload, use LXC containers or KVM VMs. As @rimelek already wrote: if you need GPU access, then LXC containers are probably easier to manage, unless you have a dedicated GPU that can be assigned to a VM.

Proxmox is a hypervisor, and I highly recommend keeping it vanilla and cleanly separated from the payload you want to run. I have three Proxmox hosts in my home lab, with many KVM VMs and a few LXC containers, though I don’t use a GPU in any of them.
