I am new to docker and am currently running it on a Proxmox node.
I am currently creating a separate container for each Docker application, and so far this has been working great.
I am curious to get opinions on best practice for Docker usage, and the pros and cons of each approach.
Option 1: Run 1 instance of Docker and have all Docker images run on this instance.
Pro: Only requires 1 IP address for the container; you then port forward to access the images.
Con: If there were a system error, you could lose everything.
Option 2: Run each Docker image in its own instance of Docker.
Pro: Each instance would have its own IP address.
Pro: If there were any errors, you would only lose that instance of Docker.
Con: Using multiple IP addresses and having to remember what is connected to each.
Con: More resource usage.
I don’t think anyone would normally run multiple Docker daemons on the same machine.
If there is a system error, you could lose everything anyway. If you mean a Docker daemon error, what error do you mean? A Docker storage driver error? In any other case I don’t think you would lose anything. I would rather create a redundant filesystem and take regular backups than run multiple Docker daemons.
First of all, why would each instance have its own IP address? Docker can use any of your networks. Which IP address do you mean?
Let’s say you are right.
Why would that be a pro? You also mentioned it as a con, because you have to remember which service is running on which IP, but what is the advantage then? Unless you want to test some special cases, one IP address should be enough.
If you have a problem running multiple services on the same IP using the same port, then you need a reverse proxy. Search for it on the forum or on the net and you will find plenty of discussions about it.
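As a rough illustration of the reverse-proxy idea: a single Nginx listening on one IP and port 80 can route requests to different backends by hostname. The hostnames and the second upstream port below are made up (32400 is Plex’s default port); adjust them to your own services.

```nginx
# Two services share one IP and port 80; Nginx picks the backend by Host header
server {
    listen 80;
    server_name plex.home.lan;          # hypothetical hostname
    location / {
        proxy_pass http://127.0.0.1:32400;  # Plex's default port
    }
}

server {
    listen 80;
    server_name wiki.home.lan;          # hypothetical second service
    location / {
        proxy_pass http://127.0.0.1:8080;   # assumed backend port
    }
}
```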
I say don’t run multiple Docker daemons, unless you want to test something that requires it. For example, there were (and maybe still are) some operating systems based on Docker that ran one daemon for core services and another for everything else created by the user. That way you could configure the operating system to act as an Alpine Linux or a Debian Linux when you logged in. However, that is probably not something you would do when you are new to Docker.
One more suggestion: if you don’t want to lose your data in case of a Docker daemon error, don’t store anything directly on the containers’ filesystem. Use volumes and bind mounts.
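For example (a sketch only; the volume name, host path, and Plex image shown are placeholders you would swap for your own):

```shell
# Named volume: Docker manages the storage, and it survives container removal
docker volume create plex-config
docker run -d --name plex -v plex-config:/config plexinc/pms-docker

# Bind mount: data lives at a known host path, which makes backups simple
docker run -d --name web \
  -v /srv/nginx/html:/usr/share/nginx/html:ro nginx
```

Either way, removing or re-creating the container leaves the data intact on the host.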
What @rimelek said
I assume by “have all docker images run”, you mean “have containers run that are created from images”. An image is just a packaging and delivery artifact: it is the blueprint for a container at rest.
You can also run a single Docker engine and use macvlan so the containers can be bridged into your LAN network. With `docker run` and `docker-compose` you can even assign a static IPv4 address to a container. A forum/internet search should yield plenty of examples of how it’s used.
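A minimal sketch of that approach (the subnet, gateway, parent interface `eth0`, network name, and the address `192.168.1.50` are all assumptions — substitute your own LAN values):

```shell
# Create a macvlan network bridged onto the host's LAN interface
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan_net

# Run a container with its own static address directly on the LAN
docker run -d --name web --network lan_net --ip 192.168.1.50 nginx
```

Note that with macvlan the host itself usually cannot reach the container’s IP directly; that is a known characteristic of the driver worth reading up on before relying on it.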
Another thing that makes life easier is to use Docker Compose files for the creation of containers that act as a service. If the compose file is put under version control, you can track/revert changes and use it to re-create everything on another host; the only thing you would need from the old host is a backup of the volumes, restored on the new host. Again, a forum search should yield plenty of examples of how it’s done.
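As an illustration, a minimal `docker-compose.yml` along those lines (the image and volume name are placeholders): commit this file to git, and only the named volume’s contents need backing up separately to migrate hosts.

```yaml
services:
  plex:
    image: plexinc/pms-docker
    restart: unless-stopped
    ports:
      - "32400:32400"
    volumes:
      - plex-config:/config   # named volume: back this up to move hosts

volumes:
  plex-config:
```

Re-creating the stack on a new host is then just restoring the volume and running `docker compose up -d`.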
Thank you both for your input. To clarify: I am running Docker in multiple Proxmox LXC containers / virtual machines, not on a bare-metal installation.
The scenarios I was thinking about are:
Scenario 1: [PROXMOX1] - [LXC1 Container] - Docker installed with an NGINX container.
Scenario 1: [PROXMOX1] - [LXC2 Container] - Docker installed with a Plex container.
Scenario 2: [PROXMOX1] - [LXC1 Container] - Docker installed with both NGINX and Plex containers.
Scenario 3: [PROXMOX1] - Docker installed directly on the host with NGINX and Plex containers.
I hope the above makes sense (as it does in my head).
I am extremely tired today, so it could be my fault if it is not clear to me. I understand that you run one of these scenarios, but which one? Or is it a theoretical question before you decide which one you want to use?
And what is your goal with separating NGINX and Plex? If you run the LXC VMs or containers on the same physical machine, then the benefit is not much. Even if you run them on different physical machines, that is more machines to maintain manually, unless it is a cluster with Docker Swarm or Kubernetes. So I would not do it, but you may have a good reason; I just don’t understand it yet.
Like @rimelek wrote,
#1 overcomplicates things and requires more resources.
#3 only makes sense if you are able to use hardware transcoding; if this is not the case, put Docker in a VM. Personally I prefer to keep my Proxmox host as unmodified as possible.
#2 I wouldn’t use LXC for that.