Combining host network with docker-based networks?

Hi all,

I’m quite new to Docker, but I’m also looking after a live system with multiple Docker containers that was created by my predecessor, who left his job without leaving any documentation behind. I’m now supposed to look after the system while having to “learn as I go”. So my setup is complex (for me) but my question will be a bit elementary – apologies in advance!

On an Ubuntu 20.04 system, we are running Docker v19.03.11. There are numerous (almost 20) Docker containers running, none of which I wrote but all of which I have to maintain… They all communicate with each other – I guess as a “swarm”? One of these containers runs nginx, which forwards requests to the other containers.

Now, I would like to add a new web server, but running on the host machine. So, I would like nginx to be able to forward requests both to the existing Docker containers and to Apache2 running on the host machine. Is this possible?

For just the latter, various Google searches have said that I should use --network="host". For the former, well, it’s already working. But I haven’t found any information yet on whether both are possible simultaneously. Since it’s a live system with 20 containers that I’m gradually learning about, I thought I’d ask before I get to the point where things break. (Of course, at some point, I will need to figure out how those 20 containers work… but I can figure that out later since setting up Apache2 on the host is (currently) a higher priority.)

The Apache2-based system cannot (as far as I can tell) become a Docker container. I can’t turn it into a docker-compose.yml file, since there is some GUI-based interaction required from the user in order to set it up.

Anyway, is the network configuration that I mentioned above possible? Could I set it up so that nginx in a Docker container forwards abc and def to other Docker containers, but ghi goes to Apache on the host machine, running on some other port like 1234? (Since port 80 traffic will go from the host to the nginx Docker container.)

(Perhaps if I could start from the beginning and redo it all, nginx would run on the host, ghi would be sent to localhost, and abc and def would be forwarded to Docker containers. But I worry doing so would break things.)

Any help would be appreciated!

Thank you!

Ray

It is possible, but it is not likely that your predecessor used only one machine to run a Swarm cluster in production, since Swarm is meant for multiple machines. I guess it is just some Docker Compose projects.

Yes, but I would not do that. That nginx proxy is probably a container that dynamically configures itself.

https://hub.docker.com/r/nginxproxy/nginx-proxy

You could add some custom configuration, but it is better to leave that container untouched, let it do the job it was created for, and find an alternative.
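For example, if it really is nginx-proxy, it is usually started roughly like the sketch below, and it picks up the other containers automatically. The image name and hostname here are only placeholders to show the idea, not your actual setup:

```bash
# nginx-proxy watches the Docker socket and generates its nginx config
# from the VIRTUAL_HOST environment variable of the other containers.
docker run -d --name proxy \
  -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy

# Any container started with VIRTUAL_HOST is proxied automatically;
# "abc.example.com" and "some-web-image" are placeholders.
docker run -d -e VIRTUAL_HOST=abc.example.com some-web-image
```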

That is not for your case. That option is for containers to use the host’s network instead of the bridged local network interfaces. Since you stated that you don’t want to run the new server in a container, it will not help you.
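Just to show what that option actually does (the image here is only an example):

```bash
# With --network host the container shares the host's network stack,
# so this nginx would bind directly to the host's port 80 instead of
# getting its own bridged interface and a published port.
docker run --rm --network host nginx:alpine
```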

What kind of GUI interaction do you mean? You can run a container and mount all the required files into it if you want to get it running quickly, so you don’t have to learn how to build your own image unless the Apache server requires some extensions. If that GUI interaction is only for generating files for the server, you can still use any tool and just mount the generated files into the container.
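A rough sketch of what I mean, using the official httpd image; the host paths are made up, so adjust them to wherever your tool puts its output:

```bash
# Run the official Apache (httpd) image and bind mount the document root
# and the config file that were generated outside the container.
docker run -d --name apache2 \
  -p 1234:80 \
  -v /srv/mysite/htdocs:/usr/local/apache2/htdocs:ro \
  -v /srv/mysite/httpd.conf:/usr/local/apache2/conf/httpd.conf:ro \
  httpd:2.4
```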

If you want to learn the basics (I guess you don’t have time before the deadline), you can try this:

https://container.training/intro-selfpaced.yml.html#1

If you can add a new IP address to the server, that would be the easiest way to install an Apache server directly on the host. You could configure it to listen on the second IP address and change the port forwarding of the nginx proxy container. Of course, you still have to find out how that container was started in order to change the ports. In the case of containers, your server can listen on any of the available ports inside the container, but you can “tell” Docker to forward port 80 and/or port 443 from a specific IP address of the host to the container.
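For example, port publishing can be tied to a single host IP like this; the addresses and the image name are placeholders:

```bash
# Apache installed directly on the host would listen only on the new
# address (e.g. "Listen 192.0.2.10:80" in its config), while the proxy
# container only gets the ports of the original address forwarded to it.
docker run -d --name proxy \
  -p 192.0.2.20:80:80 \
  -p 192.0.2.20:443:443 \
  nginxproxy/nginx-proxy
```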

If you can’t add a new IP address but you have another server, you could manually configure a proxy on that server and forward all traffic to port 8888 on the original server, where the new Apache server could run. I know, it sounds bad, but it would be temporary until you can find out how those 20 containers are configured.

I would consider your original plan only as the third option, or maybe even the fourth, depending on how close the deadline is. If you try to change the configuration of the nginx proxy running in the container, you could end up with a bigger problem if you don’t understand containers yet. You could also collect some information and details about your environment and share them with us, so we can help you figure out how to run the new server in a container or change the configuration of the proxy, but that would take time as well.

If it is very urgent, you could also hire someone just to create a container from your new app and maybe to figure out and explain how your current configuration works. Of course, that option would cost money, and someone new would have to see your private projects, or at least a part of them.


Hi!

So sorry for the late reply; I had something else that drew my attention away from this problem briefly. Thank you very much for your message – it did help clarify a few things for me!

I see! I guess I completely misunderstood what swarm means…I should read those Docker docs again. And yes, “some Docker Compose projects” sounds fairly accurate.

Got it – I can’t tell how that container is configured, but yes – I’m also quite hesitant to touch it in a way that would attract the attention of our users. A broken nginx could bring down several sites that share the IP address.

Thank you! As you may have guessed, I’m having difficulty digesting all the information out there. I have a computing background, but I’m more familiar with virtual machines. I do want to learn about Docker, but building up containers whilst following tutorials is something I’d prefer over taking over someone’s undocumented work.

Sorry for being vague – actually, it is just WordPress. I know there is a WordPress container image, but I couldn’t figure out how one keeps a WordPress container persistent, since files are stored in multiple locations. Does that mean every possible storage location has to become a volume? I’m tempted to just make one volume and mount it as “/”, but I guess that isn’t what Docker is about.

Indeed, I’ve seen WordPress Docker containers (with corresponding MySQL containers, which presumably run together). I guess I just need to figure out how to do it. But it seems clear from your reply that having WordPress and MySQL running directly on the host is a bad idea in our situation.
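From the examples I’ve seen, the usual pattern seems to be something like the sketch below, with one volume for the database files and one for the WordPress files – please correct me if I’ve misunderstood. The names, password and ports are just placeholders:

```bash
# Placeholder network so the two containers can reach each other by name.
docker network create wp-net

# MySQL with a named volume for its data directory.
docker run -d --name wp-db --network wp-net \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -e MYSQL_DATABASE=wordpress \
  -v wp_db_data:/var/lib/mysql \
  mysql:8.0

# WordPress with a named volume for /var/www/html (core files, themes,
# plugins and uploads all live under there).
docker run -d --name wp-app --network wp-net \
  -e WORDPRESS_DB_HOST=wp-db \
  -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=changeme \
  -e WORDPRESS_DB_NAME=wordpress \
  -v wp_html:/var/www/html \
  -p 8080:80 \
  wordpress:latest
```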

Thank you for all of your suggestions! It was most helpful!

It is part of my job to “figure this out”, but it is a bit overwhelming to do it on a “live” system that’s connected to the web. If it were a container that just started up, did some processing for a few hours, and then shut down (which had been my impression of what Docker is used for), then I could figure it out in my own time. I’m a bit surprised it can be run as a web service with multiple directories in use all over the place. Also, perceived downtime by users, and creating a mess that would require even more of my time to untangle, are what I’m most worried about.

Given what you’ve said, you’ve “empowered” me to ask the powers-that-be to see if I can get another IP address. I didn’t think of asking since I figured someone would just say I should add it on to the existing host computer. But you’ve given me enough reason to ask, and to make it appear that I actually know what I’m talking about…

I’ll need to learn Docker some day, but hopefully on my own time and not in response to something that I broke…

Thank you very much for all your help! I really appreciate not just your reply, but how detailed you were in your reply. You most certainly answered any lingering questions I had. Thank you!

Ray

That is one of the main concepts of containers. You can create, remove and recreate containers at any time. When you need persistent storage, you create a volume or use a “bind mount” to mount a folder from the host, which is very similar to the default “local” volumes.
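For example (the names and paths here are just placeholders):

```bash
# Named volume: Docker manages the storage itself.
docker volume create wp_html
docker run -d -v wp_html:/var/www/html wordpress

# Bind mount: you choose a folder on the host; the effect is very similar.
docker run -d -v /srv/wordpress/html:/var/www/html wordpress
```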

You can’t do that.

You can do that too. A container is just a process on the host, like any other. It just can’t see everything, only what you allow it to see.
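You can even see a container’s processes from the host; “proxy” below is just a placeholder container name:

```bash
# Lists the processes running inside the container...
docker top proxy
# ...and the same processes show up in the host's normal process list.
ps aux | grep nginx
```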


Ah! Thank you for this! I was actually considering this.

I guess my problem is that I have a “virtual-machine mindset” and I need to shake that away when dealing with Docker containers.

I see – I do realise there is a lot of online documentation out there. I will need to go through it later, since those 20 containers will have to be maintained.

Thank you for all your help!

Ray