Random Proxy Issues

I’m at my wits’ end here… I need some help. :frowning:

So I’m trying to run a simple setup: Apache (or Nginx) in front of a Node server and an Angular app.

Everything works great… UNTIL I refresh my browser a few times. Eventually (about 1 out of every 5 or 6 refreshes) I get a bunch of pending requests, like Apache can’t complete the proxy. Once this happens, the site will not load at all for a bit (perhaps until a timeout kills all the pending requests).

I’ve run some circles with ChatGPT and Wireshark… tried using 127.0.0.1 instead of 0.0.0.0… I dunno. It all is just borked.

If anyone has thoughts, or has run into this before and has advice, it would be great.

I run the app on something like: client1.myapp.local

  • node running at localhost:3333
  • angular dev server running at:

      ➜ Local:   http://localhost:4200/app/
      ➜ Network: http://10.0.0.38:4200/app/
      ➜ Network: http://172.18.160.1:4200/app/
      ➜ Network: http://172.25.240.1:4200/app/

My apache service:

  apache:
    container_name: apache
    image: httpd:2.4.55
    volumes:
      ## copy app files
      - ../../www:/usr/local/apache2/htdocs
      - ../../dist/apps/app/browser:/usr/local/apache2/htdocs/app
      ## copy our local proxies/settings
      - ./apache-httpd/httpd.conf:/usr/local/apache2/conf/httpd.conf
      - ./apache-httpd/virtual-host.conf:/usr/local/apache2/conf/virtual-host.conf
      - ./apache-httpd/shared.conf:/usr/local/apache2/conf/shared.conf
    ports:
      - 80:80

I have a similar one for Nginx… though they both have the issue.

Here’s the Apache proxy:

# shared.conf
Define local_node "http://host.docker.internal:3333"
Define local_app  "http://host.docker.internal:4200"

# virtual-host.conf
<VirtualHost *:80>

  ServerName localhost
  DocumentRoot "/usr/local/apache2/htdocs"
  DirectoryIndex index.html

  <Location /api>
      ## 'Order'/'Allow' is Apache 2.2 syntax; the 2.4 equivalent is:
      Require all granted
      ProxyPass ${local_node} disablereuse=on
      ProxyPassReverse ${local_node}
  </Location>

  <Location /app>
      ## 'Order'/'Allow' is Apache 2.2 syntax; the 2.4 equivalent is:
      Require all granted
      ProxyPass ${local_app}/app disablereuse=on
      ProxyPassReverse ${local_app}/app
  </Location>

</VirtualHost>

Since you already use a containerized server, why don’t you use a container network to forward the reverse proxy traffic to the target container? Looks like the traffic is jumping through unnecessary hoops.
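For example, if the Node and Angular apps were also containerized, a minimal compose sketch could look like this (the image names and network name are hypothetical, just to show the idea):

```yaml
# docker-compose.yml (sketch) - all services share one container network,
# so Apache can reach the apps by service name with no host round-trip.
services:
  apache:
    image: httpd:2.4.55
    ports:
      - "80:80"
    networks:
      - devnet
  api:
    image: myapp-api:dev   # hypothetical image wrapping "npm start api"
    networks:
      - devnet
  app:
    image: myapp-app:dev   # hypothetical image wrapping "npm start app"
    networks:
      - devnet
networks:
  devnet:
```

The proxy targets would then become `http://api:3333` and `http://app:4200` instead of going through `host.docker.internal`.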

Thanks for the quick response. :slight_smile: I have one container (Apache/httpd)… but the dev servers just run in a terminal via Node. Not sure how it could get any simpler, and not sure I really want more containers. How would a “container network” look?

This is all for local development. I’m not trying to use this for deployment in any way.

So it is a container, and not run on the host.

So the target application is not running as a container? I must admit the setup is confusing so far. If you draw a schema of your current setup with the communication connections for yourself, you should immediately see what I mean.

How do you deploy your containers? Depending on that, it will look different.

That’s basically our local development setup.

Not entirely sure about the deploy.

  • The server and client code exist in different GitLab repos.
  • On a branch commit, a pipeline runs/builds/tests the code in a container we push to the GitLab registry.
  • If it passes, it’s passed to an AWS ECS cluster… which I guess are Docker containers.

Your diagram lacks some details about the process placement: the host OS and the Docker Desktop utility VM.
What you call Terminal must be in the host context, so binding them to localhost can’t be right.

Every published container port results in a forwarded port from the Windows host to the Docker Desktop utility VM (which runs in a NATed network), and a forwarded port from inside the utility VM to the container port (which also runs in a NATed network). From the container you want to reverse proxy back to the host, so the target applications on the host must be bound to 0.0.0.0 instead.
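As a sketch of what that binding change means on the Node side (assuming a plain Node HTTP server; your real API setup may differ):

```javascript
import http from "node:http";

const server = http.createServer((req, res) => {
  res.end("ok");
});

// Bind to 0.0.0.0 (all interfaces) so requests forwarded from the
// container via host.docker.internal can reach the app; binding to
// 127.0.0.1 only accepts connections that originate on the host itself.
server.listen(3333, "0.0.0.0", () => {
  console.log("API listening on 0.0.0.0:3333");
});
```

For the Angular dev server, the equivalent is `ng serve --host 0.0.0.0`.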

Just out of curiosity: why not run the reverse proxy on the host as well, or use containers for everything (this is the scenario where container networks make sense)?

The ECS setup later will be completely different, as it will very likely use an ALB with listener rules that forward traffic to target groups that each ECS service registers itself to.
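For illustration, such an ALB listener rule is essentially a path match forwarding to a target group (a sketch; the priority, paths, and ARN are placeholders):

```json
{
  "Priority": "10",
  "Conditions": [
    { "Field": "path-pattern", "Values": ["/api/*"] }
  ],
  "Actions": [
    {
      "Type": "forward",
      "TargetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/api-tg/abc123"
    }
  ]
}
```

Each ECS service registers its tasks into a target group like that, and the listener rules decide which one receives the request.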

I think this conversation is confusing me more than helping me.

  1. We turn on our MacBook or Windows machine.
  2. Open a Terminal and run $ npm start api and the node Api application starts up on http://localhost:3333
  3. Open a Terminal and run $ npm start app and the angular App application starts up on http://localhost:4200

Both our api and app are now accessible on our local machines from those URLs/ports. However, we want to develop on http://client1.myapp.local (for a subdomain, cookies, routing, etc). To do so locally, we have traditionally run Apache in front of it when developing locally.
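(For context, `client1.myapp.local` resolves via a hosts-file entry on the dev machine, something like:

```
# /etc/hosts (macOS/Linux) or C:\Windows\System32\drivers\etc\hosts
127.0.0.1   client1.myapp.local
```

so the browser hits the local Apache on port 80.)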

The Apache proxies use:

Define local_node "http://host.docker.internal:3333"
Define local_app  "http://host.docker.internal:4200"

which I was told point to your localhost:3333 and 4200 respectively.

  1. Open Docker Desktop. Run $ npm start apache, which starts the Apache Docker container.
  2. We can now go to http://client1.myapp.local and it will all load/proxy correctly (except for the issue which was the intent of the original posting).
  3. If you refresh the browser 5, 6, 7 times, something gets hosed and thrown into a pending state, locking up the Apache server.

As a side note… Angular currently has two dev servers: one that is Webpack-based, and a newer one that is Vite-based. Switching to the older, deprecated Webpack-based version seemed to make the issue go away… or at least minimize it (though I didn’t get a chance to test that much).
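If it helps anyone: my understanding is that the switch lives in `angular.json` — the Vite-based dev server kicks in with the newer esbuild builders, so pointing the project back at the Webpack-based builders looks roughly like this (project name `app` is assumed; double-check against the Angular docs):

```json
{
  "projects": {
    "app": {
      "architect": {
        "build": {
          "builder": "@angular-devkit/build-angular:browser"
        },
        "serve": {
          "builder": "@angular-devkit/build-angular:dev-server"
        }
      }
    }
  }
}
```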

FYI: I had to put some spaces in my URLs because the forum is only allowing two links, for some odd reason.

The reason is simple: spammers post multiple links, so new users can’t post more than two links. In your case, you don’t even need links, just URLs, which should be shared in code blocks or as inline code. You can find the formatting guide here: How to format your forum posts

Now back to your issue.

It is because what you are trying to do is really unusual, and if you know how Docker and Docker Desktop work, you will understand why. @meyay was probably lost in your configuration and forgot that you can indeed have an application listening on localhost and connect to it from Docker Desktop. It wouldn’t be true in the case of the Docker Engine, which is running inside Docker Desktop too.

I have no idea what causes those random hanging connections, but it is true that the traffic is going through multiple layers, and the problem could be anywhere, or a mix of multiple steps not working well together.

This is what is happening (not 100% accurate):

  • Request goes to your host machine, to the forwarded port 80
  • Port 80 from the host is forwarded to the virtual machine’s port 80
  • Port 80 from the virtual machine is forwarded to the containerd container in which the Docker daemon is running
  • Port 80 is forwarded to the Apache container port
  • Apache sends the request to host.docker.internal, which is not directly your localhost on Windows. It couldn’t be.
  • So the traffic goes through the container networks again to a service that probably forwards the request to a service running on your host machine
  • That service forwards the request to your localhost on Windows
  • The application running on localhost on Windows sends the response back to Apache HTTPD, so it goes through the whole network path again (or maybe it is routed differently, I’m not sure)
  • Apache gets the response and sends it back to you.
  • Oh wait… but Apache is in the virtual machine, running in a Docker container, which is running in a containerd container, so the response goes through the container networks and the virtual machine right back to you.

Let’s add the fact that you use Windows and probably the WSL backend. WSL distributions are containers as well. So WSL2 VM, WSL2 distro container, containerd, Docker and so on.

Now, how could one tell where the traffic is lost?

If you run everything on the host, there is no virtual machine and no multiple container networks, and the traffic goes immediately from you to Apache, from Apache to the apps, from the apps to Apache, and from Apache back to you.

If you run everything in containers, the traffic goes through the VM and the container networks once, then from Apache to the apps, from the apps to Apache, and from Apache through the container networks and the VM back to you.

So at least no problem between Apache and the applications.

So you don’t really use anything that containers are for, except that you can use a Linux-based Apache httpd and install it relatively easily. You could just run Apache httpd on Windows too, or run the Node apps in containers so you can isolate those as well.

So unfortunately it is hard to tell what the problem with the network is, and where, since we have never tried a setup like yours and the network communication is complex.

Maybe it’s a timeout somewhere, maybe it is the size of the packets (an MTU setting), maybe it is something with the routing if it is not always the same, but I don’t know. And we don’t usually use Windows, so if it has anything to do with Windows specifically, I know even less. I know some things about the networking of Docker Desktop, but not perfectly. So you need help with almost the only thing (in your specific case) we are not sure about either.

I have the exact same issue, with the same behaviour you described. I guess the issue is with WSL or Docker Desktop for Windows.

Not sure why you got the response that your setup is “really unusual”. It’s an extremely simple configuration. “Put everything in containers” is a non-solution; it’s just a workaround. There is no good reason why a reverse proxy in a container should randomly break.