One of the functions I need to find a container for is a reverse proxy with websocket support.
Currently it is done with Apache. Not to 100% success, but fairly OK.
I was looking for a Docker container with all I need and thought about using the nginx base. But I have trouble finding a container that includes both nginx and the websocket part. The nginx docs say to install it separately (npm & nodejs & …), but that kinda kills the idea of being able to easily upgrade/update to new container versions.
Any advice on how to do this, or what container to use?
Thx,
C.
BTW: I’m new to the docker world but learning fast
Creating a reverse proxy within a Docker container can be done using Apache, Nginx or Traefik. There are Docker images available for all these options.
Creating a service that provides a websocket service is a different thing and should therefore be done in a different container. NodeJS might be a good base for this task.
A few ideas that come to my mind (there are surely others):
using one Nginx/Apache container for the static data and forwarding the websocket requests to the NodeJS-based container
using a Traefik container forwarding the websocket requests to the NodeJS-based container and all other requests to the Nginx/Apache container
using the NodeJS-based container to serve both the websocket service and the static data. But I would opt for one of the other ideas.
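For illustration, the first idea could look roughly like this on the nginx side (a sketch only; the container name `node-ws`, its port 3000 and the `/ws` path are assumptions, not something from this thread):

```nginx
# Sketch of idea 1: nginx serves the static data itself and
# forwards websocket traffic to a separate NodeJS container.
server {
    listen 80;
    server_name example.my.dom;

    # static content served directly by nginx
    location / {
        root /usr/share/nginx/html;
    }

    # websocket endpoint proxied to the (hypothetical) NodeJS container
    location /ws {
        proxy_pass http://node-ws:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```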
Maybe I should explain what needs to be achieved, because I do not understand your statement about these being 2 different things.
We have 1 external IP, but many DNS records resolving to that IP. So all 80/443 requests coming in on that IP must be dispatched based on the name. name1.my.dom:80 and name2.my.dom:80 both come in on 123.123.321.321:80, and after passing through some firewalls all these requests get forwarded to this reverse proxy.
Then for name1, the reverse proxy ensures it gets to the proper LAN server at IP1.lan:80, name2 requests go to IP2.lan:80, etc.
The connection that needs to be maintained with name1/IP1 also includes websocket data, so correct translation must also happen there.
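For what it's worth, this kind of name-based dispatch maps directly onto nginx server blocks, one per DNS name (a sketch using the placeholder names from above):

```nginx
# One server block per DNS name; nginx picks the block whose
# server_name matches the Host header of the incoming request.
server {
    listen 80;
    server_name name1.my.dom;
    location / {
        proxy_pass http://IP1.lan:80;
    }
}

server {
    listen 80;
    server_name name2.my.dom;
    location / {
        proxy_pass http://IP2.lan:80;
    }
}
```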
I’ll have a go at it. So I’ll just use the default official ubuntu/nginx image and add some location conf.
Link: this is where I read about the nodejs/… need for the websocket support.
But maybe (likely) it’s just me mixing things up.
I’m not a proxy expert. I’m happy we got it to work with Apache on the current/old server.
But I’m migrating everything to a low-energy server (200W → 30W), so I’m taking the opportunity to move from a multi-OS VM setup to a single-Ubuntu Docker setup.
I’ll get down to deploying the container and trying out (new to me) nginx. I’ll be back (most likely)
It is like I thought: the tutorial shows how to configure nginx as a reverse proxy in front of an example nodejs application that requires websocket communication.
I have nginx running, with the reverse proxy part working well for http, including the server-name-based dispatch.
But the websocket to these backend servers does not seem to work.
This is the conf file:
server {
    listen 80;
    server_name my.dns.name;

    # redirect to https .. once it's working
    # return 301 https://my.dns.name$request_uri;

    location / {
        proxy_pass http://backend.server1.lan.ip;

        # websocket support (proxy_http_version 1.1 is required:
        # the Upgrade mechanism does not exist in HTTP/1.0, which
        # is nginx's default proxy protocol)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
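As an aside, the nginx docs suggest a `map` in the http context so that the Connection header is only set to "upgrade" when the client actually requests one (an optional refinement, sketched here):

```nginx
# In the http {} context: $connection_upgrade becomes "upgrade"
# when the client sends an Upgrade header, and "close" otherwise.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then, inside the location block, use:
#   proxy_set_header Connection $connection_upgrade;
```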
Next is to add ssl support.
The backend servers support ssl, but have no certificates set up.
I only need to secure the connection between frontend and reverse proxy with a cert.
Am I right that what I need to do is:
find some way (a Docker repo, I guess) to generate/update the certificates on this Docker server, so nginx can use them
configure the 443 listen part in the server section of the nginx config, referring to those certs
include in that 443 server conf the same 3 websocket support rules as for http/80
Right?
If the frontend is a web application that is rendered in the browser, and it should be able to access the backend through the reverse proxy, I strongly suggest to use https endpoints for everything accessible from the internet - this includes whatever serves the frontend application as well.
To 1) Depending on whether you use a domain that is reachable from the internet, you could use Let's Encrypt, which issues certificates from a CA that is known and trusted by most devices, OSes and browsers.
Otherwise, you will have to create self-signed certificates (google should yield plenty of results on that topic) and import the CA certificate (or the created server certificate) into the trust store of the browser that needs to access it.
To 2 and 3) Yep. Make sure your http server on port 80 is configured to redirect traffic to the https port, to avoid unnecessary plain-text exposure of your communication.
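Put together, points 2 and 3 could be sketched like this (the certificate paths assume certbot's default layout under /etc/letsencrypt; adjust to your setup):

```nginx
# port 80: redirect everything to https
server {
    listen 80;
    server_name my.dns.name;
    return 301 https://my.dns.name$request_uri;
}

# port 443: TLS termination plus the same websocket directives as before
server {
    listen 443 ssl;
    server_name my.dns.name;

    ssl_certificate     /etc/letsencrypt/live/my.dns.name/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my.dns.name/privkey.pem;

    location / {
        proxy_pass http://backend.server1.lan.ip;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```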
Fixed. Certs indeed with Let's Encrypt aka certbot;
http → https etc. all nicely working, including websockets in all combinations.
Last thing to do is ensure certificate renewal, but it seems there is a letsencrypt Docker image for that as well.
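One common pattern for that is the official certbot/certbot image with a periodic renew loop (a sketch; the volume paths are assumptions and must match what the nginx container mounts):

```yaml
# docker-compose fragment: certbot attempts renewal twice a day;
# nginx reads certificates from the same /etc/letsencrypt volume.
services:
  certbot:
    image: certbot/certbot
    volumes:
      - ./letsencrypt:/etc/letsencrypt
      - ./certbot-www:/var/www/certbot
    entrypoint: >
      sh -c 'trap exit TERM;
             while :; do certbot renew; sleep 12h & wait $${!}; done'
```

nginx then needs a reload after a renewal so it picks up the new certificate files.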
Thanks for your support in getting there so smoothly.
C.
All three solutions provide a reverse proxy (the first two based on nginx) and can take care of certificate renewal. I personally prefer Traefik, which allows configuring reverse proxy rules using labels on containers.
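To give an idea of what label-based configuration looks like with Traefik v2 (a sketch; the router name, host and port are made up for illustration):

```yaml
# docker-compose fragment: Traefik discovers this container and
# routes name1.my.dom to it, based purely on these labels.
services:
  myapp:
    image: myapp:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.myapp.rule=Host(`name1.my.dom`)"
      - "traefik.http.routers.myapp.entrypoints=websecure"
      - "traefik.http.routers.myapp.tls.certresolver=letsencrypt"
      - "traefik.http.services.myapp.loadbalancer.server.port=8080"
```

No central config file edit is needed: starting or stopping the container adds or removes the route.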
Let me digest for a moment. I’m ecstatic about having it all up and running (albeit in several containers) … and now you suggest moving it all to something new, which integrates all of that.
Don’t get me wrong: I’m glad you hinted me there. If I get it right:
using the first two, since they are nginx based, I could probably copy over the conf files (the 3rd is Windows-based, so not an option at present)
any of those gives a management layer/GUI on top of nginx to manage the reverse proxy more easily
they include certbot/renewal
Since my current certs are valid for a few months, it buys me time to try the migration. I guess that’s the beauty of Docker as well: I can just stop the nginx container and start the (for example) NPM container. If it doesn’t work, I simply turn nginx back on. All of that with a few clicks in the Docker/Portainer GUI. I start to see why Docker was the best hint a friend of mine gave me when starting this migration!
Have a nice day, and again thanks for assisting. It’s much appreciated.
Nearly 4pm on a Saturday: time to start the weekend
C.
Honestly, doing it manually usually has a good effect on learning new stuff, and as you figured, it allows going back to a working solution in case migrating to one of the other solutions doesn’t work right away.
I just wanted to give you options, so you can decide for yourself which path you walk down.
The other options can be overwhelming with the set of options they provide.
Note: Traefik images exist for linux and windows. It requires less resources than the nginx proxy manager, but might be the least beginner-friendly solution of them all, as it can’t be configured from the ui - the ui is just a dashboard. But the flexibility is worth it!
Migrated to NPM … in no time. So fortunate you got me there.
Indeed 1: NPM is not the most advanced, but super easy if it suits your needs, and it ticks 95% of my boxes for the time being.
Indeed 2: getting into the basics manually first is indeed a good start, as then you at least understand what the automation layer is doing/bringing (or not).
There is a further step you could take to make your setup more flexible: use a wildcard domain and wildcard certificates.
If the domain you use is managed and owned by you, and the Let's Encrypt client inside NPM supports the DNS challenge for your DNS provider, you could leverage a wildcard domain with a wildcard Let's Encrypt certificate. The beauty of a wildcard domain is that it acts like a catch-all for subdomains that are not configured. When you combine this with a wildcard certificate, adding a proxy rule that listens on a specific domain and forwards traffic to a specific container is the only thing you need to do to expose another container over https to the internet under a new subdomain (without having to register it in your DNS server).
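To illustrate, the wildcard setup boils down to one catch-all DNS record (sketched here as a zone-file fragment, reusing the placeholder names from earlier in the thread):

```
; catch-all record: any subdomain of my.dom resolves to the external
; IP, whether or not a proxy rule exists for it yet
*.my.dom.   IN  A   123.123.321.321
```

Combined with a wildcard certificate for *.my.dom obtained via the dns-01 challenge, a new proxy host entry in NPM (e.g. for a hypothetical newapp.my.dom) is then immediately reachable over https without touching DNS again.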