I will dig deeper, but it appears that there has to be some client-side cooperation, since SSH does not natively carry any hostname information the way HTTP does with the Host header. In particular, it seems like every client’s .ssh/config would need to be configured to account for the proxy in the middle, or something to that effect. If the intended audience of these containers were fairly technical, this would be perfectly acceptable. Sadly, typical users of your average shared web hosting service are not that technical.
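To illustrate the kind of client-side cooperation I mean, here is a hypothetical ~/.ssh/config entry (all hostnames are made up, and ProxyJump assumes some SSH gateway sitting in front of the containers):

```
# Hop through a gateway to reach the right container, since sshd itself
# never learns which hostname the client originally dialed.
Host sftp.client1.tld
    HostName sftp-internal.client1.tld   # container-side address
    User client1
    ProxyJump gateway.hosting.tld        # the proxy in the middle
```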
As I said, I will dig deeper, and I’m open to being proven wrong.
Hey @quinncom, the solution I found was to have a dedicated SFTP container mounting the htdocs volumes from the relevant containers, and restricting each user to their respective mount point. For the hostname issue, I found a non-solution: in practice you can log in to the SFTP box using any of the domains mapped to its IP address, because SSH doesn’t send the hostname as part of the connection request. I still think that’s a bit silly, but I guess the protocol doesn’t strictly need it to work.
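One way to do that per-user restriction is sshd’s built-in chroot support; here is a minimal sketch of what the SFTP container’s sshd_config could look like (user and path names are hypothetical):

```
# Jail each user inside their own mount point. sshd requires the chroot
# directory itself to be root-owned and not group/world-writable; the
# client's htdocs volume would be mounted at /srv/client1/htdocs.
Subsystem sftp internal-sftp

Match User client1
    ChrootDirectory /srv/client1
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```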
After setting this up, I sort of left it going for the few users who requested it at the time (they still use it), but I haven’t developed it further because life had other plans. If I were to pick this up again, I would probably look into FTPS with Let’s Encrypt (not quite as easily available back when I was doing this). Maybe that carries some form of hostname in the connection request? HTTPS sort of does and doesn’t (the hostname travels in the TLS SNI extension rather than in the HTTP request itself), and since FTPS runs over TLS too, maybe it does and doesn’t in the same way.
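If anyone wants to poke at this, Python’s ftplib is a quick way to drive an explicit-FTPS login; with a modern ssl context the hostname does go out as TLS SNI when the control channel is upgraded (the server name and credentials below are made up):

```
import ssl
from ftplib import FTP_TLS

ctx = ssl.create_default_context()   # verifies e.g. a Let's Encrypt chain

ftps = FTP_TLS(context=ctx)
ftps.connect("ftp.client1.tld", 21)  # hypothetical server
ftps.auth()                          # AUTH TLS: hostname is sent as SNI here
ftps.login("client1", "password")
ftps.prot_p()                        # encrypt the data channel too
print(ftps.nlst())
ftps.quit()
```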
Hey, it’s cool to hear you got the project working!
Each of our clients will deploy to their website in a different way: some by SFTP, some via git push post-receive hooks, etc. I would like to give SSH/SFTP access only to clients who really need it. The right solution seems to be the sidecar pattern: attaching a container running an appropriate service they can connect to. This means running multiple containers with SSH. Instead of proxying to these containers, I considered setting up a subdomain (ssh.client.tld) pointing directly to the IP address of the node with their containers, and then assigning each client a unique SSH port (22001, 22002, etc.) that routes to their SSH sidecar container.
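As a sketch of what I mean (image and volume names are made up), the sidecars could be published on those per-client ports with something like this in a compose file:

```
services:
  client1-ssh:
    image: my-sshd-sidecar            # hypothetical image running sshd/sftp
    ports:
      - "22001:22"                    # ssh -p 22001 client1@ssh.client1.tld
    volumes:
      - client1_htdocs:/home/client1/htdocs

  client2-ssh:
    image: my-sshd-sidecar
    ports:
      - "22002:22"
    volumes:
      - client2_htdocs:/home/client2/htdocs

volumes:
  client1_htdocs:
  client2_htdocs:
```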
Just want to leave this here: SSLH is an “Applicative Protocol Multiplexer (e.g. share SSH and HTTPS on the same port)”. I haven’t used it, but it seems it could solve the problem of routing SSH/SFTP connections from the ingress controller to the appropriate pod by accepting all connections on the same port.
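For anyone curious, sslh demultiplexes by protocol, so a minimal config (untested, and the backend addresses are placeholders) would listen on one port and fan SSH and TLS out to different backends:

```
# Probe each incoming connection and route it by protocol.
listen:
(
    { host: "0.0.0.0"; port: "443"; }
);

protocols:
(
    { name: "ssh"; host: "sftp-backend"; port: "22"; },     # SSH/SFTP traffic
    { name: "tls"; host: "ingress-backend"; port: "443"; }  # everything HTTPS
);
```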