SSH to container using host SSH service

Hi
I have been playing with docker for some time and have worked with various tools that also use docker containers behind the scenes.
One feature that some of these services have is that I can make an SSH connection, e.g. 1234@myhost.serviceprovider.tld, and authenticate using SSH keys. But instead of ending up as user 1234 on myhost.serviceprovider.tld, I end up as a user called appuser inside a container.
I have not been able to figure out how they do this, but it must be something where they detect that 1234 belongs to a container and then reroute the connection to the correct container on the corresponding container host.
Knowing a little about how SSH works, and that it is not easily possible to detect which hostname we connect to, something must have been done to listen for TCP connections on port 22, inspect the traffic to find out that it is SSH, and extract the parts containing the user and/or hostname.
Does anyone know how such a feature is implemented, either on the host directly or perhaps as a docker container, and at best an existing container image that handles these things?

Maybe not directly a docker question, but what you could do is use the SSH ForceCommand option in sshd_config:

Match User your-user
    ForceCommand ssh -t -p docker-ssh-container-port localhost
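This assumes the container runs its own SSH daemon and that its port is published on the host, for example (image name and port are just placeholders):

docker run -d --name ssh-container -p 2222:22 your-sshd-image

It also assumes your-user on the host can authenticate to the container's sshd, e.g. with a key.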

Did I understand you correctly?

ForceCommand mentioned by @terpz could be part of the solution. You don’t even need SSH inside the container.

Match User test
  ForceCommand docker exec -it test bash

The above setting in the sshd config works, but it means you cannot execute a command like:

ssh test@serverhostname whoami

So you can do something like this:

Match User test
  ForceCommand docker exec -it test ${SSH_ORIGINAL_COMMAND:-bash}

Then use -t with the ssh command:

ssh -t test@serverhostname whoami

or use it without a command to start bash in the container.

One downside of this solution is that you need to restart the SSH daemon every time you add a new Match block, but I may be wrong. On the other hand, you can use SSH keys to authenticate the users instead of a username and password. The authorized_keys file supports options like command=:

command="docker exec -it test ${SSH_ORIGINAL_COMMAND:-bash}" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMP9pnQ28bLk/aEDun4RurVOkm7HsA9obPreZbtmr3F1 OPTIONALCOMMENTHERE_LIKE_USERNAME

Then the client:

ssh -t -i ~/.ssh/containertest test@serverhostname

Of course, you can configure the SSH key in the SSH client config file (~/.ssh/config).
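Something along these lines should work (the host alias matches the earlier examples, but is otherwise arbitrary):

Host containertest
  HostName serverhostname
  User test
  IdentityFile ~/.ssh/containertest
  RequestTTY yes

Then a plain ssh containertest is enough.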

Both ways require that you set the proper user groups and docker socket group so the user can run docker. If you don’t want to allow the user to run every docker command, for safety’s sake you can create one shell script for each allowed command, and make it executable by the user but not writable.
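A minimal sketch of that idea (the path and container name are just examples): a root-owned wrapper that ignores whatever the client asked for and only ever opens a shell in one container:

#!/bin/bash
# /usr/local/bin/enter-test-container.sh
# Root-owned: executable but not writable by the user.
# SSH_ORIGINAL_COMMAND is deliberately ignored, so the key can trigger nothing else.
exec docker exec -it test bash

Then point command= in authorized_keys (or ForceCommand) at that script.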

SCP will not work this way, of course.

You could use a more complex script instead of this short command to avoid the need for -t, or use different SSH keys for commands that require a TTY.
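The multiple-keys variant could look roughly like this in authorized_keys (keys shortened; one key is used only for interactive logins, the other only for non-interactive commands):

command="docker exec -it test bash" ssh-ed25519 AAAA...interactivekey... interactive
command="docker exec test ${SSH_ORIGINAL_COMMAND:-true}" ssh-ed25519 AAAA...batchkey... batch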

You could use a more complex script instead of this short command to avoid the need for -t

What is the idea for that script you were talking about? Could you sketch out your idea in a bit more detail, please?

I am using
command="docker exec -it aws ${SSH_ORIGINAL_COMMAND:-bash}" in my authorized_keys file.
I would like to have a solution that will work with both:

ssh -t test@serverhostname /bin/ls

and

ssh test@serverhostname /bin/ls

The idea is using a “command.sh” in the “command” parameter and checking the command before running docker exec, so that -it is not passed to the exec subcommand when it is not necessary. The simplest check could be just checking whether SSH_ORIGINAL_COMMAND is empty, because -it is required to keep bash alive, which is the default command in my previous example. Here is an example command.sh:

if [[ "$SSH_ORIGINAL_COMMAND" == "" ]]; then
  docker exec -it test bash
else
  docker exec test $SSH_ORIGINAL_COMMAND
fi
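If you save this as, say, /usr/local/bin/command.sh and make it executable, the authorized_keys entry would point at the script instead of the inline docker exec:

command="/usr/local/bin/command.sh" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMP9pnQ28bLk/aEDun4RurVOkm7HsA9obPreZbtmr3F1 OPTIONALCOMMENTHERE_LIKE_USERNAME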

I didn’t test it. This is not a perfect solution, because other programs might require an interactive terminal as well, and you could not use ssh test@serverhostname bash (which is not really common anyway), but I don’t think there is a perfect solution.

Did anyone come up with a more elegant solution? Specifically something that is interactive and also prevents a user from just typing “exit” to get out to the host machine?

Have you tried my suggestion? Since my solution changes the SSH shell, even if the user uses the exit command, the SSH connection will terminate. Did you have a different experience?

I don’t think there is a “more elegant” solution. Containers are not designed for SSH users, but the actual “best” solution depends on your use case. If you don’t need the whole container, only a shared folder, you can run SSH in another container and mount the same folder into the SSH container as you mount into the application container. Or if you already have a container which runs multiple processes (for example using s6-init or supervisor), you can add SSH to it. If you don’t like any of these solutions, you can consider using LXC containers instead of Docker containers, which are almost like VMs since they have systemd too.
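The shared-folder variant could look roughly like this (image names, volume name and paths are placeholders):

docker volume create shared-data
docker run -d --name app -v shared-data:/data your-app-image
docker run -d --name ssh -v shared-data:/data -p 2222:22 your-sshd-image

The SSH container then exposes only the shared data, not the application container itself.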