
Directly SSH into a container hosted on an EC2 instance

docker
aws

(Guizello) #1

Hello everyone,

I've built a stack of Docker containers (with docker-compose) on an EC2 Ubuntu 16.04 LTS instance.

These containers run sshd (that is the goal). They are all part of an isolated network and everything works fine: each container can reach the others by name, thanks to Docker's DNS resolution inside the dedicated network.

What I would like to do now is access one of my containers directly from the outside world, the way I access my EC2 instance through its public IP.

What would you suggest for doing that?
Can I add a public IP to a container?
Do I have to add a gateway to the container?
Something else?

Thanks very much for your help!
guiz


(Sam) #2

Do the containers talk across EC2 instances on the private network?

I don't think you can expose the private network with its addresses to the outside world.
You will have to talk to the EC2 instance… just like you can't talk to the EC2 instance's actual (private) IP address from outside, only its public address. EC2 instances in the same VPC can talk to each other on their private network.

Now you have a problem, because there is only one port 22 (the SSH port) on each EC2 instance, and, say, ten containers…
They can't all map to it at once.


(Guizello) #3

Thanks very much for your help!

Yes, that's what I figured out… So the only way to access one of the containers would be to SSH into the EC2 instance as a bridge and then docker exec?
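
A minimal sketch of that bridge approach (the instance address and container name below are placeholders):

$ ssh ubuntu@<ec2-public-ip>                      # hop 1: SSH to the EC2 instance
$ docker exec -it my_sshd_container /bin/bash     # hop 2: open a shell inside the container

Note that docker exec gets you a shell without involving the container's sshd at all.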

guizello


(Sam) #4

Probably… Now, given that it is such a pain, why do you need to get into the container?
I would seriously examine that design…


(Guizello) #5

My bad…
I realize now that I don't have to access the container from outside, but from an EC2 instance that is in the VPC…
That instance is in a public subnet, and I want it to SSH into a container running on another instance in a private subnet.
And this should of course be possible, I think.


(David Maze) #6

You can use the totally normal docker run -p option to expose the per-container ports to the outside, and then have clients use ssh -p to provide an alternate port. Or, since you’re on EC2, having used docker run -p to expose a port, set up a load balancer to provide an external host name mapping from external port 22 to the per-service port. (If you have a lot of these, setting it up in Kubernetes is somewhat straightforward, once you get over the initial Kubernetes hump.)
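
For example, a minimal sketch of the first suggestion (image name and host port are assumptions):

$ docker run -d -p 2201:22 my-sshd-image    # publish the container's port 22 on host port 2201
$ ssh user@<ec2-public-ip> -p 2201          # client supplies the alternate port

The EC2 security group also has to allow the mapped port, which turns out to matter later in this thread.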


(Sam) #7

As David said, you CAN do it… Figuring out the port number to use for a specific container on an EC2 instance will be a challenge… (they all can't use the default SSH port)

Again, SSH into a container is really an anti-pattern for Docker… and you will have to do unnatural things to make it work… I would examine whether there are alternative mechanisms to avoid this approach…
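
If you do go this route, docker port tells you which host port a given container's port 22 landed on, which helps with that discovery problem (container name is a placeholder):

$ docker port my_sshd_container 22
0.0.0.0:32768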


(Guizello) #8

Yes, thanks to you both, I understand how I could do it. Really appreciate your help!

The thing is now that, as I said, I have to SSH into my container (which lives on an instance in a private subnet) from my instance (which is in my public subnet).

So I have to go through my public instance to SSH into the private-subnet instance's container.
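
From the public instance itself this is just ssh <private-instance-ip> -p <mapped-port>. From outside the VPC, the public instance can serve as a jump host; a sketch with placeholder names, assuming the container's sshd is published on host port 32768 of the private instance (ssh -W is standard OpenSSH):

$ ssh -o ProxyCommand="ssh -W %h:%p ubuntu@<public-instance-ip>" -p 32768 <user>@<private-instance-ip>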


(Guizello) #9

Hi again everyone. So I'm trying to SSH into a container on one EC2 instance from another EC2 instance.
These two instances are part of the same network.

Example: SSH to the container from the instance itself:

ubuntu@ip-10-0-1-190:$ docker ps
CONTAINER ID   IMAGE           COMMAND               CREATED         STATUS                  PORTS                   NAMES
8ed1cb98857f   ubuntu:latest   "/usr/sbin/sshd -D"   3 seconds ago   Up Less than a second   0.0.0.0:32768->22/tcp   elegant_turing

ubuntu@ip-10-0-1-190:$ ssh 10.0.1.190 -p 32768
Warning: Permanently added '[10.0.1.190]:32768' (ECDSA) to the list of known hosts.
ubuntu@10.0.1.190's password:

This is fine.

But from another instance, no SSH connection:

ubuntu@ip-10-0-1-18:$ ssh 10.0.1.190 -p 32768 -vvvvvv
OpenSSH_7.2p2 Ubuntu-4ubuntu2.4, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /home/ubuntu/.ssh/config
debug1: /home/ubuntu/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolving "10.0.1.190" port 32768
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to 10.0.1.190 [10.0.1.190] port 32768.

Would you guys have a suggestion?
Because these two instances can communicate:

ubuntu@ip-10-0-1-18:$ ssh 10.0.1.190
Warning: Permanently added '10.0.1.190' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-1049-aws x86_64)

29 packages can be updated.
0 updates are security updates.

Last login: Mon Jan 29 08:44:50 2018 from 10.0.0.111
ubuntu@ip-10-0-1-190:~$

Thanks!
Guillaume


(Sam) #10

Does your VPC security group allow connections on that port (32768 here, not just 22) from the network of the other instance?
Usually I set this to allow ONLY connections from my workstation outside Amazon (not from anyone).


(Guizello) #11

Thanks for your help!

Security groups have been added to allow this (inbound/outbound) inside the subnet.

In my example, 10.0.1.190 and 10.0.1.18 are able to SSH into each other.

When I create a container on one of these instances, I assumed the container counts as part of the instance, reachable through the port I dedicated to it.

So when I SSH to that port on the instance, shouldn't the same security rules apply?
Do you think I have to add something else?

Guillaume


(Sam) #12

Well, it's always fun when the containers have their own addresses that Amazon doesn't know about, so its security rules will exclude them.

The only reliable way is to map the container port to a host port and then SSH to that… Of course, now you have the mapped-port-to-container problem.


(Guizello) #13

Oh, OK! I guess you mean this type of action: https://docs.docker.com/engine/userguide/networking/default_network/binding ?
I'm going to try it.

Thanks!


(Sam) #14

(-p hhh:ccc is a docker run option)

Right: container 1 maps host port 1022 to container 1's port 22: -p 1022:22
Container 2 maps host port 2022 to container 2's port 22: -p 2022:22

etc…

Then from the outside you would SSH to the EC2 instance address plus the appropriate container's mapped port…

To SSH to container 1: ssh user@ec2_instance_ip -p 1022

etc.
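
Since the original stack is built with docker-compose, the same fixed mappings can live in the compose file. A minimal sketch (service and image names are assumptions):

# docker-compose.yml (sketch; image name is hypothetical)
version: '2'
services:
  ssh1:
    image: my-sshd-image
    ports:
      - "1022:22"   # host port 1022 -> container 1's sshd
  ssh2:
    image: my-sshd-image
    ports:
      - "2022:22"   # host port 2022 -> container 2's sshd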


(Guizello) #15

All right, that was not easy to check directly, but the problem was that I had only opened the SSH port in my AWS security group.
Once I added the right port range to the security group rules, everything is OK!
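
For reference, opening such a range with the AWS CLI looks roughly like this (group ID and CIDR are placeholders; Docker assigns ephemeral host ports starting at 32768 by default):

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-01234567 \
    --protocol tcp \
    --port 32768-61000 \
    --cidr 10.0.0.0/16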

Thanks for your help again!
Guillaume