I have a host in Amazon EC2 (Ubuntu 20.04 based) where I run a Docker engine. The host allows SSH because it's open to our office IP address.
From my office, when I want to control the Docker engine, I set my `DOCKER_HOST` variable and can do all the work remotely.
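Concretely, this is what I do today (the user and hostname below are placeholders, not my real ones):

```shell
# Point the docker client at the remote engine over SSH:
export DOCKER_HOST=ssh://ubuntu@ec2-host.example.com

# From here on, every docker command is executed remotely, e.g.:
# docker ps          # each invocation opens a fresh SSH connection
# docker image ls    # ...and pays the full handshake cost again
```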
But it's extremely slow, as each command performs the full SSH negotiation. Every `docker ps`, volume list, image list, or network list takes its own 3 or 4 seconds.
However, if I open a shell and SSH into the machine, only the first connection takes 3 or 4 seconds; once in, any keystroke is echoed within a few milliseconds.
If I plan to run "many commands" in a single session, but need to run them remotely (for example a `docker build` here, not there), everything becomes very slow because of the 3-4 second delay each time I issue a docker command.
Discarded first approach
I know that if I used an SSH tunnel and configured the engine to listen on a TCP socket instead of the local Unix socket, I could tunnel in and point `DOCKER_HOST` at a local TCP port (forwarded transparently by SSH to the Docker daemon), instead of using the env var to instruct the docker client to go via the SSH protocol.
But this would require me to reconfigure the Docker engine to listen on TCP.
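For clarity, the discarded setup would look roughly like this (hostname and port are placeholders, and it assumes the daemon had been reconfigured to also listen on `tcp://127.0.0.1:2375`, which is exactly the step I want to avoid):

```shell
# Forward a local TCP port to the daemon's TCP socket on the EC2 host
# (runs in the background; -N = no remote command, just forwarding):
ssh -N -L 2375:127.0.0.1:2375 ubuntu@ec2-host.example.com &

# Point the docker client at the local end of the tunnel:
export DOCKER_HOST=tcp://127.0.0.1:2375
```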
Can I tell the docker client to reuse the SSH connection over and over, saving me the negotiation time, so that the remotely executed commands feel "immediate"?
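I suspect OpenSSH connection multiplexing (`ControlMaster`) might be the mechanism, since the docker client shells out to `ssh`. Something like this `~/.ssh/config` fragment is what I have in mind (the host alias and control path are placeholders, and I haven't verified this works with `DOCKER_HOST=ssh://`):

```
Host ec2-docker
    HostName ec2-host.example.com
    User ubuntu
    # Reuse one master connection for all subsequent sessions:
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    # Keep the master alive for 10 minutes after the last session:
    ControlPersist 10m
```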
Alternatively: can I tunnel in via SSH and forward the docker-client-to-docker-engine traffic over something other than TCP (for example, exposing a Unix socket instead of a TCP socket, or similar)?
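What I have in mind is something like the following sketch. I believe OpenSSH (6.7 and later) can forward Unix-domain sockets with `-L`, but I haven't tried it against the Docker socket; the hostname and local socket path are placeholders:

```shell
# Forward a local Unix socket to the remote daemon's socket
# (-n/-N/-T: no stdin, no remote command, no pty; runs in background):
ssh -nNT -L /tmp/remote-docker.sock:/var/run/docker.sock ubuntu@ec2-host.example.com &

# Point the docker client at the local end of the forwarded socket:
export DOCKER_HOST=unix:///tmp/remote-docker.sock
```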