Connect via SSH to an Ubuntu container on a Synology NAS

I am learning Docker and have taken my first steps.

My goal is to put a web app (Python + MariaDB) that I have developed locally on my laptop online (as a Docker container on my Synology NAS).

On my laptop I have Docker Desktop running, and on my NAS the Docker app is running with a few containers. I have a question about the latter.

One of the containers I’m running is instantiated from the official Ubuntu image. I can also work with it via the terminal in the Docker app, so far so good.

But I want to SSH into it from my laptop to work with it, and that doesn’t work. I have installed SSH, but there are no ports available to connect to. I think I’m doing something wrong when creating/starting the container (mapping container ports to host ports?). I’ve been working on it for two days now and I’m stuck, so I need some help here …

This is my first post on this forum, and I hope I have followed the forum rules.


Why do you really want to SSH directly into the container? If you can access the host, you can enter the container using docker exec. Never install SSH in a container unless it is really necessary, which it usually is not.
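For example (“mycontainer” is a placeholder for your container’s name):

```bash
# Open an interactive shell in a running container - no SSH needed.
docker exec -it mycontainer bash
```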

See SSH to container using host SSH service

If you are the only one who uses Docker on that machine, you can also use docker exec on your local machine after setting up an SSH Docker context. See How do you create a context for a remote tls daemon? - #2 by rimelek
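A sketch of how that could look (the context name, user and hostname are placeholders):

```bash
# Create a context that talks to the remote Docker daemon over SSH;
# after switching to it, every docker command runs against the NAS.
docker context create nas --docker "host=ssh://myuser@synology.local"
docker context use nas
docker ps                          # lists the containers on the NAS
docker exec -it mycontainer bash   # shell into a container on the NAS
```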

It’s important to understand that a Docker container is NOT a VM running an OS that you can do whatever you want with (well, you can, but that’s not the intention).

A container is your application, with only the components necessary to run it. So once you have started your application, let’s say a web server, you open a port for web traffic to that container and let it do its job: being a web server.
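For example (nginx here is just a stand-in web server):

```bash
# Publish container port 80 on host port 8080; after this,
# http://<host-ip>:8080 reaches the web server inside the container.
docker run -d --name web -p 8080:80 nginx
```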

Thanks for your responses. I need to do a bit of research on what you guys are pointing out, but the big picture is clear to me.

I have a clear idea of where I eventually want to go: an image that I make myself on my laptop with a working web application (core components: Python, Flask and MariaDB). I’ll have to write my own Dockerfile for that, plus publish a port to make the application accessible and mount volumes to persist the data generated with the application.

I then use that Dockerfile to instantiate a container on my Synology (my production environment) for “my customers”.
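Roughly, I picture something like this (just a sketch, assuming my Flask app listens on port 5000 inside the container; all names and paths are placeholders):

```bash
# Build the image from my Dockerfile, then run it with a published
# port and a bind-mounted NAS folder to persist the generated data.
docker build -t mywebapp .
docker run -d --name mywebapp \
  -p 8080:5000 \
  -v /volume1/docker/mywebapp-data:/data \
  mywebapp
```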

That is quite a project and I want to get there step by step. My concern now is to better understand Docker, and I don’t understand the Ubuntu image. I can run the container (on my Synology), but what can I do with it? I can only access it via the terminal of the Synology Docker app (and not via a session from, for example, my laptop). How can I make the container accessible to others (developers and end users), and how do I persist data (via mounting volumes…)?

Maybe it is better not to use the app on the Synology and to use the CLI instead (e.g. for the docker exec command mentioned by Akos)?

Maybe I’m thinking about this the wrong way and my use case is not suitable for a solution with Docker. Lots of new ideas and insights; I think it all has to fall into place a bit. I am curious about your reactions and suggestions.

UPDATE

I successfully started a session using docker exec:

  • SSH into the Synology
  • sudo docker ps -a → “name of the container”
  • sudo docker exec -t -i “name of the container” bash → root@“name of the container”

But the questions about the approach to “my issue” have not yet been resolved :wink:

Hi again.

First, combining all those things in one container is not best practice. Instead you want one application per image/container, meaning that your web app should be in one container (maybe called app) and you make another one for your database (db). Those two containers can communicate, so you can, for example, update your app without breaking your database.
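For example, something like this (just a sketch; the names, image tags and the password are placeholders):

```bash
# One container per service, joined on a user-defined network so they
# can reach each other by container name.
docker network create mynet

docker run -d --name db --network mynet \
  -e MARIADB_ROOT_PASSWORD=secret \
  -v dbdata:/var/lib/mysql \
  mariadb

docker run -d --name app --network mynet \
  -p 5000:5000 \
  mywebapp
# inside "app", the database is now reachable at the hostname "db"
```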

There are different approaches to sharing a container. Normally you don’t want your developers or users to work IN the container; instead you might want to build an image where, the first time the container starts, it syncs from git (or something like that). That way it will always contain the newest code from git.
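As a sketch, the entrypoint of such an image could look something like this (REPO_URL, /app and app.py are placeholders):

```bash
#!/bin/sh
set -e
if [ ! -d /app/.git ]; then
    git clone "$REPO_URL" /app     # first start: clone the repo
else
    git -C /app pull               # later starts: pull the newest code
fi
exec python /app/app.py            # hand over to the application
```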

There is also a way to let a build tool, or Docker Hub, build your image with the new code whenever there is an update to the git repo.

Hope that makes sense :slight_smile:

What do you mean by session? If you want to access it from the terminal, that is what the SSH context is for. If you want to access the website, you need port forwards.

Since you already know the way (volumes), I can’t tell you more without a more specific question. This is something you can find on this forum (search icon in the upper right corner) or in the documentation.

I also agree with everything that Martin wrote, although I would not update the content of a container automatically, since the required Linux packages or settings could differ between versions of the source code, which makes it harder to handle. But it is a valid approach to start with on the way to understanding Docker.

If you have more questions on these topics, I recommend you open a new topic (unless you find an answer in another topic or in the docs), because they have nothing to do with the SSH connection the original question was about.

Thanks for your responses and apologies for this late reaction.

Your comments really helped, and my research now focuses on ‘volumes for exchange’ and ‘cooperating containers’ (Docker Compose). The latter seems to be the right way to go, especially if I later want to make my Flask application more robust with other services like Gunicorn and Nginx.
The idea of letting the container get the latest version from git at boot time is also an eye-opener. I’ll leave the build tool for now, since I don’t see the need for it in my project at this point (maybe later).

I shall close the topic and will open a new one if needed. Thanks again for your help. Regards, Ko