Can I start/stop Linux services running on the host from within a Docker container?
On my former platform (without Docker) I had a web application to configure and start/stop standard Linux host services such as dhcpcd and Ethernet interfaces, timedatectl, Avahi…
A few days ago I did the exercise of porting the web app to a Docker container, and now I'm wondering how to enable this feature in my containerized web app.
Any hints are appreciated
Thanks for your response. SSH into the host is not an option. The device is a kind of industrial embedded gateway and it's supposed to be configured via the web by non-Linux experts.
E.g. the web configuration of the host's eth0 looks like this. By clicking the save button I rewrite /etc/dhcpcd.conf and restart the dhcpcd service by executing sudo systemctl restart dhcpcd.service.
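Simplified, the save handler boils down to something like this (build_dhcpcd_conf is a stand-in for my real config writer, and the sudo call only works because the app currently runs directly on the host):

```python
import subprocess

def build_dhcpcd_conf(interface, ip_cidr, router, dns):
    """Render a minimal static-IP section for /etc/dhcpcd.conf."""
    return (
        f"interface {interface}\n"
        f"static ip_address={ip_cidr}\n"
        f"static routers={router}\n"
        f"static domain_name_servers={dns}\n"
    )

def save_and_restart(cfg_text, path="/etc/dhcpcd.conf"):
    # Rewrite the config file, then restart the service on the host.
    with open(path, "w") as f:
        f.write(cfg_text)
    return subprocess.run(
        ["sudo", "systemctl", "restart", "dhcpcd.service"]
    ).returncode
```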
We use Docker or other kinds of containers to isolate our processes from the host. Depending on how you need to stop a process on the host, that might be impossible. There are some options, but I think those won't help you.
You containerize a web app, which means the app will have its own mount namespace, so many operations that are based on the filesystem, like using Unix sockets, would not work unless you mount that socket with proper privileges and also have a client that can use it. You can't execute a command on the host when you are in a container, because you might not even have that command in the container, and even if you have it, you would run it in an isolated environment, not affecting the host at all.
That is why @bluepuma77 recommended SSH-ing to the host, because SSH can be used to access other physical hosts too. It is common practice to use it for running remote commands, but then you need an SSH client in the container. Some programming languages support SSH without directly running the ssh command, but if the host does not run the SSH daemon or can't be configured to accept the connection, that's not an option indeed.
However, SSH is based on network communication, and that could also be an API running on the host without containers, listening on a port that can be accessed from the container but not from the outside world. For example, it could listen on the default Docker bridge's gateway. Then you can send a REST API request to the API on the host, which would then stop or start the service. I guess that would not be a big help, because then all you would do is dockerize a frontend that does nothing except send requests to its own backend in the container to forward them to the host, where all the important logic is implemented.
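A minimal sketch of such a host-side API using only the Python standard library; the unit allow-list, the port, and the endpoint layout are made up for the example:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Only a fixed allow-list of units can be controlled from the container.
ALLOWED_UNITS = {"dhcpcd", "avahi-daemon"}
ACTIONS = {"start", "stop", "restart"}

def route_to_command(path):
    """Map e.g. POST /services/dhcpcd/restart to a systemctl command,
    or return None for anything not on the allow-list."""
    parts = path.strip("/").split("/")
    if len(parts) == 3 and parts[0] == "services":
        _, unit, action = parts
        if unit in ALLOWED_UNITS and action in ACTIONS:
            return ["systemctl", action, f"{unit}.service"]
    return None

class HostApi(BaseHTTPRequestHandler):
    def do_POST(self):
        cmd = route_to_command(self.path)
        if cmd is None:
            self.send_response(404)
            self.end_headers()
            return
        rc = subprocess.run(cmd).returncode
        self.send_response(200 if rc == 0 else 500)
        self.end_headers()

# To actually serve it on the host (not in a container):
#   HTTPServer(("172.17.0.1", 8090), HostApi).serve_forever()
# 172.17.0.1 is the default docker0 gateway, so it is reachable from
# containers on the default bridge but not from the outside world.
```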
You could also send signals to processes. For example, docker stop just sends a TERM signal to the process in the container (or another signal, depending on what STOPSIGNAL was configured for the container) when you want to stop a container. So you could start the webapp container without its own process namespace (--pid host) and use the kill command to send the TERM signal to the process, if you know exactly which process you need to send that signal to. Which you might not, so first you would need to implement a feature to determine what you need to stop.
In case you know the process ID and it is 123:
kill -SIGTERM 123
But you also need to be sure that the TERM signal is what the process requires to stop. That is usually the case, but it can be different. If I remember correctly, Apache HTTPD requires WINCH, not TERM, for a graceful stop. If you want to reload something, that usually requires the HUP signal.
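The same signal can be sent from Python with os.kill; here is a self-contained demo with a throwaway child process standing in for the host service:

```python
import os
import signal
import subprocess
import sys

# Throwaway child process standing in for the service we want to stop.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

# Equivalent of `kill -SIGTERM <pid>`.
os.kill(proc.pid, signal.SIGTERM)

# On Linux, a negative return code means the process died from that signal.
rc = proc.wait()
print(rc)  # -15, i.e. -signal.SIGTERM
```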
So I would say there is nothing wrong with running a webapp on the host without containers, especially when you need to access the host. I would not isolate my app just to break the isolation immediately, but having a REST API on the host and a more secure web interface for the users could work too, so it is up to you.
Using the host's process namespace (meaning: not using a separate process namespace) would not work with Docker Desktop, since that would share the process namespace with the virtual machine that Docker Desktop runs, not with your actual host.
I’m not sure I understand what you mean. I’m pretty sure you know what you are talking about but the quoted sentence is confusing to me. Docker Desktop supports using host.docker.internal which would access the host’s localhost. Is this what you meant?
Thanks for your comprehensive summary
My host runs an SSH server, but it's not meant to be used by end users. However, I can use it internally in my software to create a connection from a container to the host to execute a command.
Below is my preliminary plan for how to structure the Docker containers on my embedded gateway:
* Frontend: Nginx reverse proxy and React web app
* Backend: Python with Flask + REST API and Gunicorn (with endpoints to set the IP config, Avahi, start/stop/restart systemd services, and configure sensors attached to the gateway)
* Sensor: sensor data gathering software in Python
Would D-Bus be an alternative way to communicate between the Docker container and the host to start/stop those services? Then only the host's /etc folder would need to be mounted into the container to have access to the system configuration files.
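If the host's system bus socket is mounted into the container (e.g. -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket) and busctl is installed in the image, calling systemd's D-Bus API could be sketched like this; the helper only builds the busctl command line:

```python
import subprocess

def systemd_dbus_call(unit, method="RestartUnit", mode="replace"):
    """Build a busctl invocation of systemd's D-Bus manager API.
    RestartUnit takes two strings (signature "ss"): unit name and mode."""
    return [
        "busctl", "call",
        "org.freedesktop.systemd1",          # bus name
        "/org/freedesktop/systemd1",         # object path
        "org.freedesktop.systemd1.Manager",  # interface
        method,
        "ss", unit, mode,                    # signature + arguments
    ]

def restart_unit(unit):
    # Runs against the system bus socket mounted from the host.
    return subprocess.run(systemd_dbus_call(unit)).returncode
```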
For this test approach I am using RASPI4 hardware with Debian Bullseye and Kernel 5.15.84.
My Docker container is based on python:3.10-slim with a few extensions.
I fully agree with your opinion about breaking the Docker isolation principle, but especially with embedded Linux gateway projects there are many use cases where you won't have remote access to the IoT device.
That's why users need a simple web UI that allows them to initiate basic system changes. On the other hand, I would like to preserve the scalability of Docker containers.
I will give it a try, see how far I get, and hopefully get back with results soon.
Mounting the D-Bus socket:
I presume I have to mount the folder with the D-Bus system bus socket (which systemd system services use) from my Debian host into the container.
Good tip! It might solve a few other minor features I need to support in my container:
I also need read access to:
RASPI4 Serial#: f = open('/sys/firmware/devicetree/base/serial-number', 'r')
CPU CORE Temperature: f = open('/sys/class/thermal/thermal_zone0/temp', 'r')
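Reading and parsing those two files could look like this (the parsing helpers are split out; /sys is normally visible inside a container, but the devicetree path only exists on the Pi, and the serial-number file is NUL-terminated):

```python
def parse_serial(raw: bytes) -> str:
    # Devicetree strings are NUL-terminated.
    return raw.rstrip(b"\x00").decode()

def parse_temp(raw: str) -> float:
    # sysfs thermal zones report millidegrees Celsius.
    return int(raw.strip()) / 1000.0

def read_raspi_info():
    with open("/sys/firmware/devicetree/base/serial-number", "rb") as f:
        serial = parse_serial(f.read())
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        temp_c = parse_temp(f.read())
    return serial, temp_c
```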
As I have no clue at the moment how to access the D-Bus socket, I will optionally try the suggestion to establish an SSH connection from the container to the host with the Python Paramiko library: Python SSH Tutorial | DevDungeon
There is also an example of how to run commands over SSH. Then I could simply adapt my current code, which invokes shell commands with Python's built-in subprocess module.
ret = shell_exe("sudo /usr/bin/systemctl restart dhcpcd.service")
ret = shell_exe("sudo /usr/bin/ip address flush eth0")
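A rough Paramiko equivalent of those shell_exe calls might look like this; the host address, user, and key path are placeholders, and the alias allow-list keeps the container from sending arbitrary commands (the third-party import is kept inside the function so the helper can be used on its own):

```python
# Map short aliases to the exact commands we allow on the host.
ALLOWED = {
    "restart-dhcp": "sudo /usr/bin/systemctl restart dhcpcd.service",
    "flush-eth0": "sudo /usr/bin/ip address flush eth0",
}

def command_for(alias):
    """Translate a short alias into the exact allowed command, or None."""
    return ALLOWED.get(alias)

def run_on_host(alias, host="172.17.0.1", user="gateway",
                key="/run/secrets/host_key"):
    import paramiko  # third-party; imported lazily
    cmd = command_for(alias)
    if cmd is None:
        raise ValueError(f"alias not allowed: {alias}")
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, key_filename=key)
    try:
        _, stdout, _ = client.exec_command(cmd)
        return stdout.channel.recv_exit_status()
    finally:
        client.close()
```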
That is the "easy way", which was implemented early in Docker, but you can use capabilities instead (Capabilities | dockerlabs) and allow only what you need.
To be honest, I don't know what Paramiko supports and what it doesn't, but if you want to be more secure, you could create a custom SSH shell on the server using SSH keys and the "command" parameter in the authorized_keys file. That way you can create a single shell script which takes arguments like "restart-dhcp", and if it gets anything that is not supposed to be executed through that SSH connection, the script does nothing. So even if someone steals the SSH key or gets into the container through the webapp somehow, they can't get full access. This is how Git servers limit what you can do when you use an SSH connection instead of HTTPS.
Then use this command from the container: ssh <user>@<hostname> restart-dhcp, and use "case" in the shell script to decide what you actually need to run. Or you can refer to the variable directly in the shell script, but then you can't run the script without SSH unless you set the variable manually. All solutions are correct; it depends on what you prefer.
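Since the backend is Python anyway, the forced command could also be a small Python dispatcher instead of a shell script; sshd puts the client's requested command into the SSH_ORIGINAL_COMMAND environment variable (the script path and key line below are placeholders):

```python
#!/usr/bin/env python3
"""Forced-command dispatcher, referenced from authorized_keys like:
   command="/usr/local/bin/ssh-dispatch" ssh-ed25519 AAAA... container-key
sshd sets SSH_ORIGINAL_COMMAND to whatever the client asked to run."""
import os
import subprocess
import sys

# The only commands this key is ever allowed to trigger.
COMMANDS = {
    "restart-dhcp": ["systemctl", "restart", "dhcpcd.service"],
    "restart-avahi": ["systemctl", "restart", "avahi-daemon.service"],
}

def dispatch(requested):
    """Return the allowed command for the request, or None for anything else."""
    return COMMANDS.get((requested or "").strip())

if __name__ == "__main__" and "SSH_ORIGINAL_COMMAND" in os.environ:
    cmd = dispatch(os.environ["SSH_ORIGINAL_COMMAND"])
    if cmd is None:
        sys.exit("not allowed")
    sys.exit(subprocess.run(cmd).returncode)
```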