Start/stop Linux host services from Docker container


Can I start/stop Linux services running on the host from within a Docker container?

On my former platform (without Docker) I had a web application to configure and start/stop standard Linux host services such as dhcpcd and Ethernet interfaces, timedatectl, AVAHI…
A few days ago I did the exercise of porting the web app to a Docker container, and now I'm wondering how to enable this feature in my containerized web app.
Any hints are appreciated

You probably use the command line to do so, so you would need to SSH into your host using its hostname or IP address.

Note that accessing the host's localhost from within a container usually only works with Docker Desktop.

Thanks for your response. SSH into the host is not an option. The device is a kind of industrial embedded gateway, and it's supposed to be configured via web by non-Linux experts :slight_smile:
For example, the web configuration of the host's eth0 looks like this. By clicking the save button I re-write /etc/dhcpcd.conf and restart the dhcpcd service by executing
sudo systemctl restart dhcpcd.service.
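The save handler behind that button looks roughly like this (a sketch; the function names and config fields are my own, only the dhcpcd.conf static-IP syntax and the restart command are real):

```python
# Sketch of the "save" handler: re-write /etc/dhcpcd.conf, restart dhcpcd.
# Helper names and parameters are made up for illustration.
import subprocess
from pathlib import Path

def render_static_config(interface, ip_cidr, router, dns):
    # Build a minimal static-IP stanza in dhcpcd.conf syntax.
    return (
        f"interface {interface}\n"
        f"static ip_address={ip_cidr}\n"
        f"static routers={router}\n"
        f"static domain_name_servers={dns}\n"
    )

def save_and_restart(text, conf_path="/etc/dhcpcd.conf"):
    # This is exactly the part that stops working from inside a container
    # without extra plumbing: the file and systemd live on the host.
    Path(conf_path).write_text(text)
    subprocess.run(
        ["sudo", "systemctl", "restart", "dhcpcd.service"], check=True
    )
```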

We use Docker or other kinds of containers to isolate our processes from the host. Depending on how you need to stop a process on the host, that might be impossible. There are some options, but I don't think they will help you.

You containerize a web app, which means the app will have its own mount namespace, so many operations based on the filesystem, like using Unix sockets, would not work unless you mount that socket with proper privileges and also have a client that can use it. You can't execute a command on the host while you are in a container, because you might not even have that command in the container, and even if you do, you would run it in an isolated environment, not affecting the host at all.

That is why @bluepuma77 recommended SSH-ing into the host, because SSH can be used to access other physical hosts too. It is common practice to use it for running remote commands, but then you need an SSH client in the container. Some programming languages support SSH without directly running the ssh command, but if the host does not run the SSH daemon, or can't be configured to accept the connection, that's indeed not an option.

However, SSH is based on network communication, and there could also be an API running on the host (without containers) that listens on a port accessible from the container but not from the outside world. For example, it could listen on the default Docker bridge's gateway. Then you could send a REST API request to the API on the host, which would then stop or start the service on the host. I guess that would not be a big help, because then all you would do is dockerize a frontend that does nothing except send requests to its own backend in the container to forward them to the host, where all the important logic is implemented.
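For illustration, a minimal sketch of such a host-side API using only the Python standard library (the unit whitelist, the port and the docker0 gateway address 172.17.0.1 are my assumptions, not something Docker dictates):

```python
# Sketch: tiny control API running on the host, outside any container.
# Whitelist, port and bind address below are assumptions for illustration.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ACTIONS = {"start", "stop", "restart"}
ALLOWED_UNITS = {"dhcpcd.service", "avahi-daemon.service"}

def parse_request(path):
    # Expect paths like /restart/dhcpcd.service; reject everything else.
    action, _, unit = path.lstrip("/").partition("/")
    if action in ALLOWED_ACTIONS and unit in ALLOWED_UNITS:
        return action, unit
    return None

class ControlHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        parsed = parse_request(self.path)
        if parsed is None:
            self.send_error(403, "not allowed")
            return
        subprocess.run(["systemctl", *parsed], check=False)
        self.send_response(204)
        self.end_headers()

def serve(bind="172.17.0.1", port=8700):
    # 172.17.0.1 is the default docker0 bridge gateway; binding there keeps
    # the API reachable from containers but not from other machines.
    HTTPServer((bind, port), ControlHandler).serve_forever()
```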

You could also send signals to processes. For example, docker stop just sends a TERM signal to the process in the container (or another signal, depending on what STOPSIGNAL was configured for the container) when you want to stop it. So you could start the webapp container without asking for its own process namespace and use the "kill" command to send the TERM signal to the process, if you know exactly which process you need to send it to. You might not, so first you would need to implement a feature to determine what you need to stop.

In case you know the process id and if it is 123:

kill -SIGTERM 123

But you also need to be sure that the TERM signal is what the process requires to stop. That is usually the case, but it can differ. If I remember correctly, Apache HTTPD requires WINCH, not TERM, for a graceful stop. If you want to reload something, that usually requires the HUP signal.
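From Python, the same signal approach would look like this (assuming the container was started without its own PID namespace, e.g. with --pid=host, so the host's PIDs are visible at all):

```python
# Sketch: sending signals from Python instead of the `kill` command.
# Assumption: the container shares the host's PID namespace (--pid=host).
import os
import signal

def stop_process(pid):
    os.kill(pid, signal.SIGTERM)   # polite stop request

def reload_process(pid):
    os.kill(pid, signal.SIGHUP)    # many daemons re-read their config on HUP
```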

So I would say there is nothing wrong with running a webapp on the host without containers, especially not when you want to access the host. I would not isolate my app just to break the isolation immediately, but having a REST API on the host and a more secure web interface for the users could work too, so it is up to you.

Using the host's process namespace (meaning: not using a separate process namespace) would not work with Docker Desktop, since that would share the process namespace with the "host" running in Docker Desktop's virtual machine, not with your actual machine.

I'm not sure I understand what you mean. I'm pretty sure you know what you are talking about :slight_smile: but the quoted sentence is confusing to me. Docker Desktop supports host.docker.internal, which resolves to the host's localhost. Is that what you meant?

Thanks for your comprehensive summary
My host runs an SSH server, but it's not meant to be used by end users. However, I can use it internally in my software to create a connection from a container to the host to execute a command.
Below please find my preliminary plan for how to structure the Docker containers on my embedded gateway:

  • Frontend: Nginx reverse proxy and React web app
  • Backend: Python with Flask + REST API and Gunicorn (with endpoints to set the IP config, AVAHI, start/stop/restart systemd services and configure sensors attached to the gateway)
  • Sensor: sensor data gathering software in Python
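A rough sketch of how I imagine wiring this together with Compose (all image names, mounts and devices below are placeholders, nothing is decided yet):

```yaml
# Hypothetical compose file for the three-container layout above.
services:
  frontend:
    image: gateway-frontend        # nginx reverse proxy + built React app
    ports:
      - "80:80"
    depends_on:
      - backend
  backend:
    image: gateway-backend         # Flask + Gunicorn REST API
    volumes:
      # only if the dbus route is chosen; see the question below
      - /run/dbus/system_bus_socket:/run/dbus/system_bus_socket
  sensor:
    image: gateway-sensor          # Python sensor data gathering
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0  # assumption: serial-attached sensor
```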

Another question:
Would "dbus" be an alternative way to communicate between the Docker container and the host to start/stop those services? Then the host's /etc folder would just need to be mounted into the container to have access to the system configuration files.

A lot of people think they can just use localhost within a container to connect to the localhost of their host.

I wanted to say that this only works with Docker Desktop, as containers are usually fully isolated with a regular Docker install.

There might be some options to interface with the host system via dbus

  1. Mounting dbus socket
  2. Access host by IP
  3. Use host networking
  4. Use privileged mode

Disclaimer: nothing tested by myself

They all kind of go against the principle of isolation. Make sure your container is secured so no one can get into the container and then access your host.

For this test approach I am using RASPI4 hardware with Debian Bullseye and kernel 5.15.84.
My Docker container is based on python:3.10-slim with a few extensions.

I fully agree with your opinion about breaking the Docker isolation principle, but especially with embedded Linux gateway projects there are many use cases where you won't have remote access to the IoT device.
That's why users need a simple web UI that allows them to initiate basic system changes. On the other hand, I would like to preserve the scalability of the Docker containers.

I will give it a try and see how far I’ll come and hopefully get back with results soon :wink:

Mounting dbus socket:

I presume I have to mount the folder with dbus user session socket from my Debian host into the container.
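Something like this is what I have in mind (untested sketch; it assumes the host's system bus socket at /run/dbus/system_bus_socket is what gets mounted, since systemd services live on the system bus, and that the third-party pydbus package is installed in the image):

```python
# Untested sketch: restart a systemd unit on the host over D-Bus.
# Assumptions: /run/dbus/system_bus_socket is mounted into the container
# and `pip install pydbus` was done in the image. Unit names are examples.
ALLOWED_UNITS = {"dhcpcd.service", "avahi-daemon.service"}

def check_unit(unit):
    # Whitelist before touching the bus at all.
    if unit not in ALLOWED_UNITS:
        raise ValueError(f"unit {unit!r} not allowed")
    return unit

def restart_unit(unit):
    from pydbus import SystemBus  # third-party: pip install pydbus
    systemd = SystemBus().get(".systemd1")  # org.freedesktop.systemd1
    # "replace" replaces any conflicting queued job for the unit
    systemd.RestartUnit(check_unit(unit), "replace")
```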

Access host by IP
Use host networking
Use privileged mode

Is this privileged mode really required?

I know that with privileged mode the container can read host sensors, like temperature.

Those are different options you could try, they do not need to be combined.

Good tip! It might solve a few other minor features I need to support in my container:
I also need access to read

  • RASPI4 Serial#: f = open('/sys/firmware/devicetree/base/serial-number', 'r')

  • CPU CORE Temperature: f = open('/sys/class/thermal/thermal_zone0/temp', 'r')
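Wrapped with a little parsing, those two reads would look like this (the sysfs paths are the ones above; the function names and units handling are my own):

```python
# Sketch: read the RASPI4 serial number and CPU temperature from sysfs.
from pathlib import Path

def read_serial(path="/sys/firmware/devicetree/base/serial-number"):
    # devicetree string properties are NUL-terminated
    return Path(path).read_bytes().rstrip(b"\x00").decode()

def read_cpu_temp_c(path="/sys/class/thermal/thermal_zone0/temp"):
    # the kernel reports millidegrees Celsius as plain text
    return int(Path(path).read_text().strip()) / 1000.0
```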

As I have no clue at the moment how to access the dbus socket, I will optionally try the suggestion to establish an SSH connection from the container to the host with the Python Paramiko library: Python SSH Tutorial | DevDungeon
There is also an example of how to run commands over SSH. Then I could simply adapt my current code, which invokes shell commands with Python's built-in subprocess.Popen.

ret = shell_exe("sudo /usr/bin/systemctl restart dhcpcd.service")
ret = shell_exe("sudo /usr/bin/ip address flush eth0")
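A Paramiko replacement for shell_exe could look roughly like this (untested sketch; the host address, user name, key path and the short command names are placeholders of my own):

```python
# Untested sketch: run a whitelisted command on the host over SSH.
# Host address, user and key path below are placeholders.
ALLOWED = {
    "restart-dhcp": "sudo /usr/bin/systemctl restart dhcpcd.service",
    "flush-eth0": "sudo /usr/bin/ip address flush eth0",
}

def build_command(name):
    # Map a short whitelisted name to the real shell command.
    if name not in ALLOWED:
        raise ValueError(f"command {name!r} not allowed")
    return ALLOWED[name]

def run_on_host(name, host="172.17.0.1", user="gateway",
                key="/root/.ssh/id_ed25519"):
    import paramiko  # third-party: pip install paramiko
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, key_filename=key)
    try:
        _stdin, stdout, stderr = client.exec_command(build_command(name))
        return stdout.channel.recv_exit_status(), stderr.read().decode()
    finally:
        client.close()
```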

Of course not :slight_smile: I also meant from the container.

That is the "easy way", which was implemented early in Docker, but you can use capabilities instead: Capabilities | dockerlabs

and allow only what you need.

To be honest I don't know what Paramiko supports and what it doesn't, but if you want to be more secure, you could create a custom SSH shell on the server using SSH keys and the "command" parameter in the authorized_keys file. That way you can create a single shell script which takes arguments like "restart-dhcp", and if you get anything that is not supposed to be executed through that SSH connection, you do nothing in the script. Then even if someone steals the SSH key or gets into the container through the webapp somehow, they can't get full access. This is how Git servers limit what you can do when you use an SSH connection instead of HTTPS.

OK, just for comprehension:

  • SSH Server on host with pubkey authorization (already running with my setup)
  • SSH client in my container: I just added openssh-client to my container and copied a pair of test keys into it for testing
  • I add a ~/.ssh/ to check allowed commands to be executed
  • restrict execution to allowed commands: command="~/.ssh/" [ssh-key]

That’s okay if you handle the $SSH_ORIGINAL_COMMAND in the shell script.

As far as I know, this ENV VAR is set automatically by sshd:

man sshd

The command originally supplied by the client is available in the SSH_ORIGINAL_COMMAND environment variable. Note that this option applies to shell, command or subsystem execution.

Yes, that's why I wrote that you need to handle it, not "set it" :slight_smile: Maybe "handle it" was not specific enough. The point is that you can add it as a parameter

command="~/.ssh/ $SSH_ORIGINAL_COMMAND"

and use this command from the container: ssh <user>@<hostname> restart-dhcp, then use "case" in the shell script to decide what you actually need to run. Or you can refer to the variable directly in the shell script, but then you can't run the script without SSH unless you set the variable manually. All solutions are correct; it depends on what you prefer.
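The "case" variant could be sketched like this (the script name allowed_commands.sh, the key options and the two command names are hypothetical examples, not from your setup):

```shell
# Hypothetical wrapper, installed on the host, e.g. ~/.ssh/allowed_commands.sh,
# and referenced from authorized_keys like:
#   command="~/.ssh/allowed_commands.sh",no-pty,no-port-forwarding ssh-ed25519 AAAA...
# sshd puts the client's requested command into SSH_ORIGINAL_COMMAND.

dispatch() {
    case "$1" in
        restart-dhcp)
            sudo /usr/bin/systemctl restart dhcpcd.service
            ;;
        flush-eth0)
            sudo /usr/bin/ip address flush eth0
            ;;
        *)
            echo "command not allowed" >&2
            return 1
            ;;
    esac
}

# Last line of the real script would be:
#   dispatch "${SSH_ORIGINAL_COMMAND:-}"
```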

It works like a charm.
Now I will try to make this approach more secure:

  • on the host I will add allowed commands. There is support for this by adding a ".onlyrules" file
  • in the container:
    • I disable privileged mode
    • I enable capabilities (need to dig a bit deeper into the concept)
    • does it make sense to run this container as a specific user other than "root" (user namespace feature)?
    • is it possible to log into a python:3.10-slim container / container shell securely?
    • can the communication between containers be encrypted or even use TLS?