I recently misconfigured one of my containers and it killed my entire computer. I can reproduce this at will:

- Set the number of download threads too high in the RDT-Client container and start downloads.
- All other containers become unresponsive.
- The Docker Engine becomes unresponsive.
- I lose VNC and SSH access to my host OS (macOS).
- Plex, running on the host OS, becomes unresponsive.

I have allocated 4 of 10 CPU cores and 6 of 16 GB RAM to Docker (as you can see in the screenshots below, both become 0/0). Isn't the entire point of using containers to isolate such issues to within the container, or at worst within Docker?

Docker loses track of CPU/RAM, and the only recovery is restarting the Docker Engine.
Since everything runs in a virtual machine, it is much harder to "kill the whole host OS", but it is not impossible if you mount system folders and the container does something dangerous. I'm not sure how setting the number of download threads too high affected the host, but I can imagine it writing to the disk intensively, so even if you had enough CPU and RAM left for the host, the disk usage could still affect it.

That is just one idea. There could be other reasons, but I'm not a macOS expert. How a virtual machine affects the host also depends on the virtualization method and how the OS manages resources.
Which virtualization method did you use? It can be changed in Docker Desktop settings.
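Independent of the virtualization method, you can also cap what a single container may consume, so one misbehaving service cannot starve the others. A minimal Docker Compose sketch (the image name is just a placeholder for however you run RDT-Client, and the limit values are examples to tune, not recommendations):

```yaml
services:
  rdt-client:
    image: rogerfar/rdtclient   # placeholder; use the image you actually run
    cpus: "2"                   # allow at most 2 CPU cores
    mem_limit: 2g               # allow at most 2 GB RAM
    # blkio_config can throttle disk writes on a Linux host; inside
    # Docker Desktop's VM the block-device path may differ, so treat
    # this as an assumption to verify before relying on it:
    # blkio_config:
    #   device_write_bps:
    #     - path: /dev/vda
    #       rate: "50mb"
```

Note that these limits apply inside Docker's Linux VM; the VM itself is still capped by whatever CPU/RAM you assigned in the Docker Desktop settings.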
It was definitely the disk write performance. I read more about the inherent file-system performance issues Docker Desktop has on macOS and Windows. I moved to OrbStack, and performance with it is much better.