First off… Docker does not prevent you from running multiple processes. It’s not “forbidden”. In fact, you can launch (with CMD) a bash script that spawns multiple subprocesses, and all of them will run in the same container.
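As a minimal sketch of that idea (the daemon names here are just placeholders for whatever your image actually ships):

```
#!/bin/sh
# start.sh — hypothetical entrypoint referenced as CMD ["/start.sh"]
# in the Dockerfile. It spawns several processes in ONE container.
cron &                          # background daemon 1
php-fpm &                       # background daemon 2
exec nginx -g 'daemon off;'     # foreground process keeps the container alive
```

The container lives as long as the foreground (PID 1) process does, which is why the last process is `exec`’d in the foreground rather than backgrounded.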
Having said that, Docker’s philosophy is not to be a “full server”. There are other free virtualization solutions, like LXC/LXD, better suited to running “virtual servers”. Docker’s philosophy is exactly the opposite: to “isolate” one service from another.
And often, one service is one process. Not always. But often.
For example, if you have a “classic” server with apache + mysql + php-fpm + cron + several others, the typical problem arises when there is a security bug-fix you urgently need to apply: say you “need” to upgrade your system libraries to secure apache, but that system upgrade breaks mysql.
Docker comes to solve precisely that problem: run apache in its own “library environment”, mysql in its own separate set of libraries, and so on. Then, the day you need to upgrade the environment apache lives in, you can rest assured that mysql will not break. Not only that: you don’t even need to rebuild the mysql image, or stop the mysql container. You just stop, rebuild and run again the apache “part” of your system.
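That upgrade cycle might look like this (container/image names and the build directory are hypothetical; the docker CLI is shown for illustration):

```
# Patch apache without touching mysql:
docker stop nice-apache
docker rm nice-apache
docker build -t my-apache:patched ./apache    # rebuild ONLY the apache image
docker run -d --name nice-apache my-apache:patched
# cool-mysql kept running untouched the whole time.
```

Note that mysql never appears in these commands at all; its container, image and libraries are completely outside the blast radius of the apache upgrade.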
Docker works very well with its internal networking: run apache in container 1 (say the container’s name is “nice-apache”), mysql in container 2 (say its name is “cool-mysql”), whatever-other-daemon in container 3 (say its name is “fantastic-other-service”)… attach them to the same network and then just reach them by name: configure your program to access “cool-mysql” instead of “127.0.0.1” and it works.
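A sketch of that setup, assuming stock images from Docker Hub and a user-defined bridge network (names are the hypothetical ones from above):

```
docker network create my-net                      # user-defined bridge network
docker run -d --name cool-mysql  --network my-net mysql:8
docker run -d --name nice-apache --network my-net httpd:2.4
# On a user-defined network, Docker's embedded DNS resolves container names,
# so from inside nice-apache the hostname "cool-mysql" just works:
docker exec nice-apache ping -c 1 cool-mysql
```

Name resolution only works like this on user-defined networks, not on the default bridge, which is why the `docker network create` step matters.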
Since this works off the shelf, there’s no need for a mechanism to “mix up” all the things we wanted Docker to separate: each service (I’m not saying each process, I’m saying each service) runs in a separate context.
Just… instead of jumping into the container to start/stop service1, service2 and service3, do this: go to the Docker host and run container1, container2 and container3. Starting/stopping is the very same effort (go to a bash and launch something; but instead of a service within a container, a container within a host). Do not hesitate to have a host with hundreds or even thousands of containers if needed. Each container adds no CPU context-switching overhead compared to running the same workloads as native processes on the host. So if you can imagine a “server with 100 processes”, don’t hesitate to run a Docker host with 100 containers if needed.
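Concretely, the “same effort, one level up” point looks like this on the host (container names are the hypothetical ones from above):

```
# Instead of "service apache2 restart" INSIDE one big container,
# manage whole containers FROM the host:
docker start nice-apache cool-mysql fantastic-other-service
docker stop  nice-apache
docker ps --format '{{.Names}}\t{{.Status}}'    # overview of what is running
```

It is the same muscle memory as managing services, just with `docker` in front and the host as your vantage point.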
Finally, you asked for pointers…
In the event you haven’t done so, I’d suggest taking the whole “12 factor” methodology by Adam Wiggins into consideration when designing the architecture of modern systems: https://12factor.net/ - chapters VI, VIII and IX taken together may shed light on your question.
And even more pointers: when you finish that initial reading, you can follow on with the “beyond 12 factor” by Kevin Hoffman, which states 15 factors to take into consideration for modern applications: https://www.cdta.org/sites/default/files/awards/beyond_the_12-factor_app_pivotal.pdf - chapters 7, 12 and 13 can also shed light.
If you put aside the work needed to set up 8 Dockerfiles when you have 8 services… what advantage does having them all mixed up give you, compared to having them cleanly separated?