Making a service connect and stay up through a Docker network

I think there is a gap in the system here: I can create a container, do some dev/test iterations, and then turn it into a service. Most complex, production-grade deploys require multiple services within an app, which is great because it shows how Docker can handle a diversity of workloads: C++ compilation, testing, Python servers, Node servers, MSSQL databases, whatever!

But during development I need to be able to map a bunch of different drives to run tests and integrate with other applications. I often use a lot of volumes that point to local data; if I have 10 volumes to mount across 3 services, it is a total pain to launch containers for those 3 services by hand. So it would be great to be able to run docker-compose up and not have it just quit.
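For reference, here is a minimal sketch of the kind of Compose file I mean (the service names, images, and host paths are made up for illustration; the MSSQL image and its environment variables are the official ones):

```yaml
version: "3"

services:
  backend:
    image: my-backend:dev            # hypothetical dev image
    volumes:
      # local data mounted for tests -- in practice there are ~10 of these
      - ./data/input:/data/input
      - ./data/fixtures:/data/fixtures
      - ./results:/results

  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Example_Pass1"   # example value only

  web:
    image: my-web:dev                # hypothetical dev image
    ports:
      - "8080:8080"
```

With everything declared once, a single `docker-compose up` brings all three services and their mounts up together, instead of me juggling a dozen `-v` flags across separate `docker run` invocations.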

In other words, I can run docker-compose and get a clean launch, but then it exits unless a job is still running, which eliminates the opportunity to start it detached (-d) and then docker exec -it into the container with /bin/bash for debugging and development. Does K8s (Kubernetes) help with keeping a service like this running? The other option is running a small server in the backend container, which is reasonable, but I would really like a more Docker-based solution.
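For context, the closest workaround I've found is giving the dev service a no-op foreground command and allocating a TTY so Compose doesn't see it exit (a sketch, with `backend` and the image name being my hypothetical service from above):

```yaml
services:
  backend:
    image: my-backend:dev     # hypothetical dev image
    command: sleep infinity   # no-op foreground process keeps the container up
    stdin_open: true          # equivalent to docker run -i
    tty: true                 # equivalent to docker run -t
```

Then I can bring everything up detached and drop into a shell:

```
docker-compose up -d
docker-compose exec backend /bin/bash
```

Note that `sleep infinity` assumes a GNU coreutils `sleep`; on minimal images something like `tail -f /dev/null` serves the same purpose. It works, but it feels like a hack rather than a supported pattern, which is why I'm asking if there's a better way.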