Making services connect and stay up through the docker network

TL;DR:
Current container infrastructure:

  • service_pyapp
  • service_LegApp <- this one compiles, then closes
  • service_nodeweb <- want to be able to make child-process requests to a terminal on LegApp
  • service_db

When LegApp boots it takes a while due to compilation of a lot of C, C++, and Fortran.
Everything compiles, then the container closes, leaving me unable to call the executables I just compiled.
Additionally, I want to make calls from the web server to the LegApp (which is Ubuntu based).
How do I tether these together?
Does anyone have an example where they have done this?
I'm looking for a Docker solution so that I can add more “LegApp” type services.
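
For context, a docker-compose sketch of that layout might look something like this (build paths and the db image are placeholders; on the default compose network every service can already reach the others by its service name):

# docker-compose.yml (sketch; build contexts and images are placeholders)
version: "3.8"
services:
  service_pyapp:
    build: ./pyapp
  service_LegApp:
    build: ./legapp          # Ubuntu-based image that compiles the C/C++/Fortran code
  service_nodeweb:
    build: ./nodeweb
    ports:
      - "6000:6000"          # published to the host; other services reach it at http://service_nodeweb:6000
  service_db:
    image: mcr.microsoft.com/mssql/server:2022-latest   # placeholder; whatever db image you use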

A container exits as soon as there is no foreground process. After the compilation you have to start a service that continues running.
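
For example (the build script and binary names below are placeholders), the last step of the container's command has to be something that stays in the foreground:

# sketch: compile at startup, then hand off to a long-running process
command: sh -c "./build.sh && exec ./legapp-server"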

I get that conceptually, but I don't see a clear path… I have found several ways to do this, but the issue is that I don't know which one to pick.

Methods to keep the container running really seem to fall into two categories:

  1. Call a job that will not stop on the Ubuntu container: the container just keeps running because a persistent program was called.

    Example A: write a useless log
    CMD tail -f /dev/null

  2. Service in a service: the actual service could have some kind of HTTP listener installed on it, and the ports are exposed so that the http_listener on the LegacyApp connects to the nodeweb app. To do this I would need to do one of the following (a sketch of Example A follows this list):

    Example A: install Node/Express onto the Ubuntu container, configure it to listen on a port, then pass commands using child_process or something similar
    Example B: create a microservice that boots within the Ubuntu service from the C++ side (e.g. cpprestsdk) and configure an http_listener on an IP/TCP port
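
A rough sketch of what Example A under category 2 could look like (entirely hypothetical: a tiny Express app inside the LegApp container that runs one of the compiled executables on request; port 4000 and the /opt/legapp/bin/solver path are made up):

// service-in-a-service sketch: an Express listener inside the LegApp container
// that runs a previously compiled executable via child_process.
// Port 4000 and /opt/legapp/bin/solver are placeholders.
import express from "express";
import { execFile } from "child_process";

const app = express();
app.use(express.json());

app.post("/run", (req, res) => {
  const args: string[] = req.body.args ?? [];
  execFile("/opt/legapp/bin/solver", args, (err, stdout, stderr) => {
    if (err) {
      res.status(500).json({ error: err.message, stderr });
      return;
    }
    res.json({ stdout });
  });
});

// keeping this listener in the foreground is also what keeps the container alive
app.listen(4000, () => console.log("LegApp listener on :4000"));

nodeweb could then call http://LegApp:4000/run over the compose network instead of trying to reach a terminal inside the container.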

None of these seem very clean or logical. I am sure there are others that have a similar config. I was thinking I could do something to automatically make one service connect to the other when it starts, using the docker-compose network (somehow declare this relationship in docker-compose; a rough sketch follows the two steps below):

  1. nodeweb starts and is listening internally on a port at https://nodeweb:6000
  2. LegApp starts, compiles, and connects to nodeweb as a client waiting for commands, exposing http://LegApp:4000 for the private connection, automatically tied to a running process on the service (CMD: /bin/bash)
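
A hedged guess at how that wiring could be declared (service names and ports taken from the two steps above; note that depends_on only controls start order and does not by itself keep LegApp alive):

# sketch: both services on the default compose network
services:
  nodeweb:
    build: ./nodeweb
    expose:
      - "6000"        # reachable from other services at http://nodeweb:6000
  LegApp:
    build: ./legapp
    depends_on:
      - nodeweb       # start nodeweb first; LegApp then connects to it as a client
    expose:
      - "4000"        # nodeweb can call back at http://LegApp:4000
    # something still has to stay in the foreground here after the compilation finishes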

Depends on what your service is intended to do. In most cases you compile the code when you build the image, then use the image to create a container when you need the service it contains.
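
For example (paths and the final binary name are placeholders), a Dockerfile along these lines compiles once at image build time, so a container started from it is immediately usable:

# sketch: compile when the image is built, not when the container starts
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y build-essential gfortran
WORKDIR /opt/legapp
COPY . .
RUN ./build.sh                  # placeholder for the C/C++/Fortran build
CMD ["./bin/legapp-server"]     # placeholder long-running entry point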

I think there is a gap in the system here, because I can create a container, do some dev/test iterations, and then make a service. Most of the more complex production-grade deploys require multiple services within an app, which is great because it shows how Docker can be used for a diversity of workloads: C++ compilation, testing, Python servers, Node servers, MSSQL DBs, whatever!

But during development I need to be able to map a bunch of different drives to run tests and integrate with other applications. I often use a lot of volumes which point to local data; if I have 10 volumes to mount and 3 services, it is a total pain to launch containers for those 3 services by hand. So it would be great to be able to do docker-compose up and not have it just quit.
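
To illustrate the kind of mapping I mean (the host paths here are made up), this is the sort of thing that is painful as a pile of docker run -v flags but trivial in the compose file:

# sketch: local data mounted into a dev service; host paths are placeholders
services:
  service_LegApp:
    build: ./legapp
    volumes:
      - ./data/inputs:/opt/legapp/data/inputs
      - ./data/results:/opt/legapp/results
      - ./scripts:/opt/legapp/scripts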

In other words, I can run docker-compose and get a good launch, but then it closes unless a job is still running, eliminating the opportunity to do a -dit with exec /bin/bash and jump into my container for debugging and development. Does K8s help improve the ability to keep a service like this running? The other option is running a small server on the backend container, which is reasonable, but I would really like a more Docker-based solution.

A docker container is just an isolated process (sometimes incl. child processes). No process == no container. A container is not a VM.

What tekki tried to point out is that people usually use multi-stage builds to perform the build tasks in the first stage and copy the artifacts from the first stage to the next stage. Though this seems not to be what you want. But then again, even though you wrote a lot… the big picture is still unclear…
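
A rough sketch of that pattern (the build script and paths are placeholders): the heavy toolchain only exists in the builder stage, and the runtime image just carries the compiled artifacts plus a long-running process:

# multi-stage sketch: compile in a builder stage, ship only the artifacts
FROM ubuntu:22.04 AS builder
RUN apt-get update && apt-get install -y build-essential gfortran
WORKDIR /src
COPY . .
RUN ./build.sh                          # placeholder build script

FROM ubuntu:22.04
WORKDIR /opt/legapp
COPY --from=builder /src/bin/ ./bin/    # copy only the compiled executables
CMD ["./bin/legapp-server"]             # placeholder long-running process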

If you are in development and need a container to continue running you can start a dummy process in it. Add the following command to the service in your compose file:

command: ['tail', '-f', '/dev/null']

I had seen that technique before, but there is a second part that is critical which most explanations overlook:

  1. Do not set an entrypoint, because the command from your docker-compose file would then be passed to the entrypoint as arguments instead of being run directly.
  2. To log in to the container you must use:
# To enable login to the container left running
docker exec -it <my container started using docker compose up>  /bin/bash
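
If it helps, docker-compose can also exec into a service by its service name, so you don't have to look up the generated container name (the service name below is assumed from the compose file):

docker-compose exec service_LegApp /bin/bash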

@tekki thanks for the collab!