Use the equivalent of "docker run" or "docker exec" from within a container

Issue type: best-practice advice and/or basic instructions needed, or maybe a feature request

OS: varies; assume Ubuntu 14.04

Docker version: 1.11, but upgrading soon to 1.12 for the new swarm features

This must have been asked many times, but I can’t find any earlier threads; the relevant search terms turn up too many unrelated results.

I need to be able to run apps in containers as system calls from apps running within other containers.

That is, I will have a process/app running in container A. Depending on what happens, that process might need to run an app in container B (or C, D, etc.).

The app in container B isn’t a daemon, and I’d greatly prefer that it not have to become one.

I can use rsh/ssh:

container-A: ssh B-network-id “date”

…but that seems inelegant.

What I would like is a system call something like:

containerA: docker-network-run B_image command args
or
containerA: docker-network-exec B_net-id [or B-name] command args

In the second case, I would start B initially something like:

host: docker run -itd B-image /bin/bash

…so it would already be running. But I’d really prefer ‘run’ over ‘exec’.

Is there a better way to do this than by using rsh/ssh? I dread having to keep up with keys and such.

Sounds like you’re really interested in some type of RPC, e.g. Go’s net/rpc or gRPC. That’s what I’d do: have the downstream services you need to invoke running as RPC listeners on the same docker network (so that Docker’s DNS/container-name-based service discovery will work), and invoke their methods as needed.

If RPC is overkill, you could just run plain HTTP servers instead, or pull jobs off some kind of message queue. It all depends on your project requirements (how frequently you’ll need to invoke, etc.). At any rate, Docker’s built-in container networking makes container-to-container communication so simple that it would be silly not to use the network.
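
For example, here is a minimal sketch of the plain-HTTP option in Python 3. The names are assumptions for illustration (the container name containerB, port 8080, and the date command are placeholders, not anything Docker provides): container B runs a tiny listener that executes a whitelisted command on request.

    # container B: tiny HTTP listener that runs a whitelisted command on request
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ALLOWED = {"date": ["date"]}   # map URL paths to the real commands they run

    class RunHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            cmd = ALLOWED.get(self.path.lstrip("/"))
            if cmd is None:
                self.send_error(404, "unknown command")
                return
            output = subprocess.check_output(cmd)   # run the app locally in B
            self.send_response(200)
            self.end_headers()
            self.wfile.write(output)                # return its stdout to the caller

    HTTPServer(("0.0.0.0", 8080), RunHandler).serve_forever()

From container A, a plain GET such as urlopen("http://containerB:8080/date") (or curl) then triggers the run, with the container name resolved by Docker’s built-in DNS.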

If you’re concerned about others reaching these endpoints when they shouldn’t, note that the calls are only reachable from containers on the same docker network. No ports are exposed to the outside world unless you explicitly --publish them.
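
Concretely, with placeholder names, the setup is just:

host: docker network create backend
host: docker run -d --net=backend --name containerB B-image
host: docker run -d --net=backend --name containerA A-image

With both containers on the user-defined “backend” network, container A can reach containerB by name on whatever port the listener uses, and nothing is reachable from outside unless you add --publish.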


Thanks! I’ll look into RPC. It looks like it could be useful. I had thought of using HTTP, too, so it’s nice to know that is also reasonable.

For some of the jobs, we’re using Slurm. We just make a little cluster of containers, all running the Slurm daemon, with one acting as the head node. For those specific jobs, a real cluster-ready scheduler makes sense. Plus, with swarm, they can be on different machines and still look like one big network.

And, yes, I definitely plan to make use of the docker networking. I especially like that ports can be made visible only within the network.

Again, thanks!


I’ve learned gRPC well enough to have two containers act as server and client for one of the back-end programs we use. It’s not bad to get going, and we’ll use it because it works well and is really pleasant to use. But there is a bit of setup work involved, so it would be really awesome for a setup like ours if one day Docker could do all of that automatically somehow, maybe with a list of executables and the IDs of the networked containers that serve them. I expect that wouldn’t be trivial to implement, so I’m not trying to pressure anyone, and gRPC does the job quite well.
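
In case it helps anyone later, the client side of that setup ends up looking roughly like the sketch below. The names are hypothetical, not anything standard: it assumes a runner.proto defining a Runner service with a Run(RunRequest) returns (RunReply) method (RunRequest has a command field, RunReply an output field), compiled with grpcio-tools into runner_pb2 / runner_pb2_grpc, and a server container reachable as containerB on the same docker network.

    # container A: invoke the app in container B through a Run() RPC
    # runner_pb2 / runner_pb2_grpc are generated from the hypothetical runner.proto
    import grpc
    import runner_pb2
    import runner_pb2_grpc

    channel = grpc.insecure_channel("containerB:50051")  # name resolves via docker DNS
    stub = runner_pb2_grpc.RunnerStub(channel)
    reply = stub.Run(runner_pb2.RunRequest(command="date"))
    print(reply.output)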

Some back-story: We provide online molecular modeling services. So, our back-end is a nice mess of executables of all styles, ages and levels of refinement (sometimes, I’m happy if it works at all, ref http://xkcd.com/1742/). Because of this, I really like Docker’s ability to separate competing dependencies, potentially unstable code, etc. But, the apps also have to interact automatically: think bit-flipping Rube Goldberg Machine.

Again, thanks for the tips!

PS: One thing I really like about gRPC is that I can call libraries in remote containers written in different languages. Very useful.


Do you know which component receives the gRPC request when a container-start request is made? I can’t find it by searching.

I don’t think I understand your question. Can you rephrase it?

This is my question: something about Docker’s gRPC receiver.
Can you help me?

I just barely have gRPC working, so I might not be the one to ask, but… Let me make sure I understand the question.

You have several gRPC servers. When a client makes a request, you want to find out which of the available servers responded to it. Is that correct?

This may seem obvious, but is it possible that “bbf7814a380066482e8e90baa…” is the ID of the server that responded? Docker tends to identify its containers with long IDs like that. If it isn’t that, then you’re outside my current knowledge of gRPC.

I can’t find anything that uses /types.API/CreateContainer.
Do you know which component receives the request?
I’m trying to learn how a container gets started.

I think I get it. You just need to ensure that a gRPC server is actually running. Yes?

You can’t start a docker container from within another docker container. So the server has to already be running and accepting gRPC requests.

There are various ways to do this, but in all of them you essentially have to start the container in such a way that it stays running all the time, waiting for a gRPC call.

  1. Serve gRPC from some other container that is already running a server that stays up all the time. We serve some gRPC from a container that also runs the Slurm control daemon (for submitting jobs to a cluster).

  2. Start the gRPC server in ‘blocking’ mode… I think that’s the correct term; see the link at the end of this item. Here’s a relevant quote: “Because start() does not block you may need to sleep-loop if there is nothing else for your code to do while serving.” Essentially, after starting the server, you run a loop that never ends, such as (in pseudocode) ‘while 2 equals 2 {sleep 10 seconds}’. There are certainly more elegant versions of that if you look around; a minimal sketch is just below this list. http://www.grpc.io/docs/tutorials/basic/python.html#creating-the-server
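
Here is a minimal sketch of option 2 in Python, reusing the hypothetical runner_pb2 / runner_pb2_grpc modules from my earlier post (so the service and field names are assumptions, not anything official). The sleep loop at the bottom is what the quote above is talking about:

    # container B: gRPC server that runs the requested app, kept alive forever
    import subprocess
    import time
    from concurrent import futures

    import grpc
    import runner_pb2
    import runner_pb2_grpc

    class RunnerServicer(runner_pb2_grpc.RunnerServicer):
        def Run(self, request, context):
            # run the (non-daemon) app locally and send its output back
            output = subprocess.check_output([request.command])
            return runner_pb2.RunReply(output=output.decode())

    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    runner_pb2_grpc.add_RunnerServicer_to_server(RunnerServicer(), server)
    server.add_insecure_port("[::]:50051")
    server.start()                       # start() does not block...
    try:
        while True:                      # ...so sleep-loop to keep the container alive
            time.sleep(60 * 60 * 24)
    except KeyboardInterrupt:
        server.stop(0)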

Was that what you’re looking for?

And… like I said before, I don’t know how to tell which server responds to a specific gRPC request. That would certainly be a gRPC thing, not a Docker thing.

You need to use docker to figure out which containers are running.

But the gRPC function entry point can be found, I think. That’s all I want to find.