Run a command in a stopped container

Let's say I ran the following command:

docker run ubuntu touch /tmp/file

And now I have a stopped container with a file in /tmp. How can I run another command in this container?

I know about commit, but I do not want to create a new image for every new command.

It would be nice to have the ability to create a named container and run commands inside it:

docker run --name mycont ubuntu bash
# do something
# exit
docker attach-to-stopped-container mycont bash
# continue your work

In the bash example, because you're starting the same program, you can do docker start -ai mycont.

But in general, no: you'd need to start the container again (which resumes from where you left off) and then docker exec something in it.
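
For example (a rough sketch; sleep 600 just stands in for a long-running main process, and the names are placeholders):

$ docker run -d --name demo ubuntu sleep 600
$ docker stop demo                    # now it's a stopped container
$ docker start demo                   # start it again; same command, same filesystem
$ docker exec demo touch /tmp/file    # works, because the container is running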


Oh no, it can only run the same command:

$ docker run --name cont3 ubuntu echo foo
foo
$ docker start -a cont3
cont3
foo
$ docker start -a cont3 echo bar
2014/11/07 19:57:22 You cannot start and attach multiple containers at once.
$

Yup, exactly; that's why I mentioned docker exec.

Sorry, but I can’t figure out how to use it:

$ docker run --name mycont4 ubuntu echo one
one
$ docker exec mycont4 echo two
2014/11/10 13:24:06 Error response from daemon: Container mycont4 is not running
$ docker start mycont4
mycont4
$ docker exec mycont4 echo two
2014/11/10 13:24:24 Error response from daemon: Container mycont4 is not running
$

Ah, I think I see what is happening. In your example, the echo one command exits immediately, which stops the container.

docker exec only works with currently running containers.
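
You can confirm the state with docker ps (a quick sketch of the check):

$ docker ps -a
# mycont4 is listed with a STATUS like "Exited (0) ..." rather than "Up ..."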

If you want to take the resulting state and run another command, you will need to commit it and start another container:

$ docker run --name mycont4 ubuntu echo one
one
$ docker commit mycont4 myimage5
015b3a3d3844b0c3638ab0e07eabd6b2ccdd1768667bc8635051488c6dcec087
$ docker run --name mycont5 myimage5 echo two
two

The echo example might be skewing things a bit; docker exec may still work for you. For example:

$ docker run --name mycont4 ubuntu sleep 10 # let this exit
$ docker start mycont4   # this starts sleep 10 back up in the background
mycont4
$ docker exec mycont4 ps faux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         8  0.0  0.1  15568  2112 ?        R    08:11   0:00 ps faux
root         1  0.7  0.0   4348   648 ?        Ss   08:11   0:00 sleep 10

Since sleep doesn't exit immediately, the container is still running when you run docker exec.

Hopefully this is helpful.


Essentially, yes, you need to make your container long-running, even if it's just waiting forever. Most times I tail -f /var/log/something to keep it running, if I don't have a service to run.

In @programmerq's example above, your exec'd shell will exit whenever the container does; so in this case, when the 10 seconds are up.
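
Here is a rough sketch of that tail trick (using tail -f /dev/null as the stay-alive process; the names are placeholders):

$ docker run -d --name longcont ubuntu tail -f /dev/null
$ docker exec longcont touch /tmp/file1   # works for as long as tail keeps PID 1 alive
$ docker exec longcont ls /tmp
file1
$ docker stop longcont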


And then, inspired by both of you: https://github.com/docker/docker/pull/9082

This is madness. I hope that in the future there will be a way to do this without any tail/sleep/commit. Or is there a good reason why this isn't possible?

Containers are indeed process-centric. Once that process exits, the container is done. Having the ability to commit is Docker's way of taking the resulting state from that process and preserving it.

This is a key difference between containers and say, a virtual machine.

Would you be willing to share a more specific use case that you are after? All the echo, tail, and sleep commands were just examples that may not illustrate a proper use case very clearly.

Data volumes in Docker may be helpful for whatever your use case might be, for example.

Say I fire up a process in a container that stores some resulting data on a volume. Now, if I want to fire up another container and use that data, I just have to attach the same volume. Here's another made-up example:

$ docker run -v /host/path/to/volume:/vol ubuntu /bin/bash -c 'date | tee /vol/result.txt'
Tue Nov 11 07:07:11 UTC 2014
$ docker run -v /host/path/to/volume:/vol ubuntu /bin/bash -c 'cat /vol/result.txt'
Tue Nov 11 07:07:11 UTC 2014

More information on volumes here: https://docs.docker.com/userguide/dockervolumes/

Mmm, OK. There is a fundamental reason why Docker containers work like this.

Containers are a set of kernel configuration 'settings' that are placed on the application you are running. This means that, as far as the Linux kernel is concerned, the container doesn't exist unless the container's PID 1 is running.

And for that reason, you can't docker exec into something that doesn't really exist.

A non-running container (in Docker speak) is really only a set of image layers and some configuration options that will be used when the main application runs.
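
You can see those stored options for yourself: docker inspect works on stopped containers too (a quick sketch, reusing the earlier mycont4):

$ docker inspect mycont4
# dumps a JSON blob with the image, the configured command, the
# environment, volumes, etc. -- all that a stopped container really is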

Normally we use echo, sleep, and tail as trivial examples; in practice, you'd run a web server as the container's main process, or an application server, or something like that.

If you want a generalised container environment that you can run anything in, make the ENTRYPOINT a script that execs its arguments (something like the sketch below), and then docker run --rm -it myimage mycommand :slight_smile:
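
For instance, a minimal sketch of such an image (the file names and base image are just placeholders):

entrypoint.sh:

#!/bin/sh
# Hand PID 1 over to whatever command was passed on the
# `docker run` command line.
exec "$@"

Dockerfile:

FROM ubuntu
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

Then:

$ docker build -t myimage .
$ docker run --rm -it myimage mycommand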

This is a key difference between Docker containers and everything else. LXC/OpenVZ, FreeBSD jails, and Solaris zones are all more like VMs.
I'm trying to use Docker to contain "VMs" used for CI. Unit/functional tests are run in containers started from prebuilt Docker images. Thanks to AUFS it starts very quickly, and I get test results in less than a minute.
In my case, testing is done in several separate stages, each of which is a command started in a container. I need to write each command's output to a separate file and check its exit status. So I have to commit after each command runs and run the next command in a new container, which causes a little inconvenience.
Indeed, I could use LXC plus snapshots, but I love the Dockerfile.
Another option is to run an ssh server inside the container.
Thank you for your answers.

Hello,

One thing that some people do is run init inside a Docker container. This does work, but if you can design a system that doesn't need it, the community leans towards not running init/ssh/etc.

Here’s how I would approach using docker for CI of say, a Python application:

  1. My CI agent checks out the code as normal
  2. my Dockerfile gets me an image with Python installed and my project's requirements set up (pip install -r requirements.txt)
  3. I set up a volume for the container that is the working directory of my checked out project on the CI host.
  4. I fire up the container as many times as I need to run my tests.
  • Any output files (like a JUnit-formatted unit-test XML report) will stay on disk and be available to CI after the process completes and the container exits.
  • I still get return status information if I do not run my containers in daemon mode (don’t use -d)
  • I still get stdout output from the processes that I run in the containers when not using -d.
  • I can even connect up input with the -i option if I need to pipe anything in from outside the container to the process that I run containerized.

So for a normal Python project, my CI system might run the following commands (more or less):

git clone git://foo/bar/path/to/project.git
cd project
docker build -t project_image .
docker run -v /path/to/ci/checkout/project:/project project_image /bin/bash -c 'cd /project && python setup.py test'
echo $? # should be 0 when tests pass
docker run -v /path/to/ci/checkout/project:/project project_image /bin/bash -c 'cd /project && python setup.py build'
echo $? # should be 0 when build successful
ls dist/project-0.1-py2.7.egg # this should exist because of the volumes

So I ran two containers, and used volumes in this CI setup to run two commands separately.

It isn't quite exactly what you were after in your original question, but it is a real workflow that I have used for my CI projects in the past. I do get the advantage of fast build times thanks to using a Dockerfile and having the intermediate images cached on the CI server. I can also swap in other Dockerfiles if I want to test different things: Python 2.6, 2.7, and 3.4, for example. Maybe I throw in Jython and PyPy too.

Hopefully this is helpful!

My CI approach is similar, but shorter.

The CI system is set to run:

docker build -t this-test https://raw.githubusercontent.com/SvenDowideit/docker-perl/master/Dockerfile

This example of course won't work everywhere, because it needs a Docker daemon; but that too can be solved using a Docker-in-Docker (dind) setup.

In Python (and I really should modify this Perl one too) you'd ADD requirements.txt /data and then get pip to install them before you ADD your other files; that way, your build won't re-download the libraries unless your requirements change.
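
Roughly like this (a sketch; the python:2.7 base image and paths are assumptions, not the actual Dockerfile linked above):

FROM python:2.7
# ADD just the requirements first; this layer (and the pip install
# below) stays cached until requirements.txt itself changes.
ADD requirements.txt /data/requirements.txt
RUN pip install -r /data/requirements.txt
# Source changes only invalidate the layers from here down, so the
# libraries aren't re-downloaded on every build.
ADD . /project
WORKDIR /project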

Using docker build and ephemeral containers will ensure that your builds are done from a fresh environment every time, whereas if you use long-running containers or VMs, you're relying on your cleanup (though really, that needs testing too).

Folks,

Sorry to join the party so late (after 251 days to be exact), but I simply could not resist! :smile:

Looks like what we want to achieve is this:

  • start a container
  • do something
  • exit
  • restart that same container
  • do something more
  • exit
  • ad infinitum

Yes?

The following sequence of commands works just great for me:

  • docker run -it --name mycont ubuntu bash
  • touch /tmp/file1
  • exit
  • docker start -ai mycont
  • touch /tmp/file2
  • exit
  • ad infinitum

Basically, the -it switches to docker run and the -ai switches to docker start resolve the issue perfectly.

This does solve the issue, no?

-Joe


Joe, thanks for that; it really brought some clarity to some of my thoughts/questions :smile: !!!

I think your sequence works because your initial container was launched with /bin/bash, which is interactive by default.
When a container launches a task that is supposed to stop, and is therefore not interactive, your way of doing things won't work: you will arrive in the middle of a process (in my example, a build), and if you stop it to do another task, the container will stop. End of story.


But since the initial run was with 'bash' as the command, you can start that same container again and get bash. You can even pipe commands in via stdin.
E.g.:

echo "/compile.sh" | docker start -ia mycont

A specific use case for wanting to do a “docker exec” in a non-running container is to do maintenance on a data volume container.

docker run -v /mydata --name mydata ubuntu /bin/false
...
docker exec mydata touch /mydata/foo   # doesn't work if not running :-(

As far as I can tell, once you’ve created and started a container with one shell, you can’t then restart it with a different one. That is, the container started with /bin/false is useless.

It doesn't make sense to have a persistent daemon running just so that you can run docker exec.

Making the executable process /bin/bash and running with -i -t is not too bad (sketched below). The container is then only running while you are maintaining it. You have to use a different command to maintain it (docker start -a -i), and if a second person or process wants to make changes to the container while it's running, they have to use a different command again (docker exec). It would be much simpler if docker exec were able to use a stopped container.
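
For the record, that flow looks roughly like this (a sketch, creating the data container with bash as its command from the start):

$ docker run -v /mydata --name mydata -it ubuntu bash
# ... touch /mydata/foo, then exit ...
$ docker start -ai mydata
# ... same container, same /mydata volume; do more maintenance, exit again ...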

Another option is to run a throwaway container every time you want to attach to the data container. In this case the data container itself could be entirely empty, as the temporary container would have the OS tools.

$ docker run --rm --volumes-from mydata -it ubuntu bash
root@645045d3cc87:/# ls /mydata
root@645045d3cc87:/# touch /mydata/foo
root@645045d3cc87:/# exit
exit

Really, this command solved my problem: docker run -it --name mycont ubuntu bash. After that I execute: docker attach mycont.
If the prompt does not appear, just press CTRL+C and then the prompt will show up.