$ docker run --name cont3 ubuntu echo foo
foo
$ docker start -a cont3
cont3
foo
$ docker start -a cont3 echo bar
2014/11/07 19:57:22 You cannot start and attach multiple containers at once.
$
$ docker run --name mycont4 ubuntu echo one
one
$ docker exec mycont4 echo two
2014/11/10 13:24:06 Error response from daemon: Container mycont4 is not running
$ docker start mycont4
mycont4
$ docker exec mycont4 echo two
2014/11/10 13:24:24 Error response from daemon: Container mycont4 is not running
$
Ah, I think I see what is happening. In your example, the echo one command exits immediately, which stops the container.
docker exec only works with currently running containers.
If you want to take the resulting image and run another command, you will need to commit the container and start a new one from that image:
$ docker run --name mycont4 ubuntu echo one
one
$ docker commit mycont4 myimage5
015b3a3d3844b0c3638ab0e07eabd6b2ccdd1768667bc8635051488c6dcec087
$ docker run --name mycont5 myimage5 echo two
two
Since the echo example might be skewing things a bit, docker exec may still work for you. For example:
$ docker run --name mycont4 ubuntu sleep 10 # let this exit
$ docker start mycont4 # this starts sleep 10 back up in the background
mycont4
$ docker exec mycont4 ps faux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 8 0.0 0.1 15568 2112 ? R 08:11 0:00 ps faux
root 1 0.7 0.0 4348 648 ? Ss 08:11 0:00 sleep 10
Since sleep doesn't exit immediately, the container is still running when you run docker exec.
Essentially, yes, you need to make your container long-running, even if it's just waiting forever. Most times I tail -f /var/log/something to keep it running, if I don't have a service to run.
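For example, a keep-alive sketch (the container name is made up; tail -f /dev/null never exits, so the container stays up until you stop it):

$ docker run -d --name keepalive ubuntu tail -f /dev/null
$ docker exec keepalive echo two
two
$ docker stop keepalive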
In @programmerq's example above, your exec'd shell will exit whenever the container does - so in this case, when the 10 seconds are up.
This is madness. I hope in the future there will be a way to do it without any tail/sleep/commit. Or is there a good reason why this isn't possible?
Containers are indeed process-centric. Once that process exits, the container is done. Having the ability to commit is Docker's way of taking the resulting state from that process and preserving it.
This is a key difference between containers and say, a virtual machine.
Would you be willing to share a more specific use case that you are after? All the echo, tail, and sleep commands were just examples that may not illustrate a proper use case very clearly.
Data volumes in docker may be helpful for whatever your use case might be, for example.
Say I fire up a process in a container which stores some resulting data on that volume. Now, if I want to fire up another container and use that data, I just have to attach the same volume. Here's a made-up example:
$ docker run -v /host/path/to/volume:/vol ubuntu /bin/bash -c 'date | tee /vol/result.txt'
Tue Nov 11 07:07:11 UTC 2014
$ docker run -v /host/path/to/volume:/vol ubuntu /bin/bash -c 'cat /vol/result.txt'
Tue Nov 11 07:07:11 UTC 2014
mmm, ok. There is a fundamental reason why Docker containers work like this.
Containers are a set of kernel configuration "settings" that are placed on the application you are running. This means that as far as the Linux kernel is concerned, the container doesn't exist unless the container's PID 1 is running.
And for that reason, you can't docker exec into something that doesn't really exist.
A non-running container (in Docker speak) is really only a set of image layers and some configuration options that will be used when the main application runs.
Normally, we use echo, sleep, and tail as trivial examples - normally you'd run a web server as the container's main process, or an application server, or something like that.
If you want a generalised container environment that you can run anything in, make the ENTRYPOINT a script that does exec "$@" (or something like that), and then docker run --rm -it myimage mycommand.
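A minimal sketch of that entrypoint idea, assuming the script is named entrypoint.sh and the image is called myimage (both names illustrative). The passthrough is demonstrated locally here, outside Docker:

```shell
# Generic entrypoint: exec "$@" replaces the shell with whatever
# command was passed in, so that command becomes the container's PID 1.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
exec "$@"
EOF
chmod +x entrypoint.sh

# In the Dockerfile you would have something like:
#   COPY entrypoint.sh /entrypoint.sh
#   ENTRYPOINT ["/entrypoint.sh"]
# and then run: docker run --rm -it myimage mycommand

# Local demonstration of the passthrough:
./entrypoint.sh echo "hello from the passed command"
```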
This is a key difference between Docker containers and everything else. LXC/OpenVZ, FreeBSD jails, and Solaris zones are all more like VMs.
I'm trying to use Docker to contain "VMs" used for CI. Unit/functional tests are run in containers started from prebuilt Docker images. Thanks to AUFS it starts very quickly, and I get test results in less than a minute.
In my case testing is done in several separate stages, which are commands started in the container. I need to write each command's output to a separate file and check its exit status. So I have to commit after each command run, and run the next command in a new container, which causes a little inconvenience.
Indeed I could use LXC + snapshots, but I love the Dockerfile.
Another option is to run ssh server inside container.
Thank you for your answers.
One thing that some people do is run init inside a Docker container. This does work, but if you can design a system that doesn't need it, the community leans towards not running init/ssh/etc.
Here's how I would approach using Docker for CI of, say, a Python application:
My CI agent checks out the code as normal
my Dockerfile gets me an image with Python installed and my project's requirements set up (pip install -r requirements.txt)
I set up a volume for the container that is the working directory of my checked out project on the CI host.
I fire up the container as many times as I need to run my tests.
Any outputted files (like a junit-formatted unit test xml report) will stay on disk and be available to CI after the process completes and the container exits.
I still get return status information if I do not run my containers in daemon mode (don't use -d)
I still get stdout output from the processes that I run in the containers when not using -d.
I can even connect up input with the -i option if I need to pipe anything in from outside the container to the process that I run containerized.
So for a normal python project, my CI system might do the following commands (more or less):
git clone git://foo/bar/path/to/project.git
cd project
docker build -t project_image .
docker run -v /path/to/ci/checkout/project:/project project_image /bin/bash -c 'cd /project && python setup.py test'
echo $? # should be 0 when tests pass
docker run -v /path/to/ci/checkout/project:/project project_image /bin/bash -c 'cd /project && python setup.py build'
echo $? # should be 0 when build successful
ls dist/project-0.1-py2.7.egg # this should exist because of the volumes
So I ran two containers, and used volumes in this CI setup to run two commands separately.
It isn't exactly what you were after in your original question, but it is a real workflow that I have used for my CI projects in the past. I do get the advantage of fast build times thanks to using a Dockerfile and having the intermediate images cached on the CI server. I can also swap in other Dockerfiles if I want to test different things - Python 2.6, 2.7, and 3.4 for example. Maybe I throw in Jython and PyPy too.
This example of course won't work everywhere because it needs a Docker daemon - but that too can be solved using a Docker-in-Docker (dind) setup.
In Python (and I really should modify this Perl one too) you'd ADD requirements.txt /data and then get pip to install the requirements, before you ADD your other files. That way, your build won't re-download the libraries unless your requirements change.
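A sketch of that caching pattern in a Dockerfile (the base image and paths are illustrative):

FROM python:2.7
# Adding only the requirements file first means the pip install layer
# below stays cached until requirements.txt itself changes.
ADD requirements.txt /data/requirements.txt
RUN pip install -r /data/requirements.txt
# The rest of the source is added afterwards; edits here do not
# invalidate the cached pip install layer above.
ADD . /data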
Using docker build and ephemeral containers ensures that your builds are done from a fresh environment every time - whereas if you use long-running containers or VMs, you're relying on your cleanup (though really that needs testing too).
I think your sequence works because your initial container was launched with /bin/bash, which is interactive by default.
When a container launches a task that is supposed to stop, and is therefore not interactive, your way of doing things won't work: you will arrive in the middle of a process (in my example, a build), and if you stop it to do another task, the container will stop. End of story.
But, since the initial run was with bash as the command, you can start that same container again and get bash. You can even pipe commands in to stdin.
EG:
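Something like this sketch (the container name and prompt are made up, and I haven't verified the exact output):

$ docker run -it --name mybash ubuntu bash
root@abc123:/# exit
exit
$ echo 'echo hello' | docker start -a -i mybash
hello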
A specific use case for wanting to do a docker exec in a non-running container is to do maintenance on a data volume container.
docker run -v /mydata --name mydata ubuntu /bin/false
...
docker exec mydata touch /mydata/foo # doesn't work if not running :-(
As far as I can tell, once you've created and started a container with one shell, you can't then restart it with a different one. That is, the container started with /bin/false is useless.
It doesn't make sense to have a persistent daemon running just so that you can run docker exec.
Making the executable process /bin/bash and running with -i -t is not too bad. The container is then only running while you are maintaining it. You have to use a different command to maintain it (docker start -a -i), and if a second person or process wants to make changes to the container while it's running, they have to use yet another command (docker exec). It would be much simpler if docker exec were able to use a stopped container.
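So the flow described above would look roughly like this (the container name and prompt are made up):

$ docker run -v /mydata --name mydata -it ubuntu bash
root@abc123:/# touch /mydata/foo
root@abc123:/# exit
exit
$ docker start -a -i mydata # later maintenance session; bash comes back up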
Another option is to run a throwaway container every time you want to attach to the data container. In this case the data container itself could be entirely empty, as the temporary container would have the OS tools.
$ docker run --rm --volumes-from mydata -it ubuntu bash
root@645045d3cc87:/# ls /mydata
root@645045d3cc87:/# touch /mydata/foo
root@645045d3cc87:/# exit
exit
Really, this command solved my problem: docker run -it --name mycont ubuntu bash, and after that I execute: docker attach mycont
If the prompt doesn't appear after attaching, just press Ctrl+C and then the prompt will show.