Copy Directory From Container on Exit

Is there some way to copy a directory out of a container after the command finishes?

For example, if my container does a build, it would be nice to be able to copy the artifacts out after the build completes.

I know I could use -v to mount a host directory in the container, but by default those files are annoyingly written out as root.
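For illustration, this is roughly what I mean (the image name and paths are just placeholders):

docker run --rm -v "$PWD/out:/build/out" my-build-image make

The artifacts end up in ./out on the host, but since the container runs as root by default, the files come out owned by root.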

It would be nice to have an option on docker run to copy files from the container to the host after the command completes.

Thanks!

I’m not sure if you are using Docker Desktop for Linux, but I moved the topic to DockerEngine since your question has nothing to do with the Desktop.

Can you explain what your problem is with the permissions? You can set the permissions of the source directory and in most cases you need to, so I don’t see the issue.

Usually a volume is the best way. You don’t even need to work on a volume directly if you don’t want to, just move the final build to the volume.
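For example, something along these lines (the image name, paths, and build command are just placeholders):

docker run --rm -v build-artifacts:/out my-build-image sh -c 'make && cp -r dist/. /out/'

The named volume build-artifacts survives the container, so a later container can pick up the result from it.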

The alternative would be docker cp, but how would you notify the host to run it? You could do it manually, but I don’t think that is necessary.
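If you did want to go that way, a rough sketch (the container name and paths are placeholders):

docker run --name buildjob my-build-image make
docker cp buildjob:/build/out ./out
docker rm buildjob

docker cp works even on the stopped container, but, again, something on the host still has to run those last two commands after the build exits.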

I guess you are using some kind of CI/CD container (like Concourse) that works with artifacts and you want the final build to be copied out from the worker container. Your CI/CD tool should have ways to deploy the build remotely, and your host machine could be the remote.

If it is not a CI/CD tool that runs in a container and you just want to run your own custom build in a container, you could create a Dockerfile and use that for the build. The docker build command supports the --output option to save the built files to the host instead of creating an image.
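A minimal sketch of that approach (the base image, build commands, and paths are just assumptions):

# Dockerfile
FROM debian:bookworm AS build
COPY . /src
RUN make -C /src && mkdir /out && cp -r /src/dist/. /out/

FROM scratch
COPY --from=build /out /

# then on the host; the content of the final stage is written to ./artifacts
docker build --output type=local,dest=artifacts .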

Thanks for moving this to the correct forum.

We have a GUI application that calls our program. Our program generates a bunch of files that are read back into the GUI.

We would like to containerize our application. If we mount the output directories, then the files are written out as root. This is different from the native behavior, where they would be written with the user’s id (and group).

Since the files are written as root, a simple rm file won’t remove a file; you need to do rm -f file.

It’s too bad docker run doesn’t have an --output option, because that is exactly what I’m looking for. Using docker build isn’t great because multiple users may be using the image.

I did figure out a hacky solution (aside from rootless Docker). I wrap the call to docker run in a script; that script passes the current user’s user and group id to an entrypoint. That entrypoint creates a user in the container with the same user and group id, then executes the command in the container as that user, so the files are written out with the proper user and group id. But, you know, a bit hacky.
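In rough outline it looks something like this (a simplified sketch, not the exact script; the image name is a placeholder, and it assumes a Debian/Ubuntu-based image where useradd and setpriv are available):

#!/bin/sh
# entrypoint.sh: create a user matching the host UID/GID and run the command as it
set -e
groupadd -g "$HOST_GID" hostgroup 2>/dev/null || true
useradd -u "$HOST_UID" -g "$HOST_GID" -M hostuser 2>/dev/null || true
exec setpriv --reuid "$HOST_UID" --regid "$HOST_GID" --clear-groups "$@"

# wrapper script on the host
docker run --rm -e HOST_UID="$(id -u)" -e HOST_GID="$(id -g)" -v "$PWD/out:/out" my-build-image mybuild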

Docker was designed for isolation. There are other container technologies, like Singularity CE, that have minimal isolation by default, so your user in the container would be the same as outside. Although it was designed for supercomputers, you can convert Docker images to Singularity images if the application supports how Singularity CE works. Singularity CE even mounts your home folder by default.

If you use Docker, you need to set the user id and optionally group IDs for the container.

docker run --rm -it --user "$(id -u):$(id -g)" --group-add $(id -G | sed 's/ / --group-add /g') bash id

Possibly you wrote something similar though.

You don’t necessarily need to create a user with that ID in the image or container, unless the application needs to work with the home folder of the user or any other property of the user, even if that is just the username itself.

I don’t see how an --output option would be used for a container. The option for docker build basically copies the entire filesystem out of the Docker image. It is useful when you copy a single binary or other kind of data to a build stage that is “FROM scratch”, but it wouldn’t be useful for a container unless you could also set the folder you want to copy out, and that is basically what a bind-mounted folder already gives you, so it wouldn’t help.

You can still try to suggest a new feature in the roadmap explaining exactly what you would like to see: Issues · docker/roadmap · GitHub

Until then, depending on what files you need to generate, you could also use the standard output stream to write data and redirect it to a file.

docker run ... imagename appname > mydata.zip

It helps only if you run the container every time you need to generate a file, but not when you run the container in the background indefinitely.

As a last note, I would add that not all applications can or should be containerized without changing the application itself.

I hope at least some of my suggestions help you.