The “service create” command description claims that
“… Data in named volumes can be shared between a container and the host machine, as well as between multiple containers.”
But how can a process running on the host (outside any container) access these data efficiently and safely?
Of course there is docker cp, but that would duplicate the occupied disk space. In my case, read-only access from the host side would fully suffice, and the container the data stem from is itself run in read-only mode. So it would be a pity to waste the storage; besides, some mechanism would have to repeat the copy process every time a newer image version has been pulled (upgrades are supposed to happen on a regular basis, but not on every invocation of docker run).
Another “workaround” would be to parse the output of docker inspect for mount points and issue a mount --bind command, but this also has two drawbacks. Firstly, it requires granting sudo rights for the mount command, which prospective users of the image might find unacceptable from a security perspective; secondly, a manual mount would conflict with docker run --rm=true because the volume removal would fail due to the busy mount point.
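For illustration, that second workaround would look roughly like this (the container name, the volume’s destination path, and the host directory are placeholders):

```
# Find the host-side source directory of the container's volume via docker inspect:
SRC=$(docker inspect -f '{{ range .Mounts }}{{ if eq .Destination "/container_path" }}{{ .Source }}{{ end }}{{ end }}' mycontainer)
# Bind-mount it into the host namespace and make the bind mount read-only:
sudo mount --bind "$SRC" /home/user/shared-data
sudo mount -o remount,ro,bind /home/user/shared-data
```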
So I wonder: is there an elegant and safe way to mount the data on the host for the duration of the container lifetime? Maybe there is a hidden API call that is just not exposed in the CLI yet?
Thanks in advance for any suggestions - or for accepting this as a feature request.
Sounds like you need to create a volume, mount it on the host machine, and mount it into the container as well. The process running on the host can then access the volume’s mount directory. Of course, the volume itself cannot handle concurrent access; your processes on the host and in the container need to take care of that.
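A minimal sketch of this suggestion (volume, container, and image names are placeholders):

```
# Create a named volume and mount it into the container:
docker volume create shared-data
docker run -d --name app -v shared-data:/container_path my/image
# Print the host-side directory backing the volume (usually under /var/lib/docker, so root access is needed):
docker volume inspect -f '{{ .Mountpoint }}' shared-data
```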
The host does not have the folder initially. The user pulls a certain image and starts a container. The image contains a volume where some folders should be visible to processes running on the host.
I’m afraid I can’t follow you, or I have not described my problem clearly enough.
How can a volume defined within an image suddenly become an NFS share? The image I’m speaking about is not supposed to run an NFS server inside; it’s a pure application which is perfectly happy without any extra privileges.
docker run --mount type=bind,source=/path_in,target=/container_path ... etc
or
docker run -v /path_in:/container_path ... etc
you do NOT need a VOLUME definition in the Dockerfile for this to work.
you can also do
docker volume create .....
which will create a symbolic name for the host side of the volume, and then you use THAT name as the /path_in in the above syntaxes…
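for example (the volume name, container path, and image name are made up):

```
# A named volume takes the place of the host path in the -v syntax:
docker volume create mydata
docker run -v mydata:/container_path my/image
```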
all this is in the docs, I have used -v for years… I mount a folder with a 1.3 GB tar file, untar it into the container at /opt, and then execute the app from the /opt folder… keeps the container small, and makes it easy to change the software level with a new tar file…
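roughly like this (image name, tar file, and paths are made up):

```
# Bind-mount the folder holding the tar file read-only, unpack it inside the container, then run the app:
docker run --rm -v /home/me/dist:/dist:ro my/app-image \
    sh -c 'tar -xf /dist/app.tar -C /opt && /opt/app/run.sh'
```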
Surely this is in the docs, but if it helped me I wouldn’t bother anybody in this forum. I don’t want to transfer data from the host into the container; I need the opposite flow, from the container to the host mount namespace, while actually avoiding a physical copy. This should happen on the host of somebody who has just pulled the image from DockerHub and should not need to download extra data from other sources.
unless specified otherwise, the volume is read/write for the container… with the -v method, :ro at the end protects the host from the container…
unless they explicitly use the -v or --volume option, the container does NOT have access to the host filesystem
the image can declare as many ‘volumes’ as it wants… but no auto mount will occur
if the container WRITES to the read/write volume directly (as compared to copying from another location inside the container), it all looks the same to the filesystem
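a minimal example of the :ro case mentioned above (host path, container path, and image name are made up):

```
# The host directory is visible inside the container, but read-only from the container's side:
docker run -v /home/me/exports:/container_path:ro my/image
```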
In order to write something, there must be some source data, and this source data lives in the image too, so whatever gets written becomes a second (superfluous) copy. If it were possible to let the Docker daemon bind-mount the btrfs subvolume (which is created for the image volume anyway) directly into the host namespace, there wouldn’t be any second copy. But apparently that is not supported as of now.
I assumed the app in your container processes its internal data and wants to output its results so that the host can do something with them… in that case, there is no copy.
No, it’s about various resource files shipped together with the app but meant to be used directly by other programs running on the host, e.g. for post-processing the immediate results. These resources can be quite voluminous. Another bad thing about an explicit copy is that when somebody stops using the containerized app and removes the image, they will still have a pile of useless files left behind in their home folder which have to be removed manually.
What I’m actually wondering about is that while it’s so easy to share mounted volumes between two containers (just start the second one with --volumes-from=first), there is no way to do the same with the host system. After all, the host is just another mount namespace; what’s the difference?
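For comparison, the container-to-container case is just this (image names and the volume path are placeholders):

```
# 'first' already has the volume mounted; the second container simply reuses its volumes:
docker run -d --name first my/image
docker run --rm --volumes-from=first busybox ls /container_path
```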