The `service create` command description claims that
“… Data in named volumes can be shared between a container and the host machine, as well as between multiple containers.”
But how can a process running on the host (outside any container) access these data efficiently and safely?
Surely, there is `docker cp`, but this would duplicate the amount of occupied disk space. In my case, read-only access from the host side would fully suffice, and the container the data stem from is itself run in read-only mode, so it would be a pity to waste the storage. Besides, some mechanism would have to take care of repeating the copy process every time a newer image version has been pulled (the upgrades are supposed to happen on a regular basis, but not on every invocation of `docker run`).
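For reference, the copy-based approach I mean would look roughly like this (the volume name `mydata`, the image name `myimage`, and the paths are made up):

```sh
# Create a throwaway container so the volume contents can be copied out:
docker create --name tmp -v mydata:/data myimage
# Copy the data to the host -- this is the step that duplicates disk space:
docker cp tmp:/data ./data-copy
docker rm tmp
```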
Another “workaround” would be to parse the output of `docker inspect` for mount points and issue a `mount --bind` command, but this also has two drawbacks. Firstly, it requires a sudo allowance extended to the mount command, which might be unacceptable for prospective users of the image from a security perspective; secondly, a manual mount would conflict with `docker run --rm=true`, because the volume removal would fail due to the busy mount point.
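Spelled out, that workaround would be something like the following sketch (again with made-up names; `mydata` is the named volume):

```sh
# Ask Docker where the named volume lives on the host:
MP=$(docker volume inspect --format '{{ .Mountpoint }}' mydata)
# Bind-mount it, then remount read-only -- both steps need sudo,
# and the active mount blocks volume removal on container cleanup:
sudo mkdir -p /mnt/mydata
sudo mount --bind "$MP" /mnt/mydata
sudo mount -o remount,ro,bind /mnt/mydata
```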
So I wonder: is there an elegant and safe way to mount the data during the container's lifetime? Maybe there is a secret API call that just isn't attached to the CLI yet?
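For completeness, the closest I could find in the existing remote API is plain volume inspection, which only reveals the mountpoint but does not perform any mount (volume name made up):

```sh
# GET /volumes/{name} on the Engine API returns the volume metadata,
# including its Mountpoint on the host:
curl --unix-socket /var/run/docker.sock http://localhost/volumes/mydata
```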
Thanks in advance for any suggestions - or for accepting this as a feature request.