Be able to access container filesystems on disk from the OS X host
Containers’ files are stored inside the qcow2 disk image used by the Docker for Mac VM, which makes them difficult to access from the host.
I have built some containers that include Python environments, which work well for hosting my apps.
I want to point my editor, running on my Mac, at these Python environments for code linting, tab completion, etc. That requires access to the files inside the environment in the container, so the editor can see the modules installed there. However, I can’t access them because they are stored in the qcow2 copy-on-write file used by the Docker for Mac VM. My editor doesn’t run in a container, so I can’t reach them via a shared volume either.
- `docker cp` to copy the environment out of the built container. Bad because this is a manual step that has to be repeated every time the docker env changes, and it doesn’t guarantee that the env your dev tools are using is the one the container is actually running.
- Somehow mount the qcow2 file locally? This sounds like a bad idea and seems non-trivial.
- Mount a local dir at container build time that is used to store the environment, which is then used by the dev tools directly and by the container via a mount. Bad because this requires 3rd-party tools like rocker, and mounting things at build time seems to be unpopular for predictability reasons, which is fair.
- Run a network file system server within the container, or in another container with access to a shared volume, and share it over NFS / SSHFS etc. This feels pretty horrible: it adds another layer, requires mounting the FS every time the container starts, etc.
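Of these, the `docker cp` route can at least be scripted so the copy step is repeatable after every rebuild. A minimal sketch, where the image name `myapp`, the in-container path `/opt/venv`, and the `DRY_RUN` helper are all hypothetical placeholders to adapt; `DRY_RUN` defaults to 1 here so the script only prints the commands it would run, set `DRY_RUN=0` to actually execute them:

```shell
# Abort on the first failed command.
set -e

IMAGE="myapp"          # hypothetical image name
ENV_PATH="/opt/venv"   # hypothetical path of the Python env inside the image
DEST="./venv-copy"     # where to place the copy on the Mac host
CID="venv-export-tmp"  # throwaway container name

# Print commands instead of running them unless DRY_RUN=0 is set.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

# Create a stopped container from the image, copy the environment out of
# its filesystem, then remove the container again.
run docker create --name "$CID" "$IMAGE"
run docker cp "$CID:$ENV_PATH" "$DEST"
run docker rm "$CID"
```

This doesn’t fix the staleness problem (the copy still drifts from the container after a rebuild), but wiring it into the build step makes the manual part go away.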
Do I have any options other than these? If not, is there any work planned to allow native file access to container filesystems from the host, for example a ‘reverse mount’ option at container runtime, or moving away from the qcow2 file and just storing files on disk?