Hello everyone. I’m new to Docker and I’m having trouble getting a container I made to persist data. I’ve written a Dockerfile that installs software from a GitHub repository. My goal is to have the datasets that are downloaded to a folder inside the container linked to a folder in the user’s home directory, so the user won’t have to download the same data repeatedly.
I know there are ways to do this with volumes and bind mounts, but I haven’t been able to get them working. Ideally, I would like to add a command to the Dockerfile so that the container comes already set up to use the folder on the host OS.
You should write a shell script (or something similar) that downloads the data into the user’s home directory, and leave Docker out of this task entirely.
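A minimal sketch of such a script, assuming a hypothetical `DATA_URL` and dataset file names (adjust both for your actual project). It downloads into a directory under the user’s home and skips any file that is already present, which is exactly the "don’t re-download" behavior you were after:

```shell
#!/bin/sh
# Download datasets into the user's home directory, skipping files
# that already exist. DATA_URL and the file names passed to fetch
# are placeholders for this sketch.
set -eu

DATA_DIR="${DATA_DIR:-$HOME/datasets}"
DATA_URL="${DATA_URL:-https://example.com/datasets}"   # hypothetical URL

mkdir -p "$DATA_DIR"

fetch() {
    # $1 is the dataset file name, e.g. "samples.csv".
    if [ -f "$DATA_DIR/$1" ]; then
        echo "already have $1, skipping"
    else
        curl -fL -o "$DATA_DIR/$1" "$DATA_URL/$1"
    fi
}

# Call fetch once per dataset file, e.g.:
#   fetch samples.csv
```

Because the script runs directly as the invoking user, the files land in that user’s home directory with the right ownership, with no bind mounts or user remapping needed.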
Anyone who can run Docker commands has unrestricted root access on the system. Using Docker to package simple maintenance scripts on a multi-user system is a big security problem, and it’ll often be much easier to just run the script than to wrangle with Docker command-line options and bind mounts and user remapping.
If the goal of your process is to affect something on the host system (download content into the filesystem, set up network interfaces, manage hardware devices, …) then it’s best to not have a layer like Docker in between that tries hard to hide these details from you.
This is, by design, impossible: an image must be able to run on any system regardless of what’s on the host, so nothing in a Dockerfile can reference a path on the host filesystem. Host directories are only attached at run time, via `docker run -v` or `--mount`.
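For completeness, if you do stick with Docker, the host-to-container mapping belongs on the `docker run` command line, not in the Dockerfile. A sketch, assuming a hypothetical image name `myimage` and a container-side data directory `/data`:

```shell
# Bind-mount the host's ~/datasets into the container at /data.
# "myimage" and both paths are placeholders.
docker run --rm \
  -v "$HOME/datasets:/data" \
  myimage
```

Anything the containerized software writes under `/data` then appears in `~/datasets` on the host, though you may still need to deal with file-ownership mismatches between the container user and the host user.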