Share folder between host and container

Hi there,

I have a web server running as a container, and I want to share a folder on my host with my container. I know about volumes, but volumes require me to restart the container to refresh the content of the folder, and I don’t want to lose files.

My concern is: is there any way to mount a folder and share it between the host and the container, so that any incoming files from the users of my web server are saved to the host directly, without needing to restart the container to refresh and without losing data?

Also, I want the folder to be accessible from both the container and the host; any update to the folder from either side (host or container) should refresh the content and be visible to the other.

Thanks

Docker for Windows/Mac?


It is a Linux operating system.

That’s the better choice 🙂

“I know” as in “I know they exist”? Because your further explanation doesn’t match my experience with volumes.

What you want can be done with a bind-mount, a volume backed by a bind-mount, or a volume backed by a remote share. All of those will cause data written inside the container to be stored in the physical target location, and allow data written to the physical target location to be read from inside the container.
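For illustration, a plain bind-mount and a volume backed by a bind-mount could look roughly like this (the paths, the volume name webdata, and the image name my-image are placeholders, not anything from your setup):

# plain bind-mount: the host path is mounted directly into the container
docker run -d -v /host/path/folder:/container/path/folder my-image

# named volume backed by a bind-mount to the same host path
docker volume create --driver local \
  --opt type=none \
  --opt device=/host/path/folder \
  --opt o=bind \
  webdata
docker run -d -v webdata:/container/path/folder my-image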

This is only true if you get the access permissions right. For this you need to make sure the UID/GID of the user executing the main process in the container is the same as the owner of the folder/files on the host.

Some images provide a feature called user mapping, where you simply set the UID and GID as environment variables and the entrypoint script changes the UID and GID of the container user accordingly. Other images run as a restricted user and expect the volume to have a specific UID/GID in order to access it…
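A rough sketch of the permission side (PUID/PGID is just a naming convention some images use; check the documentation of your image, and treat the paths and the image name my-image as placeholders):

# run the main process with your own UID/GID
docker run -d --user "$(id -u):$(id -g)" -v /host/path/folder:/container/path/folder my-image

# or, for images that support user mapping via environment variables
docker run -d -e PUID=1000 -e PGID=1000 -v /host/path/folder:/container/path/folder my-image

# alternatively, change the ownership of the host folder to match the container user
sudo chown -R 1000:1000 /host/path/folder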

Thank you for your support.
I am new to Docker, so I may not get things right. Let me explain better what I need:

shared folder:
My_host/path/folder -> container/path/folder

1- User uploads a file to the container: container/path/folder/image.png

I want this file to be stored as:
container/path/folder/image.png
and
My_host/path/folder/image.png

2- Whatever happens to the container (stop, restart, remove, …), I don’t want to lose My_host/path/folder/image.png under any circumstances.

I am not sure how this contradicts what I wrote in my first response…

Maybe you want to share how you actually create your container?

Also, when a user uploads image.png, I need the folder to be automatically updated on both the host and the container, without needing a restart.

I know, but would Docker volumes help me with that? Or do I need to look into bind mounting?

I build the container using Plesk.

It really doesn’t make sense to me to respond to someone who ignores what I write…
I am not sure if I am the right person to help you. Good luck on your journey!

Take it easy. As I told you, I am new to Docker. Good luck.

accer,

Docker has the ability to keep data on the host even across restarts and upgrades. This is called a bind mount. You can read about it here: https://docs.docker.com/storage/bind-mounts/. Following the example in the documentation, you could use the command below to bind the container’s /app directory to a subdirectory named target in the current directory where you run this command. Any data in the subdirectory is automatically available in the container at startup, and any changes made in the running container show up in your directory. I believe this is what you are looking for.
Ex:

docker run -d \
  -it \
  --name devtest \
  -v "$(pwd)"/target:/app \
  nginx:latest

Good Luck!


Without knowing how you are starting your containers it’s difficult to be specific, but what @meyay said is what you want. Perhaps an example will help:

mkdir -p /var/data/html
echo "Hello from Docker" > /var/data/html/index.html

docker run -d -p 8080:80 -v /var/data/html:/usr/share/nginx/html nginx

With the above command, any files that you place in /var/data/html on the host will show up in /usr/share/nginx/html inside the container, and vice versa. You can change files from within the container or from the host; it doesn’t matter, because both paths point to the same directory on the host. No need to restart the container. Try it and see: if you edit the /var/data/html/index.html file on the host and go to http://localhost:8080/, you will see the changes reflected in the container.
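To watch the sync work in the other direction, you can write a file from inside the running container and read it on the host. A quick sketch (replace <container> with the name or ID that docker ps shows for the nginx container above):

# write a file from inside the container...
docker exec <container> sh -c 'echo "Hello from the container" > /usr/share/nginx/html/from-container.html'

# ...and it appears on the host immediately
cat /var/data/html/from-container.html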

This is how you would do web development with Docker. You edit the files on your workstation with your favorite editor, and they get served up live inside the Docker container. I do this all the time.

Could it be that your web service (whatever it is) is caching the requests so it’s not seeing the changes?


Don’t forget that there is the docker cp command, which allows one to copy files between host and container.
A plan could be to copy the required files out to the host, then launch another instance of your image, and in its run command mount the location of the files on the host to the desired location in the container (using a different port mapping than 80, so that you can run the two containers in parallel). This will allow you to test without taking down the ‘prod’ instance of your web site.
Once testing is successful, arrange downtime to redeploy the image with the new run command, etc.
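A minimal sketch of that plan, assuming the running container is called prod-web, the image my-web-image, and the document root /var/www/html (all placeholders; adjust to your setup):

# copy the current files out of the running 'prod' container
mkdir -p /var/data/html
docker cp prod-web:/var/www/html/. /var/data/html

# launch a parallel test instance on a different host port, with the host folder mounted
docker run -d --name test-web -p 8081:80 -v /var/data/html:/var/www/html my-web-image

# test on http://localhost:8081/ while 'prod' keeps running on its original port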

I’m not sure if you have already found your solution. As far as I understood, you started a container without sharing. You already know how to share, BUT you have content in that container that you don’t want to lose.

As charliecurtis says, there’s the “docker cp” command to transfer files between containers and the host.

I’d say:

1 - Stop your container.
2 - docker cp your-container:/my/files/in/container/. /my/nice/host/folder (note the trailing /. : docker cp doesn’t expand * wildcards, so /. is used to copy the directory’s contents)
3 - Run a new container from the same image, this time mounting /my/nice/host/folder at /my/files/in/container, and there you are: the container will look much the same, and from now on the files will exist both inside and outside the container, including the old files created before the change (see the sketch below).
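Put together, the three steps could look like this (the container and image names are placeholders):

docker stop your-container
mkdir -p /my/nice/host/folder
docker cp your-container:/my/files/in/container/. /my/nice/host/folder
docker run -d --name your-new-container -v /my/nice/host/folder:/my/files/in/container your-image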

Here’s the docker copy syntax: https://docs.docker.com/engine/reference/commandline/cp/

If you feel up to it, charliecurtis’s advice of running a parallel instance on another port just “to test” is a very good point. If you are so new to Docker that you don’t feel comfortable launching two parallel instances on different ports, go straight with what I say, but do not rm the stopped container. In case of failure you can still kill the new one and restart the one you stopped.

This will involve a brief downtime: the time it takes to copy the files while the container is stopped. If you need zero downtime, let us know. I don’t like the idea of copying while the container is running, because after the copy the container could write more files, and those would be lost.

If you are happy with a “brief” downtime: stop + copy + re-run mounted. If you need 100% uptime because you have millions of users, let us know and we’ll try to help.