Dockerize old App - User Files Storage Management

Hi guys,

I have been working with Docker for 3 years and I love everything about containers.
I am facing a problem that I couldn't figure out.

I have an old production web server (a VPS) with PHP apps (a custom online shop, WordPress, …). Some apps are still under development (fixes, new features), and I have started dockerizing all of these apps.

I have another VPS, freshly installed with Docker and Docker Compose. I want to transfer all the dockerized apps to this server.

For the apps that are still under development, I would like to use CI/CD. The idea is to build, copy the project files into an image, and deploy/update the new version of the containerized app. Roughly like this (just a sketch, the base image and target path are examples, not my real setup):
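```dockerfile
# Sketch of the CI build step: bake the project files into the image.
# php:8.2-apache and /var/www/html are placeholder choices.
FROM php:8.2-apache
COPY . /var/www/html/
```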

I would like to understand how to manage files that are created by users (like uploaded profile images). I don't want to use an external storage service (like S3) and want to keep it all on the new server.

Is there a best practice for managing user files?

For example, do I need to create a volume just for user files (shared folders), so that when I update the Docker image, the files persist between image versions?

I hope you understand what I mean.

Thanks for your help

Nobody has an idea…?

I guess you created your topic on a day with many posts, and no one felt compelled or able to respond to your question while it was still visible in the "new topics" list.

Like you wrote in your OP: using volumes to store persistent data is the way to go.
I guess "I could persist the files between images" refers to when you re-create a container based on a new version of an image.

That's the sole purpose of volumes: when mapped into a container path, data written to that path is persisted outside the container and therefore survives a container replacement (as long as the new container mounts the volume into the same container path).
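A minimal compose sketch, assuming your app writes uploads to /var/www/html/uploads (adjust the path to wherever your app actually stores user files):

```yaml
services:
  shop:
    image: my-shop:latest              # placeholder image name
    volumes:
      - uploads:/var/www/html/uploads  # data written here survives container replacement

volumes:
  uploads:                             # docker-managed named volume
```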

If you just run a single node, you could also use a bind (= host path) instead of a volume (= Docker-managed filesystem storage). It is also possible to declare a volume that uses a bind :slight_smile:
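Such a bind-backed volume declaration could look like this (the host path /srv/uploads is just an example and must exist before the volume is first used):

```yaml
volumes:
  uploads:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/uploads   # example host path; must already exist
```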


Thanks @meyay for your reply.

I guess on a swarm cluster with multiple nodes, I have to use a shared volume between nodes.
What about performance when accessing/writing files on a shared volume?

Maybe in that case, external storage is preferable?

Thanks

Can you elaborate on what you mean by "shared volume between nodes" and by "external storage"?

You mean a Docker volume backed by a remote share like NFSv4? If this is the case, then yes.
Note: NFSv4 is recommended over NFSv3 or CIFS, as it's known to be less problematic.
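Such an NFSv4-backed volume could be declared like this (server address and export path are placeholders):

```yaml
volumes:
  uploads:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,rw"  # placeholder NFS server address
      device: ":/exports/uploads"          # placeholder export path
```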

Note: the local driver (= default) for Docker volumes creates locally managed volume declarations, regardless of whether they point to local drives or to NFS/CIFS remote shares. If you do not deploy your stacks with docker stack deploy, you will need to create the volume on each node a container can potentially be started on. Docker has no global-scoped volume driver of its own that would take care of this. Stacks, on the other hand, take care of creating the local volumes on a node when a container requiring them is started on that node for the first time; see the example below.
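For illustration, assuming the NFS volume from above is declared in your stack file:

```
# Deploy the stack; the local volume declaration is created on a node
# the first time a service task that mounts it is scheduled there.
docker stack deploy -c docker-compose.yml mystack
```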

Not sure what to say here, as it depends on factors like the access patterns of your applications, the network connectivity between the nodes and the remote share, the disks used and their performance (IOPS/transfer rate), and whether your remote share uses caches to offload file reads.

@meyay

Yes, I mean "NFS" by "shared volume between nodes" and "S3-like" by "external storage".

Thanks, it's much clearer in my mind now!

Do you have a good app for managing NFS to recommend?