I’m currently integrating a CIFS-based NAS into my Docker environment and have encountered two primary methods for mounting:
Bind Mounts: Mounting the CIFS share on the host system and then using a bind mount to link it to the Docker container.
Docker Volumes: Directly configuring the CIFS share as a Docker volume using the local driver with appropriate options. (A rough sketch of both approaches follows below.)
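For concreteness, here is roughly what I mean by each; the share address //nas.local/share, the credentials, and the mount point /mnt/cifs/myremote are placeholders for my actual setup:

```bash
# 1) Bind mount: mount the CIFS share on the host first ...
sudo mount -t cifs //nas.local/share /mnt/cifs/myremote \
  -o username=user,password=secret,vers=3.0
# ... then bind the host path into the container
docker run -d -v /mnt/cifs/myremote:/data my-app

# 2) Docker volume: let the local driver mount the share itself
docker volume create --driver local \
  --opt type=cifs \
  --opt device=//nas.local/share \
  --opt o=addr=nas.local,username=user,password=secret,vers=3.0 \
  nas-volume
docker run -d -v nas-volume:/data my-app
```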
I’m seeking insights on the following aspects:
Resilience: Which method offers better handling of temporary network disruptions or NAS unavailability? Specifically, how does each approach manage reconnections or maintain data integrity during such events?
Performance: Are there notable differences in I/O performance between the two methods? For instance, does one approach introduce more latency or overhead compared to the other?
I would appreciate experiences, benchmarks, or references that could guide me in choosing the most efficient and robust method for this setup.
Both approaches use the same mechanism under the hood:
When you use a named volume backed by a CIFS remote share, it is mounted when the first container using it starts and unmounted when the last container using it stops. It uses the OS's mount command, so everything behaves the same; the only differences are the point in time when the CIFS share is mounted/unmounted and the location it is mounted to.
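You can watch this on the host. A minimal sketch, reusing the nas-volume placeholder from your post (mount -t cifs simply lists all CIFS mounts):

```bash
mount -t cifs        # nothing listed: the volume is not mounted yet

# The first container using the volume triggers the mount ...
docker run -d --name c1 -v nas-volume:/data alpine sleep 86400
mount -t cifs        # //nas.local/share now appears, mounted under
                     # /var/lib/docker/volumes/nas-volume/_data

# ... and removing the last container using it unmounts the share
docker rm -f c1
mount -t cifs        # nothing listed again
```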
Thanks for the explanation @meyay. So, in terms of stability, both approaches should behave the same way, right?
For example, if the NAS temporarily disconnects (due to an update or any other reason), I understand that with a bind mount, the system would continue writing to the physical directory on the host. However, in the case of a Docker volume backed by CIFS, would it behave the same way, or would the container fail due to the missing volume?
Let's assume you have mounted your CIFS share to /mnt/cifs/myremote. If you bind /mnt/cifs into a container path and use mount propagation so that the container is able to see changes on the mount point, then yes, it would keep writing into the (now plain) host filesystem. In this case the container would even start if the remote share is not mounted.
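A minimal sketch of that variant; the rslave propagation flag is what lets the container observe host-side mounts and unmounts beneath /mnt/cifs:

```bash
# Bind the parent directory with slave propagation so the container
# sees the share being (un)mounted on the host
docker run -d --name c2 \
  --mount type=bind,source=/mnt/cifs,target=/data,bind-propagation=rslave \
  alpine sleep 86400

# While the share is mounted, /data/myremote shows the NAS content.
# If the host unmounts the share, /data/myremote silently becomes the
# plain (usually empty) host directory, and writes land on the host disk.
```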
Though, if you bind /mnt/cifs/myremote into the container path and the remote share gets unmounted, the process inside the container will not hang but will get an error instead. Depending on how the process deals with this error, it may just log it and carry on, or it may terminate. If it terminates, and your container was created with a restart policy other than none, the restarted container will fail to start until the remote share is available again (this is the preferred solution). This is basically the same behavior as you get with a named volume.
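A minimal sketch of that preferred setup; the image name my-app is a placeholder:

```bash
# Bind the share directory itself and let Docker retry after failures
docker run -d \
  --restart unless-stopped \
  -v /mnt/cifs/myremote:/data \
  my-app

# If the process exits because the share went away, Docker keeps trying to
# restart the container; per the behavior described above, each attempt
# fails until the share is mounted again, then the container comes back up.
```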
Note: Docker mounts host paths using their inodes. If you mount a remote share onto a folder, its inode changes. The same is true if it gets unmounted and remounted again → different inode.
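You can observe the inode change with stat, using the placeholder share and mount options from above:

```bash
stat -c 'inode: %i' /mnt/cifs/myremote   # inode of the mounted share's root
sudo umount /mnt/cifs/myremote
stat -c 'inode: %i' /mnt/cifs/myremote   # inode of the underlying host directory
sudo mount -t cifs //nas.local/share /mnt/cifs/myremote \
  -o username=user,password=secret,vers=3.0
stat -c 'inode: %i' /mnt/cifs/myremote   # after a remount: typically different again
```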