We are dockerizing Bitbucket Server. The host runs RHEL with Docker Engine installed, and the container image is Alpine-based (the same base Atlassian uses in its official Docker image).
For BITBUCKET_HOME we have chosen an NFS share instead of a local folder; the NFS export is mounted at /nas/data on the host machine.
We now plan to create Docker volumes from this NFS folder and use them in the container.
My docker-compose.yml looks like this:
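Roughly what I have in mind is a named volume backed by the `local` driver's NFS support (a sketch only; the server address `nfs-server.example.com` and export path `/data` below are placeholders for our actual NAS details):

```
# Top-level volumes block in docker-compose.yml (placeholder NFS details)
volumes:
  bitbucket-home:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs-server.example.com,rw,nfsvers=4"
      device: ":/data"
```

A service would then reference `bitbucket-home` by name instead of a host path.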
```
version: '2'
services:
  bitbucket-test:
    image: privaterepo/bitbucket-ssl:5.15.1
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
    environment:
      NAS_PATH: ${nas_path}
      NAS_DOMAIN: ${nas_domain}
      NAS_LOGIN: ${nas_login}
      NAS_CREDENTIALS: ${nas_credentials}
      JDBC_DRIVER: ${jdbc_driver}
      JDBC_URL: ${jdbc_url}
      JDBC_USER: ${jdbc_user}
      JDBC_PASSWORD: ${jdbc_password}
    ports:
      - "8443:8443/tcp"
      - "7999:7999/tcp"
    volumes:
      - type: volume
        source: /nas/data
        target: /opt/bitbucket
        volume:
          nocopy: true
    labels:
      io.rancher.scheduler.affinity:host_label: bitbucket_host=true
      io.rancher.container.pull_image: always
    stdin_open: true
    tty: true
```
My question is: if we run multiple containers on the same host, nothing prevents other containers from also mounting a volume from the /nas/data folder.
How can we run multiple containers on the same host while securing the NFS folder?
Thanks in advance.
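To illustrate the concern, any other service on the same host (the `other-service` below is hypothetical) could simply bind-mount the same path and read or corrupt the Bitbucket home:

```
# Hypothetical second service in the same compose file
  other-service:
    image: alpine
    volumes:
      - /nas/data:/opt/bitbucket   # nothing stops this mount
```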