NFS mount as storage for MongoDB

Hi,

I created an NFS mount point using the command below.
sudo mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync host:/ /mnt/netcool

Then I start the container using that mount as storage for Mongo:

sudo docker run --name mongo-jg -v /mnt/netcool/data/mongodb:/data/db -p 27017:27017 mongo:3.6

I get the permission-denied message below. I tried changing the owner and permissions on the directory, but it still didn't work.

chown: changing ownership of ‘/data/db’: Permission denied

I added --user mongodb to the above command and the container starts, but it fails after a minute with the error below.

$ sudo docker run --name mongo-jg -v /mnt/netcool/data/mongodb:/data/db -p 27017:27017 --user mongodb mongo:3.6

2019-05-09T12:11:19.621+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=a5be72050901

2019-05-09T12:11:19.622+0000 I CONTROL [initandlisten] db version v3.6.12

2019-05-09T12:11:19.622+0000 I CONTROL [initandlisten] git version: c2b9acad0248ca06b14ef1640734b5d0595b55f1

2019-05-09T12:11:19.622+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016

2019-05-09T12:11:19.622+0000 I CONTROL [initandlisten] allocator: tcmalloc

2019-05-09T12:11:19.622+0000 I CONTROL [initandlisten] modules: none

2019-05-09T12:11:19.622+0000 I CONTROL [initandlisten] build environment:

2019-05-09T12:11:19.622+0000 I CONTROL [initandlisten] distmod: ubuntu1604

2019-05-09T12:11:19.622+0000 I CONTROL [initandlisten] distarch: x86_64

2019-05-09T12:11:19.622+0000 I CONTROL [initandlisten] target_arch: x86_64

2019-05-09T12:11:19.622+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true } }

2019-05-09T12:11:19.632+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=63873M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),

2019-05-09T12:11:49.977+0000 E STORAGE [initandlisten] WiredTiger error (22) [1557403909:977357][1:0x7fda43e1ca40], connection: __posix_file_write, 579: /data/db/journal/WiredTigerLog.0000000002: handle-write: pwrite: failed to write 128 bytes at offset 128: Invalid argument

2019-05-09T12:11:49.977+0000 E STORAGE [initandlisten] WiredTiger error (22) [1557403909:977423][1:0x7fda43e1ca40], connection: __log_fs_write, 212: journal/WiredTigerLog.0000000002: fatal log failure: Invalid argument

2019-05-09T12:11:49.977+0000 E STORAGE [initandlisten] WiredTiger error (-31804) [1557403909:977435][1:0x7fda43e1ca40], connection: __wt_panic, 523: the process must exit and restart: WT_PANIC: WiredTiger library panic

2019-05-09T12:11:49.977+0000 F - [initandlisten] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 376

2019-05-09T12:11:49.977+0000 F - [initandlisten]

***aborting after fassert() failure
2019-05-09T12:11:49.994+0000 F - [initandlisten] Got signal: 6 (Aborted).

Can someone help me figure out what the problem could be?

Thanks,
Ahemad

You should not mount external folders into a database container; we see such problems again and again. Databases need immediate, high-speed access to their data files, and with your NFS mount the latency is almost certainly too high. For persistent data, use a named volume.
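A minimal sketch of the named-volume approach (the volume name `mongo-data` is illustrative, not from the thread):

```shell
# Create a local named volume; Docker stores its data under /var/lib/docker/volumes
docker volume create mongo-data

# Point MongoDB at the named volume instead of the NFS bind mount
docker run --name mongo-jg -v mongo-data:/data/db -p 27017:27017 mongo:3.6
```

Because the volume lives on the node's local disk, WiredTiger gets the low-latency, POSIX-compliant filesystem it expects.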

Thanks tekki for the response. We are using an NFS mount on top of HDFS, so is there any way to use a named volume in the case of HDFS?

You are correct, we are facing performance issues; sometimes it hangs and doesn't allow some operations on the server.

I am looking for best practices and the parameters to pass in the mount command to optimize performance.

Any suggestions and examples will be helpful.

Thanks,
Ahemad

Thanks,
Ahe

Your logs indicate that you have a permission problem. Make sure the uid:gid of the mongodb process matches the uid:gid of your remote share. The default uid:gid for this image is 999:999. (Overriding the IDs with the --user parameter does not work for this image.)
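For example, you could align the ownership of the exported directory with the image's default IDs on the NFS server (the export path below is an assumption; use your actual export path):

```shell
# mongo:3.6 runs mongod as uid:gid 999:999 by default, so the exported
# directory must be writable by that user. Run this on the NFS *server*:
chown -R 999:999 /export/netcool/data/mongodb

# Then verify from the client that the mount shows the same numeric ownership
ls -ln /mnt/netcool/data/mongodb
```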

Anyhow, a filesystem with high latency and without proper file locking is asking for trouble when you try to store database data on it.

The suggestion was to use local named volumes (as in: data on the disk of the node) to avoid such an unsuitable filesystem (like the one NFS provides…).

Thanks @meyay for the update.
I was able to start the container after adding the NFS mount in /etc/fstab, but after some time I could no longer list the mount path on the local machine.
Running ls /mnt/netcool doesn't return anything; it just hangs there. Not sure what the problem could be. Any suggestions?

We are getting almost 1 GB of data every 10 seconds, so loading that much data onto the local disk is a problem for us, as the machine has limited disk space. Are there any alternatives other than local storage and NFS mounts that would still let us query the data through APIs? Please advise.

Thanks,
Ahemad

The budget way is to add additional block-device capacity on your nodes: either a physical disk or an iSCSI block device.
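As a rough sketch of that option (the device name and filesystem below are assumptions; stop the Docker daemon before remounting its directories):

```shell
# Hypothetical new block device /dev/sdb attached to the node
mkfs.ext4 /dev/sdb

# Mount it where Docker keeps named volumes so new volumes land on the added disk
mkdir -p /var/lib/docker/volumes
mount /dev/sdb /var/lib/docker/volumes

# Persist the mount across reboots
echo '/dev/sdb /var/lib/docker/volumes ext4 defaults 0 2' >> /etc/fstab
```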

A more robust and future-proof approach would be to add a storage cluster (like Ceph) to your environment and use a volume plugin as a connector to the storage cluster.

I personally opted for StorageOS as the volume plugin. It uses local block-device storage and takes care of replication, making the data available wherever the container runs.
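The volume-plugin workflow generally looks like this (the driver name below is a placeholder, not a recommendation; substitute whichever plugin connects to your storage backend):

```shell
# Install a volume plugin (name is illustrative)
docker plugin install --grant-all-permissions some-vendor/volume-driver

# Create a volume backed by the plugin, then use it like any named volume
docker volume create -d some-vendor/volume-driver mongo-data
docker run --name mongo-jg -v mongo-data:/data/db -p 27017:27017 mongo:3.6
```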

If none of this is what you want: I hope someone else knows a better solution, as everyone is searching for the holy grail when it comes to "how to handle persistence".