All NFS mounts appear as read-only no matter how I configure it

I have a system that syncs a number of services for me. Downloads land in /Downloads/Incomplete and, once processed, are moved to /Downloads/Complete for post-processing. To do this I have defined my volumes as:

    data02Complete:
        driver_opts:
            type: "nfs"
            o: "addr=expeditor-nfs,nolock,soft,rw"
            device: ":/volume1/data02/Downloads/Complete"
    data02Incomplete:
        driver_opts:
            type: "nfs"
            o: "addr=expeditor-nfs,nolock,soft,rw"
            device: ":/volume1/data02/Downloads/Incomplete"

This may not even be the ideal setup. The NFS server, expeditor, is a Synology NAS, currently on a 1 Gb connection.

The container volumes are mapped like this:

    volumes:
        - "data02Complete:/Downloads/Complete:rw"
        - "data02Incomplete:/Downloads/Incomplete:rw"

My uidgid.env file is referenced in the container definition as:

    env_file:
        - uidgid.env

And in the uidgid.env file:

    GID=1000
    UID=100
    UMASK=000
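One thing I'm not sure about: whether this image actually reads the variables under the names UID/GID at all. linuxserver.io-style images, for example, expect PUID and PGID instead, which for my values would be (the PUID/PGID naming is an assumption about the image, not something I've verified):

    # assumption: a linuxserver.io-style image that reads PUID/PGID at startup
    PUID=100
    PGID=1000
    UMASK=000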

If I exec into the container and `touch /Downloads/Incomplete`, it reports "Read-only file system", but the host node has the same filesystem mounted and I can touch it there both as root and as myself.
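To narrow it down, here is roughly what I run inside the container to tell a genuinely read-only mount apart from the server refusing writes (the paths are from my setup; `testfile` is just a throwaway name):

    # Show the kernel's mount flags for the shares: "ro" in the options
    # means the mount itself is read-only; "rw" would point at the server
    # side (e.g. Synology export permissions or squash rules) instead.
    grep Downloads /proc/mounts

    # Confirm which uid/gid the shell is actually running as.
    id

    # Reproduce the error with an explicit file create.
    touch /Downloads/Incomplete/testfile

"Read-only file system" (EROFS) on the `touch` together with `ro` in /proc/mounts would mean the mount options never took effect; "Permission denied" (EACCES) would mean an NFS permission/squash problem instead.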

Any help?