As discussed in "File access in mounted volumes extremely slow, CPU bound", I'd still really appreciate a reproduction. One that generates a representative database from a script, so we don't have to transfer gigabytes over the net, would be lovely. In particular, with large sequential reads of large blocks you should see throughput of around 250 MB/s, which would transfer your 2 GB file in roughly 8 seconds. If you see performance significantly worse than that for this use case, then either the database software is doing something suboptimal or there is a specific problem with osxfs, and a specific problem may be addressable in the nearer term than "just make everything go faster".
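For what it's worth, here is a minimal sketch of the kind of self-contained reproduction I mean, assuming a bind-mounted path such as /data inside the container. The path, file size, and block size are placeholders I've made up for illustration, not values from your setup: it generates a representative file from a script and times large sequential block reads, so the result can be compared against the ~250 MB/s figure above.

```python
#!/usr/bin/env python3
"""Sketch of a self-contained reproduction: generate a large file in a
mounted volume, then time large sequential block reads of it.
PATH, FILE_SIZE, and BLOCK_SIZE are assumptions -- adjust to your setup."""

import os
import time

PATH = "/data/testfile.bin"      # assumed bind-mounted path inside the container
FILE_SIZE = 2 * 1024 ** 3        # 2 GiB, roughly the file size mentioned above
BLOCK_SIZE = 1024 * 1024         # 1 MiB sequential read/write blocks


def generate(path, size, block):
    """Write `size` bytes of pseudo-random data in `block`-sized chunks."""
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(block, remaining)
            f.write(os.urandom(n))
            remaining -= n


def timed_sequential_read(path, block):
    """Read the whole file sequentially and return (bytes read, seconds)."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            total += len(chunk)
    return total, time.monotonic() - start


if __name__ == "__main__":
    if not os.path.exists(PATH):
        generate(PATH, FILE_SIZE, BLOCK_SIZE)
    nbytes, elapsed = timed_sequential_read(PATH, BLOCK_SIZE)
    print(f"read {nbytes / 1024**2:.0f} MiB in {elapsed:.1f}s "
          f"({nbytes / 1024**2 / elapsed:.0f} MiB/s)")
```

Ideally run it once to create the file and again in a fresh container to time the read, so the number reflects the mount rather than data still sitting in the page cache.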
Thanks,
David