File access in mounted volumes extremely slow, CPU bound

This project was excellent for Boot2Docker; maybe it can be used with Docker for Mac?

Guys, maybe you are missing the point of OSXFS: it was developed mainly to overcome the limitation of network filesystems that don’t support propagation of filesystem events.

I have to agree with everyone else above, as it stands Volumes are far too slow for our PHP app development so I’ve had to put Docker back on the shelf for the time being in the hope that things will improve in the future.

You can just use NFS, which works like a charm, but it does not support the propagation of fs events. I can live without them since we are doing backend development.

I don’t want to flood this thread with off-topic messages, but is the NFS setup with Docker for Mac the same as with Docker Machine, do you know? Or can I just set the volume driver to NFS and I’m good to go?

@paolomainardi How about putting a NFS tutorial together? I’d help with that.

you are missing the point of OSXFS, it was developed mainly to overcome the limitation of network filesystems that don’t support propagation of filesystem events.

I’d strongly argue that inotify propagation without usable filesystem performance is wasted utility if it can’t be used practically.

This project was excellent for Boot2Docker; maybe it can be used with Docker for Mac?

This and other options have been discussed earlier in the thread. For this solution in particular, the downfall is that syncing is one-way. Applications that modify the “shared” volume don’t have their changes pushed back to the host filesystem. For example, running npm install for a node application doesn’t save modules between container runs.
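A partial workaround for generated directories like node_modules is a named volume, which keeps the files inside the VM between container runs (though still without pushing them back to the host filesystem). A minimal compose sketch, with hypothetical image and service names:

```yaml
version: "2"
services:
  web:
    image: node:6
    volumes:
      - .:/app                            # source code, shared from the host
      - node_modules:/app/node_modules    # container-side, survives restarts
volumes:
  node_modules:
```

The trade-off is the same as with one-way syncing: anything written into the named volume stays invisible on the host.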

It surprises me to see the current iteration in the release-candidate phase, as a huge number of use cases simply aren’t supported because of the filesystem performance problem at the moment. I don’t mean to sound like a broken record here, but this problem has seen numerous attempted solutions in the past and still hasn’t been cracked. I’m not sure this is something the (very talented) Docker team can whip up in a few months.

The decades-old NFS, though developed well before inotify/fsevents (which appeared only in the last 10 years), is a kernel-level feature and extraordinarily performant. The only thing missing from that solution is the propagation of inotify/fsevents, and that is a very solvable problem.

Every day, my team uses dlite (docker implementation using NFS on xhyve) combined with my fork of fsevents-to-vm (fsevents/inotify forwarding daemon) for an extremely responsive docker-on-mac experience with full support for file change events. If osxfs were pivoted to be a more-native fsevents daemon on top of NFS (perhaps without using SSH, to reduce latency), these usability problems would be solved in the short term, and the world would have a much more user-friendly and maintenance-free version of that workflow.
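For anyone curious what such a forwarding daemon looks like, here is a stripped-down sketch (not the actual fsevents-to-vm code; the fswatch tool and the machine name "default" are assumptions). It watches the host tree with fswatch and re-touches each changed file inside the VM so inotify watchers fire:

```shell
#!/bin/sh
# Sketch only: assumes `brew install fswatch` on the host and a
# docker-machine VM named "default". Not the real fsevents-to-vm code.

# Map a host path under host_root to the corresponding VM path.
to_vm_path() {
  host_root=$1; vm_root=$2; path=$3
  printf '%s\n' "${vm_root}${path#"$host_root"}"
}

# Watch recursively and re-touch each changed file inside the VM,
# which fires the inotify watchers of any container using that mount.
forward_events() {
  fswatch -r "$1" | while read -r changed; do
    docker-machine ssh default "touch '$(to_vm_path "$1" "$2" "$changed")'"
  done
}

# Usage (not run here): forward_events "$HOME/project" /Users/me/project
```

Running touch over ssh per event adds latency, which is why doing this natively (as the post suggests) would be a real improvement.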

Apologies for repeating myself, but at this phase with such an outstanding issue in RC1, I hope you’ll consider it!


As far as I understand, the other issue that osxfs solves is permissions. I was never able to get persistent local databases when using boot2docker due to permissions problems with NFS. Maybe this has been solved?

I agree that the performance of Docker for Mac is still too bad to be used for development.

I don’t really see why it matters what problems osxfs is or isn’t solving.
For me it’s just too slow to use, and I don’t know how to set up NFS with Docker for Mac. With docker-machine I’m using NFS; maybe we need something equivalent if performance is not among osxfs’s goals.


It’s pretty interesting to read this topic. I had the same issue, and instead of flaming around and asking for workarounds I asked myself: “why is my app so heavily dependent on disk IO?”

After some investigation and smaller changes for my app I gained a big performance boost with the beta by reducing (and avoiding) IO dependent operations which resulted in a pretty usable local environment:

  • before: average time per request ~30s
  • after: average time per request ~1s
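For anyone profiling their own app the same way, a quick way to check whether a given path is the bottleneck is a small dd helper (paths here are just examples; inside a container, compare a bind-mounted directory against a container-local one like /tmp):

```shell
#!/bin/sh
# Tiny write-throughput probe: writes ~10 MB of zeros into the given
# directory, prints dd's summary line, then cleans up the test file.
bench_write() {
  dir=$1
  dd if=/dev/zero of="$dir/ddtest.dat" bs=1024 count=10000 2>&1 | tail -n 1
  rm -f "$dir/ddtest.dat"
}

bench_write /tmp          # container-local filesystem
# bench_write /project    # bind-mounted volume, for comparison
```

On a healthy local filesystem the two numbers are close; on an osxfs mount the second one is dramatically lower, which tells you which operations are worth moving off the shared directory.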

Please also keep in mind it is still a BETA and the Docker staff are actively working on addressing this issue :slight_smile:

Shared volume performance is certainly a goal at the forefront of the Docker for Mac team’s mind. While they realize it has a long way to go in its current state, things have improved dramatically over the last few betas. At DockerCon, @dsheets (hello!) was very candid in giving an overview of how things currently work and where they are headed. I’ve found the last two betas very usable in terms of shared volume performance without any change to my apps. It still isn’t at the point where I’ll get back to LA and roll it out to the rest of my software engineers, but I’m excited to watch it improve and confident it will get there.


I’m not flaming around and I know why my app is dependent on disk IO when development mode is turned on. This is something I won’t be able to change without impairing my dev workflow.
I’m just trying to test the beta and document what issues I have.
For the time being I don’t think a workaround is bad because it would allow me to test the beta more thoroughly.


This has certainly been my approach. Our workaround (setting the Docker Machine VM to use NFS) is being used by most of the engineering team while my environments have been on Docker for Mac full-time since beta 1.13 (the version that fixed the broken Node.js builds). I can’t wait until I can get the rest of the team moved over as it will simplify the documentation I’ve put together and remove 3-4 alias requirements :slight_smile:

Do you have instructions on how to do this?

I should have written a blog post so it is easily referenced, but here are a couple of aliases I use. These assume you are using VirtualBox as your Docker Machine driver with the default machine; adjust the IP addresses below to match your own docker-machine ip and host-only network.

alias dockerUp='eval "$(docker-machine env default --shell bash)"'
# NFS server address: your host's IP on the VirtualBox host-only network (often 192.168.99.1)
alias dockerStart='docker-machine start default; docker-machine ssh default "sudo umount /Users; sudo /usr/local/etc/init.d/nfs-client start; sudo mount -t nfs -o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp 192.168.99.1:/Users /Users"; sleep 1; dockerUp'
if docker-machine ls | grep default | grep -q Running; then dockerUp; fi

Run the following command to put the proper entry in /etc/exports:

echo "/Users -mapall=`whoami`:`id -gn` `docker-machine ip default`" | sudo tee -a /etc/exports
# /etc/exports should now have a line like '/Users -mapall=mclifford:staff <docker-machine ip>'

You should now be able to start your NFS-enabled Docker Machine in a new terminal with:

dockerStart

Wow, this thread blew up again :no_mouth: I just wanted to jump in here and contribute a workaround I came up with for our Rails projects. Basically, we just create symlinks for the primary high-write locations in the project. This doesn’t solve slower read access, but in my testing it’s really write speed that’s the big problem. Anyway, here’s a brief overview of what I did. I’m still a learning-to-be-a-dev ops guy, so hopefully what I’m doing here is both clear and applicable to (at the very least) your Rails projects.

Add to your entrypoint script

# Non-Linux development machines have abysmal file write performance in shared directories. This is a workaround
mkdir -p /tmp/project/log /tmp/project/tmp
ln -s /tmp/project/log log && ln -s /tmp/project/tmp tmp

Read vs. Write performance

# Write performance to host/container shared directory
root@96b8fdf0465d:/# dd if=/dev/zero of=/project/test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB, 98 MiB) copied, 26.0395 s, 3.9 MB/s
# Read performance from host/container shared directory
root@96b8fdf0465d:/# dd if=/project/test.dat of=/dev/null bs=1M count=1024 status=progress
99614720 bytes (100 MB, 95 MiB) copied, 1.02059 s, 97.6 MB/s
97+1 records in
97+1 records out
102400000 bytes (102 MB, 98 MiB) copied, 1.16331 s, 88.0 MB/s

@alanbrent :thumbsup: However, you can achieve the same performance gains by using a volume for tmp and mounting it to your app’s tmp directory.

docker-compose pseudocode

version: "2"

services:
  app:
    image: your_co/your_rails_image
    volumes:
      - .:/app
      - tmp:/app/tmp
    command: bin/start

volumes:
  tmp:


I like that even better, thanks!


Just got back from DockerCon. Given all the talks about osxfs, I was hoping the Docker for Mac beta would be usable for Rails development now. Unfortunately, it is not.

A simple example

time rake -T

real	0m46.552s
user	0m0.009s

Running Version 1.12.0-rc2-beta16.

That’s about 100x slower than running on Ubuntu. Here’s a repo that shows the issue:

Anyway, thanks for the great conference and for your work on this.

Same for me:

$ docker version
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:35:33 2016
 OS/Arch:      darwin/amd64
 Experimental: true

 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   a7119de
 Built:        Fri Jun 17 22:09:20 2016
 OS/Arch:      linux/amd64
 Experimental: true

Not a volume at all:

/tmp # time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
real	0m 0.37s
user	0m 0.02s
sys	0m 0.35s

Mounted volume via docker-compose:

/var/www/html/var # time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
real	0m 33.31s
user	0m 0.15s
sys	0m 2.55s

100x slower!! Seriously?