File access in mounted volumes extremely slow, CPU bound

(Submitted this via email, but since reports are happening on the forum now, I’m sharing here as well!)

Expected behavior

File access in volumes should be comparable to access times outside of volumes, similar to Linux installations of Docker, or Docker on Mac via docker-machine and VirtualBox.

Actual behavior

File access in volumes is many times slower than on non-volumes.


OS X: version 10.11.4 (build: 15E65) version v1.10.3-beta5
Running diagnostic tests:
[OK]      docker-cli
[OK]      Moby booted
[OK]      driver.amd64-linux
[OK]      vmnetd
[OK]      lofs
[OK]      osxfs
[OK]      db
[OK]      slirp
[OK]      menubar
[OK]      environment
[OK]      Docker
[OK]      VT-x
Docker logs are being collected into /tmp/20160330-152200.tar.gz.
Your unique id in bugsnag is: 4D76F500-A6CA-45D2-B18A-ABFFBF17071E
Please quote this in all correspondence.

See steps to reproduce for the simplest use case. The impact: rather than building a new container via docker-compose for testing, my team's workflow simply runs the node:4 container with the current working directory mounted as a volume and runs tests there. Executing common JS build tools like eslint, babel, and istanbul takes an infeasibly long time compared to how quickly they ran using docker-machine over VirtualBox.

Steps to reproduce the behavior

  • Get on the commandline of a lightweight docker container, and mount a volume:
docker run --rm -it -v `pwd`:`pwd` -w `pwd` alpine /bin/sh
  • Write a few MB of data to a file on the volume, time it:
time dd if=/dev/zero of=test.dat bs=1024 count=100000
  • Notice the time is ~15 seconds, and during the operation the CPU usage of the docker process is 100%, for roughly 100MB of data (1024-byte blocks × 100,000)! Now cd / and run the same command. Notice the time is 0.19 seconds with no measurable CPU spike.
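The two timing steps above can be run back to back in one throwaway container. This is a sketch, assuming the docker CLI is on PATH and can pull the alpine image; /mnt is an arbitrary mount point, not part of the original report:

```shell
# Time the same write on the bind-mounted host directory and on the
# container-local filesystem, back to back, in a single throwaway container.
docker run --rm -v "$PWD":/mnt -w /mnt alpine /bin/sh -c '
  echo "--- bind-mounted volume (/mnt):"
  time dd if=/dev/zero of=test.dat bs=1024 count=100000 2>/dev/null
  rm -f test.dat
  echo "--- container filesystem (/tmp):"
  cd /tmp
  time dd if=/dev/zero of=test.dat bs=1024 count=100000 2>/dev/null
'
```

The gap between the two timings isolates the volume-sharing overhead from the container's own disk speed.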

We are aware that file sharing is significantly slower than it needs to be to provide a native-like experience and are approaching the volume mount performance issue from multiple angles. We hope to have major performance improvements in the coming beta releases. Watch the changelog for news!

I recommend comparing our filesystem sharing performance to VirtualBox's and VMware Fusion's performance, as all of these applications are performing approximately the same actions to achieve the result. Additionally, I recommend using a larger block size than 1024 bytes, as each block must traverse the hypervisor and file system daemon and then be acknowledged before the next block can be written. Right now, we have a block limitation of a little under 32k, which we are currently working to increase. We are also working on improving file system access latency.
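To see the block-size effect described above, the same ~100MB write can be repeated with a larger bs. A sketch; 16k is an arbitrary choice that stays under the just-under-32k per-request limit mentioned:

```shell
# Same ~100MB total, two block sizes. Larger blocks mean fewer round-trips
# through the hypervisor and file system daemon per byte written, so the
# second run should be noticeably slower on an osxfs-backed volume.
time dd if=/dev/zero of=test.dat bs=16k  count=6250    # 102,400,000 bytes
time dd if=/dev/zero of=test.dat bs=1024 count=100000  # 102,400,000 bytes
rm -f test.dat
```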

Thanks for your feedback on the Docker for Mac Beta!


Makes sense, thanks! Looking forward to the volume updates. To be clear, though, my initial comparison was against VirtualBox; I was actively using that setup as my docker-machine-driven dev environment before uninstalling and switching to Docker for Mac, and the speed difference was many multiples slower. I just shared the comparison I did because “Step 3: Uninstall Docker for Mac Beta and install docker-machine” seemed a little daunting :wink: But it sounds like you guys have a lock on it!

Thanks for the killer product :slight_smile:

I am seeing this issue, as well. Docker Machine backed by VirtualBox had the same problem for my team, but Docker Machine backed by VMware Fusion did not.

I don’t have any sort of metrics to provide at the moment, but the use case we’re using to test this is loading our company’s Rails application in development mode with the entire app mounted into the container as a volume. With VMware Fusion, it loads as fast as it does running natively on OS X. With VirtualBox and Docker for Mac, it takes several minutes to load the page and uses ~100% CPU. Looking in the Rails logs as it’s happening, I notice that asset generation seems to be the slow part. That is, each line in the following log snippet has a ~1 second delay after it before the next line appears:

rails_1 | Started GET "/assets/lib/jquery.idletimer.self.js?body=1" for at 2016-03-31 22:05:23 +0000
rails_1 | Started GET "/assets/lib/jquery.rails.self.js?body=1" for at 2016-03-31 22:05:25 +0000
rails_1 | Started GET "/assets/lib/jquery.ui.widget.self.js?body=1" for at 2016-03-31 22:05:27 +0000
rails_1 | Started GET "/assets/lib/jquery.queryparams.self.js?body=1" for at 2016-03-31 22:05:29 +0000

Of course, with this kind of performance, Docker for Mac is currently unusable for us, just as Docker Machine backed by VirtualBox is. Hope this is helpful, and let me know if there’s more specific information I can provide to help diagnose.


@jimmycuadra Just for the record, what we ended up using was docker-osx-dev [1]. It allows you to work with VirtualBox, with file changes instantly rsynced to the docker host. It’s extremely convenient, since it parses your docker-compose.yml and only syncs the shared volumes. The only drawback is that it syncs in only one direction, so you will have to add some scripts around “docker cp” if you want to bring back any changes made in your containers :slight_smile:
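For anyone adopting the same setup, the reverse direction amounts to a docker cp call. The container name and both paths below are placeholders for illustration, not part of docker-osx-dev:

```shell
# Hypothetical helper: docker-osx-dev syncs host -> container only, so a file
# generated inside the container (e.g. a lockfile) must be copied back by hand.
# "myapp" and both paths are placeholders for your own container and files.
docker cp myapp:/usr/src/app/Gemfile.lock ./Gemfile.lock
```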

Looking forward to that performance improvement. Once it lands, it’s time to leave VirtualBox! :slight_smile:

Thanks for this fantastic work!



I have the same issues with my team. Our Rails app is basically unusable with the Docker for Mac beta due to the slow mounted-volume performance. I’ve worked around this issue with Docker Toolbox + VirtualBox by mounting the shared volumes via NFS. Unfortunately, that workaround is not usable for our frontend NodeJS developers, who require FS notifications.

I’m really looking forward to future updates improving the mounted-volume performance in the Mac beta.


We have the same issue when mounting our Grails app during development. It takes forever to compile. We ended up using docker-osx-dev as well.

Looking forward to the upcoming improvements to this great product.

After upgrading to beta6, this issue persists. My test suite for one of my microservices takes 100 seconds to execute under docker-machine on VirtualBox, and 238 seconds under Docker for Mac Beta. Underlying cause still seems to be CPU-bound disk access in the latter. The former does not experience this issue.

Wow, so in your case docker-machine w/ VirtualBox is faster than Docker for Mac Beta.

This is really concerning to me, as it hasn’t been solved at any level. VMware performance is decent, and VirtualBox performance is much better if you modify the underlying VM to use NFS. However, none of these is a near-native solution. Really hoping the Docker team can solve this issue and offer near-native performance without unmaintainable workarounds.


Yep, for file access alone, VirtualBox is an order of magnitude or so faster for me. Lumped with all other processing in our test harness, we’re looking at a ~2.3x slowdown.

Important note, though: osxfs is not mounting /Mac on my containers and may not be working at all. If osxfs serves volumes specified with -v on the command line, and Docker falls back to some slower method when osxfs isn’t working, then that might be my underlying issue. Shot-in-the-dark guess on my part, though.
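One way to check that guess is to look at the mount table from inside a container. A sketch; any small image with mount available (alpine here) will do, and /data is an arbitrary target path:

```shell
# Inspect the mount table from inside a throwaway container; an osxfs-backed
# bind mount shows up as type fuse.osxfs. Anything else (or no match at all)
# points at a different, possibly slower, sharing mechanism.
docker run --rm -v "$PWD":/data alpine sh -c 'mount | grep /data'
```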

I’m going to run your test from the first comment and will report my findings!

My findings are similar to yours:

docker for mac beta:

time dd if=/dev/zero of=test.dat bs=1024 count=100000
real	0m 15.72s
user	0m 0.26s
sys	0m 0.63s

docker-toolbox w/ vmware:

time dd if=/dev/zero of=test.dat bs=1024 count=100000
real	0m 8.39s
user	0m 0.11s
sys	0m 1.79s

docker-toolbox w/ vmware, /Users mounted with NFS:

time dd if=/dev/zero of=test.dat bs=1024 count=100000
real	0m 1.30s
user	0m 0.02s
sys	0m 0.10s
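Single dd runs like these can vary from run to run. A quick sketch for totaling several runs under one measurement, so the total can be divided by the run count:

```shell
# Run the benchmark three times under one `time` and divide the total by
# three for a rough average; 2>/dev/null hides dd's per-run summary.
time sh -c 'for i in 1 2 3; do
  dd if=/dev/zero of=test.dat bs=1024 count=100000 2>/dev/null
done'
rm -f test.dat
```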

Right now the only way my team can use Docker for our local Rails development without major performance issues is to use Docker Toolbox w/ VirtualBox or VMware plus NFS to mount /Users.
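For reference, the NFS workaround amounts to roughly the following. This is only a sketch of what tools like docker-machine-nfs automate; the machine name ("default"), IP addresses (the usual VirtualBox host-only defaults), and mount options are assumptions, not verified values:

```shell
# 1. On the Mac: export /Users to the VM over NFS and restart the NFS daemon.
#    192.168.99.100 is the typical default docker-machine VM address.
echo "/Users -alldirs -mapall=$(id -u):$(id -g) 192.168.99.100" \
  | sudo tee -a /etc/exports
sudo nfsd restart

# 2. Inside the VM: replace the vboxsf share with an NFS mount of the same
#    path. 192.168.99.1 is the host's address on the host-only network.
docker-machine ssh default "sudo umount /Users; \
  sudo mount -t nfs -o noacl,async 192.168.99.1:/Users /Users"
```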

Out of curiosity, do you have a /Mac folder mounted in containers started with the mac beta?

No, just the /Users volume but that is because my pwd was /Users/username.

Interesting; according to the beta docs you should have /Mac as well. If it’s not too much trouble, could you test the speed in /Users to see if that’s using a different underlying mechanism than volumes?

Oh, scratch that, if you were using my command from earlier, that’s what mounted /Users for you – that means that you don’t have osxfs working either! It might be worth it to chime in over on this thread for that issue.

Keeping this thread up to date and accurate (sorry for the repeat posts): @cliffom confirmed that these performance tests are, in fact, running over osxfs in his post in that thread. So we’re back to square 1 with the speed issue here.

I’m a little surprised that run ... -v works for you despite not having osxfuse @tomfrost. I can’t get any remote mount to work (hence the other thread).

We discovered he does have osxfuse. Our issues are just inherent to beta performance issues.

I can add an extra data point here

OS X: version 10.11.4 (build: 15E65) version v1.11.0-beta6
Running diagnostic tests:
[OK]      docker-cli
[OK]      Moby booted
[OK]      driver.amd64-linux
[OK]      vmnetd
[OK]      osxfs
[OK]      db
[OK]      slirp
[OK]      menubar
[OK]      environment
[OK]      Docker
[OK]      VT-x
Docker logs are being collected into /tmp/20160411-163541.tar.gz.
Your unique id in bugsnag is: CAE02A72-67BA-4BAA-B36D-E47B034A921B
Please quote this in all correspondence.

I have the following mount/volume defined for the container


And this is the output of mount | grep osx in the container

osxfs on /var/www/html type fuse.osxfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,max_read=32741)
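The max_read=32741 in that output is the same just-under-32k per-request ceiling mentioned earlier in the thread. A sketch for pulling it out of the mount table from inside any container with an osxfs volume:

```shell
# Extract the osxfs per-request read ceiling from the container's mount table.
mount | grep fuse.osxfs | grep -o 'max_read=[0-9]*'
```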

For the same test suite comparing execution speed for docker+osxfs vs Virtualbox+NFS (average of 10 runs):

Test suite 1:

  • virtualbox: 2538ms
  • osxfs: 5187ms

Test suite 2:

  • virtualbox: 230ms
  • osxfs: 3810ms

API endpoint response times (approximate times given)

  • Virtualbox: Sub 100ms
  • osxfs: 6000-22000 ms