File access in mounted volumes extremely slow, CPU bound

I confirm that NFS for a Drupal/Symfony project is a :thumbsup:
The problem the Docker team is trying to solve here is the live propagation of fs events from OSX to Linux and back; that's why they chose to start with the ambitious osxfs project.


The Drupal details are a bit off topic, but drush cc all / drush cr in our current setup, using the official drush/drush:8-php5-alpine image with the web root mounted via a volume, executes in about 20 seconds. We use the opcache with enough cache to hold everything in memory.

The 20 seconds could be way better - but it is still well within what I will accept for getting all the advantages Docker gives me.

OT: drush “cc all” is mostly a matter of database tables; few files are involved, so it cannot be used as a real benchmark.

I admit that danquah is right that drush / drupal is off topic, but I still want to correct you, paolomainardi. If you do not really know the facts, please do not spread misinformation. drush cc is a full Drupal bootstrap, so it reads every single module file, makes all those 5000 file_exists calls, and does all the things that make Drupal stacks so sensitive to poor read performance.

drush cc all is a perfect benchmark. It triggers a lot of reads and also writes (caching).

@eugenmayer you are quite wrong, but this is not the place to discuss Drupal and drush. Let me just reply that the “perfect” benchmark, to me, is something that stresses only the filesystem; the clear-cache process spends 99% of its time on the database and CPU. The “perfect” benchmark would instead be a drush script that triggers only the io-wait-intensive registry rebuild plus the autoloader bootstrap and/or config file reload (D8); no database process should be involved, to have a consistent test case.

Last comment on this, @paolomainardi. You might want to understand the meaning and implications of this before you really stand by your opinion. Even though the read-heaviness differs between D7 and D8 (in favour of D7), it is still a good rule of thumb.

The only reason I am saying this is so that people do not dismiss drush cc all as a “non-useful benchmark”: it is quite OK and the results are pretty useful.

Again, drush cc all involves a lot of io-wait, a lot of database work, and a lot of network calls, so it can never be a good filesystem benchmark candidate, as simple as that.

@dsheets Glad I could help and thanks for the update.

Docker team, please HELP :slight_smile:

@dsheets Thanks for your informative post. But it didn’t mention anything about the host being CPU bound during file system access at all. I can live with osxfs file system access being somewhat slower than native, but when filesystem access locks a CPU core at 100% on the host it makes it difficult to run other applications on the host during development or other containers. Even if file system access was fast (near-native), if the host is locked at 100% to do it, that’s far from ideal.


For what it’s worth, after upgrading to the latest stable version of docker for mac, I’m still experiencing the slowness.

Can anyone else confirm?


I just tried the released “Docker for Mac” bits. Everything else is fine but I am finding that just a simple UNIX find operation on a mounted file system is 10-15 times slower. This is a showstopper for me, back to my previous solution…
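For anyone who wants to reproduce the find comparison, a minimal timing sketch follows. The mount point /code is an assumption; inside a container you would run the helper once against your osxfs-mounted path and once against a container-local path (e.g. /root), then compare the two numbers. The demo builds its own throwaway tree so it runs anywhere.

```shell
#!/bin/sh
# Time `find` over a directory tree and print elapsed wall-clock seconds.
time_find() {
  start=$(date +%s)
  find "$1" -type f > /dev/null
  echo $(( $(date +%s) - start ))
}

# Self-contained demo: create a small tree and time a walk over it.
# On Docker for Mac, substitute your mounted path (e.g. /code) here.
dir=$(mktemp -d)
for i in $(seq 1 200); do touch "$dir/file$i"; done
echo "find over $dir took $(time_find "$dir")s"
rm -rf "$dir"
```

On an osxfs mount you would expect the mounted-path number to come out roughly an order of magnitude higher than the container-local one, matching the 10-15x slowdown reported above.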


Are you using a volume in your dockerfile? When I do that, it’s incredibly slow. I’m using other solutions to fix it (docker-sync mostly)

Many of the performance improvements that you’re talking about were implemented a long time ago by NFS. While I understand that NFS certainly has its drawbacks, would it be possible to offer NFS as an alternative that users can switch to while the issues in osxfs are ironed out? Despite @eugenmayer’s assertion that NFS is too slow to be useful, I’m quite happy with it in most of my Vagrant environments. I work on large Drupal sites, so there’s some slowness, sure, but it’s certainly not intolerable, and I’d consider it fast compared to osxfs right now. No offense intended - I know you’re working on it - but that’s what I’ve observed.

More broadly, I’m curious why NFS (or some other existing/proven project) wasn’t chosen as the base to build on here. If that were the case, the only custom bits that would need building would be the event propagation from host -> vm and maybe some caching trickery to speed things up in the VM.

The biggest problem with NFS (for me) is that you don’t get fs events over the mount. You’d still need something else that can propagate those. (As your second paragraph says, when I read it again…)

Personally, I use Dinghy with great success. It uses NFS, and has a daemon that watches for events on the host and sends them into the docker VM which simulates them so containers see them.
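To make the idea concrete, here is a very rough sketch of what such an event daemon does. This is not Dinghy’s code: Dinghy subscribes to the native fsevents API on the host and replays changes inside the VM, whereas this portable toy just compares directory listings before and after a wait.

```shell
#!/bin/sh
# Crude polling-based change detector: snapshot a directory listing,
# wait, snapshot again, and report whether anything changed in between.
# Real tools (Dinghy, docker-sync) use fsevents/inotify instead of
# polling, then forward the event into the VM so containers see it.
watch_once() {
  before=$(ls -la "$1" | md5sum)
  sleep 1
  after=$(ls -la "$1" | md5sum)
  if [ "$before" != "$after" ]; then
    echo "changed"
  else
    echo "unchanged"
  fi
}

# Demo: touch a new file while the watcher is inside its wait window.
d=$(mktemp -d)
( sleep 0.3; touch "$d/newfile" ) &
watch_once "$d"
wait
rm -rf "$d"
```

The forwarding half (sending the detected change into the VM and simulating inotify there) is the genuinely hard part, which is why most people reach for an existing tool rather than rolling their own.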

Could you tell us more about your solution? I am testing Docker for Mac now and ended up copying to a tmp folder to build my project, which is annoying :slight_smile:

For people looking for a solution for shares, no matter whether it is osxfs or something else (as long as it works), I created this discussion/solution here: Alternatives to OSXFS / performant shares under OSX - so this topic can stay on “when/how to fix osxfs specifically”. @olat this is also for you (docker-sync).

Hopefully this is the last attempt needed to keep it on topic; I am aware that I am not free of guilt here, sorry.


Regarding easily repeatable performance testing, I found a pretty simple case that demonstrates the large performance gap.

In general, virtualbox volume mounts were about 4x faster than Docker for Mac. We consistently saw around 20 MB/s write throughput under virtualbox but around 4.5 MB/s using Docker for Mac.

############# Docker for Mac

Current Docker Engine Version

→ docker -v
Docker version 1.12.0, build 8eab29e

Run default ubuntu:14.04 container

docker run --rm -v /tmp:/code -it ubuntu:14.04.4 bash

Volume mount from /tmp on macbook pro to /code in container. Seeing 4.5 MB/s!!

root@0da16bd185e9:/code# pwd
/code
root@0da16bd185e9:/code# time dd if=/dev/zero of=test.dat bs=1024 count=100000

100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 22.5756 s, 4.5 MB/s

real 0m22.587s
user 0m0.100s
sys 0m1.060s

Writing the same file in container


root@0da16bd185e9:/code# cd ~
root@0da16bd185e9:~# pwd
/root
root@0da16bd185e9:~# time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 0.24733 s, 414 MB/s

real 0m0.249s
user 0m0.020s
sys 0m0.250s

########## Falling back to docker toolbox with virtualbox VM

root@93391eaa7a20:/code# time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 4.91868 s, 20.8 MB/s

real 0m4.923s
user 0m0.000s
sys 0m2.380s

Not great, but certainly a lot faster.
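The dd runs above are easy to wrap into a small repeatable helper. This is just a convenience sketch; the target directory is whatever you pass in (a volume-mounted path such as /code versus a container-local one such as /tmp), and the dd invocation is exactly the one used in the transcripts.

```shell
#!/bin/sh
# Write 100 MB in 1 KiB blocks into the given directory, print dd's
# throughput summary line, and clean up the test file afterwards.
# Run it once against a mounted directory (e.g. /code) and once against
# a container-local directory (e.g. /tmp) to compare the two numbers.
bench_write() {
  dd if=/dev/zero of="$1/test.dat" bs=1024 count=100000 2>&1 | tail -n 1
  rm -f "$1/test.dat"
}

bench_write "${1:-/tmp}"
```

Note that bs=1024 deliberately uses small blocks, which amplifies per-operation overhead; a larger block size (e.g. bs=1M) would narrow the gap and measure raw throughput instead.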

While various suggestions have been made for alternative network filesystems or file syncing, has anyone considered the possibility of syncing/sharing at the block level rather than the filesystem level? If you imagine the host and the docker vm both as devices accessing shared storage and you treat the shared storage as a block device (think partition mount) then maybe solutions such as GFS2 would work. No idea how the block device mounting would work but I thought I’d mention it :slight_smile:


I’ve gotten around this by setting up a mirror folder and syncing from my volume to it using Unison.

Not a full working example, I’ve just pulled out the relevant parts. Here it syncs from /var/www/mirror to /var/www/html

Example docker-compose.yml:

web:
  build: ./docker/web
  ports:
    - "80"
  volumes:
    - .:/var/www/mirror

Dockerfile for web:

FROM php:5.6.20-apache

RUN apt-get update && apt-get install -y \
        supervisor

RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
RUN mkdir -p /var/www/html

RUN mkdir -p /root/unison
COPY unison-2.48.4 /root/unison
WORKDIR /root/unison
RUN  make UISTYLE=text

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

VOLUME /var/www/html

CMD ["/usr/bin/supervisord"]



Relevant parts of supervisord.conf:

[program:unison]
command=/bin/bash -c "cd /root/unison && ./unison /var/www/mirror /var/www/html -auto -batch -repeat=watch -retry=5 -ignore=\"Name {.git,*.swp}\""

[program:apache2]
command=/bin/bash -c "apache2-foreground"