I confirm that NFS for a Drupal/Symfony project is a no-go.
The problem the Docker team is trying to solve here is live propagation of fs events from OSX to Linux and back; that's why they chose to start with the ambitious osxfs project.
The Drupal details are a bit off topic, but drush cc all / drush cr in our current setup, using the official drush/drush:8-php5-alpine with the web root mounted via a volume, executes in about 20 seconds. We use the opcache with plenty of cache so everything can be held in memory.
The 20 seconds could be way better - but it is still well within what I will accept for getting all the advantages Docker gives me.
OT: drush 'cc all' is mostly a matter of database tables; few files are involved, so it cannot be used as a real benchmark.
I admit danquah is right that drush/drupal is off topic, but I still want to correct you @paolomainardi. If you do not really know the facts, please do not spread misinformation. drush cc does a full Drupal bootstrap, thus reads every single module file, makes all those 5000 file_exists calls, and does all the things that make Drupal stacks so sensitive to poor read performance.
drush cc all is a perfect benchmark. It does a lot of reads and also writes (caching).
@eugenmayer you are quite wrong, but this is not the place to discuss Drupal and drush, so let me just reply that the 'perfect' benchmark to me is something that stresses only the filesystem; the clear-cache process spends 99% of its time on database and CPU. The 'perfect' benchmark would maybe be a drush script that triggers just the io-wait-intensive registry rebuild plus autoloader bootstrap and/or config file reload (D8); no database process should be involved, to have a consistent test case.
Last comment on this, @paolomainardi. You might want to understand the meaning and implications of this https://github.com/drush-ops/drush/blob/master/commands/core/cache.drush.inc#L48 before you stay with your opinion. Even though the read-heaviness differs between D7 and D8 (in favour of D7), it's still a good rule of thumb.
The only reason I am saying this is so that people do not dismiss drush cc all as a 'non-useful benchmark' - it's quite OK and the results are pretty useful.
Again, drush cc all involves a lot of io-wait, a lot of database work, and a lot of network calls, so it can never be a good filesystem benchmark candidate, as simple as that.
Docker team, please HELP
@dsheets Thanks for your informative post. But it didn't mention anything about the host being CPU-bound during file system access. I can live with osxfs file system access being somewhat slower than native, but when filesystem access locks a CPU core at 100% on the host, it makes it difficult to run other applications or containers during development. Even if file system access were fast (near-native), if the host is locked at 100% to do it, that's far from ideal.
For what it's worth, after upgrading to the latest stable version of Docker for Mac, I'm still experiencing the slowness.
Can anyone else confirm?
I just tried the released "Docker for Mac" bits. Everything else is fine, but I am finding that a simple UNIX find operation on a mounted file system is 10-15 times slower. This is a showstopper for me, back to my previous solution…
Are you using a volume in your dockerfile? When I do that, it's incredibly slow. I'm using other solutions to fix it (docker-sync mostly).
Many of the performance improvements that you're talking about were implemented a long time ago by NFS. While I understand that NFS certainly has its drawbacks, would it be possible to offer NFS as an alternative that users can switch to while the issues in osxfs are ironed out? Despite @eugenmayer's assertion that NFS is too slow to be useful, I'm quite happy with it in most of my Vagrant environments. I work on large Drupal sites, so there's some slowness, sure, but it's certainly not intolerable, and I'd consider it fast compared to osxfs right now. No offense intended - I know you're working on it - but that's what I've observed.
More broadly, I'm curious why NFS (or some other existing/proven project) wasn't chosen as the base to build on here. If that were the case, the only custom bits that would need to be built would be the event propagation from host -> vm and maybe some caching trickery to speed things up in the VM.
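For anyone who wants to experiment with NFS as a stopgap, the host side is just an export entry plus a daemon reload. This is only a sketch: the project path, the uid:gid, and the VM subnet below are assumptions you would need to adapt to your own setup.

```shell
# /etc/exports on the macOS host (BSD exports syntax).
# /Users/me/projects and the 192.168.64.0/24 subnet are placeholders --
# use your real project dir and whatever network your docker VM sits on.
# -mapall=501:20 maps all access to the typical first macOS user/staff group.
/Users/me/projects -alldirs -mapall=501:20 -network 192.168.64.0 -mask 255.255.255.0

# then reload the NFS daemon and verify the export is visible:
#   sudo nfsd restart
#   showmount -e localhost
```

The VM side would then mount the export with a plain `mount -t nfs` before bind-mounting into containers.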
The biggest problem with NFS (for me) is that you don't get fs events over the mount. You'd still need something else to propagate those. (As your second paragraph says, now that I read it again…)
Personally, I use Dinghy with great success. It uses NFS, and has a daemon that watches for events on the host and sends them into the docker VM which simulates them so containers see them.
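To make the "simulate events in the VM" part concrete: NFS carries no inotify/FSEvents, so a helper daemon has to detect changes on the host and replay them inside the VM. Dinghy's actual daemon uses macOS FSEvents; purely as a portable illustration of the detection half, here is a minimal polling sketch (the function and its approach are my own, not Dinghy's code):

```python
import os

def scan_changes(path, snapshot):
    """One polling pass over `path`: compare file mtimes against the
    previous `snapshot` dict and return (events, new_snapshot).
    A daemon would loop on this (or use FSEvents/inotify instead of
    polling) and forward each event into the VM to be replayed."""
    current = {}
    for root, _dirs, files in os.walk(path):
        for name in files:
            p = os.path.join(root, name)
            try:
                current[p] = os.stat(p).st_mtime
            except OSError:
                continue  # file vanished mid-scan; treat as deleted
    events = []
    for p, mtime in current.items():
        if p not in snapshot:
            events.append(("created", p))
        elif snapshot[p] != mtime:
            events.append(("modified", p))
    for p in snapshot:
        if p not in current:
            events.append(("deleted", p))
    return events, current
```

The real cost trade-off is that polling burns CPU proportional to tree size, which is exactly why Dinghy hooks the native event API on the host instead.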
Could you tell us more about your solution? I am testing Docker for Mac now and ended up copying to a tmp folder to build my project, which is annoying.
To redirect people looking for a solution for shares, no matter if it is osxfs or something else (at least it should work), I created this discussion/solution here: Alternatives to OSXFS / performant shares under OSX - so this topic can stay on 'when/how to fix osxfs specifically'. @olat this is also for you (docker-sync).
Hopefully this is the last attempt needed to keep it on topic; I am aware that I am not free of guilt here, sorry.
Regarding easily repeatable performance testing, I found a pretty simple case that demonstrates the large performance gap.
tl;dr
In general, VirtualBox volume mounts were about 4x faster than Docker for Mac. We consistently saw around 20 MB/s write throughput under VirtualBox but around 4.5 MB/s using Docker for Mac.
############# Docker for Mac
Current Docker Engine Version
ā docker -v
Docker version 1.12.0, build 8eab29e
Run default ubuntu:14.04 container
docker run --rm -v /tmp:/code -it ubuntu:14.04.4 bash
Volume mounted from /tmp on a MacBook Pro to /code in the container. Seeing 4.5 MB/s!!
root@0da16bd185e9:/code# pwd
/code
root@0da16bd185e9:/code# time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 22.5756 s, 4.5 MB/s
real 0m22.587s
user 0m0.100s
sys 0m1.060s
root@0da16bd185e9:/code#
Writing the same file inside the container:
414 MB/s!
root@0da16bd185e9:/code# cd ~
root@0da16bd185e9:~# pwd
/root
root@0da16bd185e9:~# time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 0.24733 s, 414 MB/s
real 0m0.249s
user 0m0.020s
sys 0m0.250s
########## Falling back to docker toolbox with virtualbox VM
root@93391eaa7a20:/code# time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 4.91868 s, 20.8 MB/s
real 0m4.923s
user 0m0.000s
sys 0m2.380s
Not great, but certainly a lot faster.
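For anyone who wants to script these comparisons across mounts rather than eyeball dd output, here is a rough Python equivalent of the test above. Note one assumption I added: the fsync at the end. The dd runs above do not sync, which is why the in-container number (414 MB/s) is largely page-cache speed rather than disk speed.

```python
import os
import time

def write_throughput(path, block_size=1024, count=100000):
    """Roughly what `dd if=/dev/zero of=... bs=1024 count=100000` measures:
    write `count` blocks of `block_size` zero bytes and return MB/s.
    Run it once on a host-mounted path and once on a container-local
    path to reproduce the comparison above."""
    block = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # my addition: force data out of the page
                              # cache for a fairer cross-mount comparison
    elapsed = max(time.perf_counter() - start, 1e-9)
    return (block_size * count) / elapsed / 1e6  # decimal MB/s, like dd
```

Usage inside the container would be e.g. `write_throughput("/code/test.dat")` versus `write_throughput("/root/test.dat")`.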
While various suggestions have been made for alternative network filesystems or file syncing, has anyone considered syncing/sharing at the block level rather than the filesystem level? If you imagine the host and the docker vm both as devices accessing shared storage, and you treat the shared storage as a block device (think partition mount), then maybe solutions such as GFS2 would work. No idea how the block device mounting would work, but I thought I'd mention it.
I've gotten around this by setting up a mirror folder and syncing from my volume to it using Unison.
Not a full working example, I've just pulled out the relevant parts. Here it syncs from /var/www/mirror to /var/www/html.
Example docker-compose.yml:
web:
  build: ./docker/web
  ports:
    - "80"
  volumes:
    - .:/var/www/mirror
Dockerfile for web:
FROM php:5.6.20-apache
RUN apt-get update && apt-get install -y \
    supervisor \
    ocaml
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
RUN mkdir -p /var/www/html
RUN mkdir -p /root/unison
COPY unison-2.48.4 /root/unison
WORKDIR /root/unison
RUN make UISTYLE=text
WORKDIR /root
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
VOLUME /var/www/html
CMD ["/usr/bin/supervisord"]
supervisord.conf:
[supervisord]
nodaemon=true
[program:unison]
command=/bin/bash -c "cd /root/unison && ./unison /var/www/mirror /var/www/html -auto -batch -repeat=watch -retry=5 -ignore=\"Name {.git,*.swp}\""
stdout_events_enabled=true
stderr_events_enabled=true
[program:apache2]
command=/bin/bash -c "apache2-foreground"
stdout_events_enabled=true
stderr_events_enabled=true