How to speed up shared folders

Great, I hope there's a lot of room for improvement on this point… On my late-2013 MacBook Pro with 8 GB RAM and a 2.5 GHz i5 CPU I see this kind of slowness:

docker run -w /app --rm -v $(pwd):/app -it alpine:latest time dd if=/dev/zero of=test.dat bs=4096 count=100000
100000+0 records in
100000+0 records out
real	0m 30.04s
user	0m 0.33s
sys	0m 0.91s
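
For comparison (a minimal baseline, not from the original post), the same write test can be run against the container's own filesystem instead of the osxfs mount, writing to /tmp:

# Same dd run, but with no shared folder involved, to isolate the osxfs overhead:
docker run --rm -it alpine:latest time dd if=/dev/zero of=/tmp/test.dat bs=4096 count=100000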

I’m on the latest published beta release:

OS X: version 10.11.4 (build: 15E65)
Docker.app: version v1.11.0-beta7
Running diagnostic tests:
[OK]      docker-cli
[OK]      Moby booted
[OK]      driver.amd64-linux
[OK]      vmnetd
[OK]      osxfs
[OK]      db
[OK]      slirp
[OK]      menubar
[OK]      environment
[OK]      Docker
[OK]      VT-x

For the record, performance remains approx the same with beta8.

Hello,

Same with beta9; it’s unusable with a shared volume (Symfony2 development).

Back to VirtualBox + NFS.

OS X: version 10.11.4 (build: 15E65)
Docker.app: version v1.11.0-beta9
Running diagnostic tests:
[OK] docker-cli
[OK] Moby booted
[OK] driver.amd64-linux
[OK] vmnetd
[OK] osxfs
[OK] db
[OK] slirp
[OK] menubar
[OK] environment
[OK] Docker
[OK] VT-x

Hello!

Same here. I installed Docker Beta for Mac hoping for much better file sharing performance (Symfony being unusable with vboxsf file sharing), but it is unfortunately still slow (compared to NFS with VirtualBox, which is slow but usable).

@dsheets I can provide you with a docker-compose.yml or a Dockerfile with a basic Symfony install to run some tests if needed.

@michaelperrin That would be very useful. We have a number of file sharing performance improvements under way but to measure progress we often look at low-level characteristics like various types of message latency and various configurations of read/write throughput. Having a larger example use case that has unacceptable performance currently would be great and will make it easier to report performance improvements. For instance, we’d like to say “3x speedup on the Docker for Mac Symfony Startup benchmark”.

@dsheets Thanks for your answer!

I have created a simple repository with a basic Symfony install. It provides:

  • A Docker image with the necessary PHP configuration for Symfony (Composer installed, etc.)
  • An entry point that installs the PHP dependencies for Symfony (i.e. composer install)

You can have a look there: https://github.com/michaelperrin/docker-symfony-test .

Follow these steps:

  1. git clone https://github.com/michaelperrin/docker-symfony-test.git
  2. docker build -t sf-test .
  3. docker run -d -p 8080:8080 -v `pwd`:/var/www/symfony_project --name symfony_docker_test sf-test: this will be extremely slow the first time, as it installs the Symfony dependencies with Composer (you will already notice that it is much slower than a normal Composer install without a shared folder). Make sure everything is loaded before going to the next step by running docker logs -f symfony_docker_test (you should see the server running).
  4. Open your browser and have a look at http://localhost:8080 .

On my late-2013 Mac (running OS X 10.11.4, with Docker 1.11.1-beta10 (build: 6662)), the page takes 18 seconds to load!

Running the same app within a container that doesn’t share folders with the host takes about 20ms!
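
An illustrative sketch of how such a no-sharing comparison can be made (the container name and the docker cp step are my assumptions, not from the repository):

# Create the container with a container-local volume at the project path instead of
# the osxfs bind mount, copy the code in once, then start it:
docker create -p 8081:8080 -v /var/www/symfony_project --name symfony_no_share sf-test
docker cp ./. symfony_no_share:/var/www/symfony_project
docker start symfony_no_share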

Tell me if I am doing something wrong!

I’m experiencing the same issue with an ember-cli project. Since it uses Broccoli, it relies on the disk for scratch space, and I end up going from 2-second builds to 60+ second builds in Docker for Mac.

I’d imagine you could recreate this just by dockerizing an example ember-cli app.
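
A minimal reproduction sketch, assuming a checked-out ember-cli app in the current directory (the node:6 image and the exact build command are assumptions, not from this post):

# Bind-mount the app and build inside the container, so Broccoli's ./tmp scratch
# dir lands on the shared osxfs mount:
docker run --rm -it -v "$(pwd)":/app -w /app node:6 \
  sh -c 'npm install && node_modules/.bin/ember build'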

@michaelperrin Thanks for your example. I just tried it and here are the results:

  • first call when the symfony cache is created: 15 seconds
  • next call with cache created: 3 seconds

I’m using the latest beta for mac.

What I can say is that on a “real” Symfony project, with a lot more dependencies and I/O (Redis, Memcache, …), Docker for Mac is NOT usable at all, with 30+ second response times…

So it seems that results can be really different from one machine to another and from one project to another…


TL;DR: Speed can be much improved by moving the cache/logs and especially the vendor dirs out of shared directories. Keeping the default structure (without moving these dirs outside shared directories) would however be a great way to run tests and improve Docker performance.

You are right, on my machine the first call is about 18 seconds, and the next ones are about 4 seconds, just slightly slower than on your machine.

The Symfony app runs in “dev” mode in the example I provided, which is much slower than production mode. However, “dev” mode is required when developing a Symfony app, which is quite… obvious.

I ran several tests to get a faster environment, and I found an interesting solution.

First, I moved the cache and logs dirs to a non-shared directory; you can see the changes here: Move cache dir to non-shared folder for dev and test environments by michaelperrin · Pull Request #1 · michaelperrin/docker-symfony-test · GitHub .
I get a 3-second response time for the welcome page in the dev environment, which is about a 1-second win. Not bad, but nothing to brag about, as this is a very simple app.

Second, I moved all the vendors to a non-shared dir too; the changes are shown here: Move vendors to non-shared folder by michaelperrin · Pull Request #2 · michaelperrin/docker-symfony-test · GitHub .
This time (and without the first change mentioned above), I get response times of 700 ms instead of 4 seconds: a 3.3-second win!
That’s quite surprising, as vendors are only read, not written, when running the app. I would have thought that the cache dir would have had much more impact, given that many files are generated there.

With both changes (which are on the master branch now), I get 300 ms response times (still in dev mode!), a 3.7-second win!
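
For reference, a minimal sketch of an alternative that keeps the app layout untouched is to overlay those dirs with container-local volumes. This is a general Docker trick, not what the pull requests above do, and the paths assume the standard Symfony2 layout:

# Anonymous volumes mounted over subpaths of the bind mount stay inside the VM, so
# reads/writes there skip osxfs entirely. They start empty, so the entry point's
# composer install repopulates vendor inside the VM on first run:
docker run -d -p 8080:8080 \
  -v "$(pwd)":/var/www/symfony_project \
  -v /var/www/symfony_project/app/cache \
  -v /var/www/symfony_project/app/logs \
  -v /var/www/symfony_project/vendor \
  --name symfony_docker_test sf-test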

I would prefer to use shared vendors so that I can have a look at vendor classes while developing, but it’s worth the sacrifice, as Symfony becomes usable.

Keeping the vendor dir in a shared dir would however be a great test case for improving Docker’s file sharing performance.


Vendors are the biggest issue, and moving them to a non-shared directory is the biggest performance boost, but it actually kills a lot of Docker’s benefits.

It’s because vendors are usually installed by Composer (a package manager distributed as a phar archive and run by the PHP CLI). You usually want these vendors available on your host, and you don’t want to have PHP installed on your host. Things can easily get out of sync when you install vendors both in your local environment and in your Docker container, and these vendors are updated quite often during development.

In fact, the ideal solution would be for these vendors not to be truly shared, but somehow available read-only to your host. That would make perfect sense.

Unfortunately, you need the vendors on your host to have, for example, type hinting available in your IDE.
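
For what it’s worth, Composer itself can be run from a container so the host needs no PHP at all (a sketch; composer/composer was the image published on Docker Hub around this time):

# Run composer install against the project dir without PHP on the host:
docker run --rm -v "$(pwd)":/app composer/composer install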

So I hope that somewhat explains what the problem is for non-PHP devs.

Edit: I think this will be an issue for all interpreted languages. I don’t know in detail how shared folders or PHP opcode caching work, but I can imagine that the opcode cache checks when each file was last modified to know whether it’s OK to use the cached version, and these operations are really time-consuming. I’ll try to dive into this over the weekend.
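
If that theory holds, one quick (hypothetical) experiment is to disable opcache timestamp validation inside the dev container. opcache.validate_timestamps is a real PHP setting, and the conf.d path below is the one used by the official PHP images; whether this actually helps on osxfs is exactly what the experiment would show:

# Run inside the container (or via a Dockerfile RUN step). Stops PHP from
# stat()ing every source file on each request; note that code changes are
# then ignored until the PHP process restarts:
echo "opcache.validate_timestamps=0" > /usr/local/etc/php/conf.d/zz-opcache-dev.ini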


Not having vendors on the host is quite an issue indeed, as it’s really handy to browse the vendor files from the host and to get type hinting in an IDE.

However, that’s the only solution for now to set up a usable Symfony development environment with Docker. It’s also the solution mentioned in the Symfony documentation for people using Vagrant without NFS.

It’s really strange that Docker’s file sharing system makes apps so slow when vendors are shared. I really hoped that the more native approach of the Docker beta would improve this.

Thanks for having a look at this anyway!


I experienced similar issues and would like to share my workaround:
https://gist.github.com/sveneisenschmidt/35e9c45e15c0f8d395e17a996415a669

To overcome the slow file reads and writes of the application server, I introduced a data container running lsyncd. The lsyncd service makes sure that all inner-container folders are updated with the recent changes from the mount. This leads to much faster application execution: 30 ms versus 10 s.

This solution is far from perfect; it should only be treated as a workaround for local application development and should never be used on production or staging servers.

Having a displaced vendor or cache/logs folder was no solution for me because it would introduce another layer. The lsyncd solution is essentially transparent and feels quite similar to the native setup, except you don’t get slow file writes when the application warms up its caches.

Another optimization one can do is to let Composer dump the optimized autoloader as well.
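
For reference, that’s a standard Composer command:

# Generate a classmap-based autoloader, cutting down on per-request filesystem lookups:
composer dump-autoload --optimize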

I am very optimistic about the Docker people fixing this problem and I can live with a sync container for some time.


Thanks for posting the workaround.

Can you talk a little bit more about how this works?

What sort of mounts do I need to create in my containers, etc?

Yeah, I am happy to help. Do you know how docker-compose works, or do you use plain old Dockerfiles and docker commands?

We are currently using docker-compose with Vagrant/NFS.

I’m not familiar with volumes_from. Also, with rsync, will the synchronization be two-way or just one-way?
The reason I ask is that we’d want changes developers make to code to appear inside the container, as well as files written in the container (i.e. logs) to be visible outside the container.

The sync is effectively one-way only, so when the application writes files, they don’t get synced back. The setup works in the following way: your local working folder gets mounted to /src in the sync container. The lsyncd daemon syncs all changes that happen in /src (Gist) to a non-mounted internal folder, /var/www (Gist). The volumes_from option (Docs) makes all volumes from the sync container available inside other containers (Gist).

Your application reads from /var/www (Gist) without the performance penalty of reading from a mount. The drawback is that changes in /src sometimes take a few seconds to get synced to /var/www, but mostly it’s quite fast. Personally, I can live with that better than with huge application loading times, but I guess that’s just my personal taste.
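
A rough sketch of that setup with plain docker commands (the container names and the lsyncd image are placeholders, not taken from the Gist):

# Sync container: the host dir is bind-mounted at /src; /var/www is a fast,
# container-local volume that lsyncd keeps in sync with /src:
docker run -d --name app_sync \
  -v "$(pwd)":/src \
  -v /var/www \
  some/lsyncd-image

# App container: inherits /src and /var/www from the sync container and serves
# the code from the container-local /var/www:
docker run -d --name app -p 8080:8080 --volumes-from app_sync my-app-image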

For seeing logs, I use docker exec container_xyz tail -f var/logs/*.log. By the way, would you mind sharing your NFS solution?


Thanks for posting this workaround @sveneisenschmidt. As you say, it’s not perfect, but it helps us get the job done while the Docker team continue to improve Docker For Mac :smiley:


https://github.com/EugenMayer/docker-sync is a very efficient workaround. It’s pretty new, but a lot of work has been done over the past few days and more is coming: https://github.com/EugenMayer/docker-sync/pull/64

I’ve just confirmed the performance problem on beta20.

I last tested on beta6 and I really struggle to find any significant difference on a large Rails project. Disappointed, to say the least.

Have a look at the new http://docker-sync.io release, guys. No need for unison/unox anymore, while still providing native performance with far less CPU usage: https://github.com/EugenMayer/docker-sync/wiki/4.-Performance
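
Typical usage looks like this (commands as per the docker-sync README at the time; the config file name is its convention):

# docker-sync is distributed as a Ruby gem:
gem install docker-sync
# Reads the sync points from docker-sync.yml in the project root:
docker-sync start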

