Docker Community Forums


Plans to open osxfs?


(Itaylor) #1

Hi Docker for Mac team,

Any chance you’ll be making a repo for osxfs public? Like many others I’ve found Docker for Mac to be really awesome feature-wise, but the slowness of the shared folders implementation “osxfs” makes it not yet usable. If the repo for osxfs were open to the public, I’m sure you’d get people from the community who’d pitch in to help make it faster, sooner.


(Anil Madhavapeddy) #2

We are planning to, but it’s not quite ready to do so yet.

The osxfs code still has several performance improvements staged to go in, and now needs to be tested against a diverse set of real-world workloads. We could really use input on reproducible Dockerfiles or Compose files that slow down for you, especially if it’s blocking your use of shared folders.


(Cameron Eagans) #3

For me, it’s literally anything that mounts files from Mac OS. It’s a 2x-100x slowdown (per page load), depending on how many PHP files are loaded for that request. I get the slow, methodical approach, but this is blocking many people from using the beta in any meaningful way.


(Alexandre) #4

@cweagans You are obviously not alone and the Docker staff is aware of the situation. Do you have reproducible Dockerfiles or Compose files to contribute?


(Cameron Eagans) #5

@aleveille As I said, literally anything that mounts files from Mac OS reproduces the problem.


(Alexandre) #6

@cweagans As I said, the Docker staff knows that mounted volumes are slow. Nonetheless, they are asking for an “easily reproducible and representative macro benchmark for your use case” (How to speed up shared folders).

On my machine:
docker run -it --rm -v /tmp:/tmp nginx:1.9.3 dd if=/dev/zero of=/tmp/testfile bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 17.1037 s, 62.8 MB/s

Sadly, while we “know” that this is slow, a single number like that doesn’t say much on its own. It would greatly help the team to have concrete numbers and benchmarks as targets. I doubt they have the time to go out and build their own test suites, and in any case a real use case is always better.
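For comparison, here is a host-side baseline for the same kind of write (a sketch; the path and the reduced 64 MB size are arbitrary choices, not from the thread). Running it on the Mac directly, then inside a container without a bind mount, and then against the osxfs mount as above gives three numbers that bracket the overhead:

```shell
# Host-side dd baseline (sketch): write 64 MB of zeroes and report throughput.
# bs is given in bytes (1048576 = 1 MB) for portability between GNU and BSD dd.
# Compare against the in-container figure above to quantify the osxfs overhead.
dd if=/dev/zero of=/tmp/osxfs-baseline bs=1048576 count=64
rm /tmp/osxfs-baseline
```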

Something like “docker run --name some-mysql -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag runs in 2.04s with docker-machine but takes 27.18s with Docker for Mac” is much easier to act on. It is also very easy to integrate into a test suite, which I expect they are beefing up.
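A low-tech way to produce such timings (a sketch; `sleep 1` stands in for the actual docker run command, which stays a placeholder here):

```shell
# Wall-clock timing sketch: replace `sleep 1` with the docker run command
# under test, then compare elapsed seconds between docker-machine and
# Docker for Mac for the same image and mounts.
start=$(date +%s)
sleep 1   # placeholder for: docker run ... mysql:tag
end=$(date +%s)
echo "elapsed: $((end - start))s"
```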


(Cameron Eagans) #7

With the new beta, installing the Minimal profile in Drupal 8 takes 30 seconds. With native Docker on Travis CI, it takes 8 seconds. I can create a repo on GitHub with the requisite scripts and such if that would help.


(Alexandre) #8

That would probably help the staff further diagnose the underlying issues (there are probably a few things to tweak under the hood). However, I would suggest sharing your repo in the following thread, as it has the most traction and is probably the one most watched by the Docker staff:
https://forums.docker.com/t/file-access-in-mounted-volumes-extremely-slow-cpu-bound/?source_topic_id=12404


(Alexandre) #9

To add to my last comment, I’ve been checking out the other thread a bit, and it seems that great performance gains were made for some people (Laravel, Symfony). So if you can feed the staff a slow-performing use case, they can probably use it to fix areas that were missed.


(Ain Tohvri) #10

Frankly, the hoops you have to jump through for shared folder performance aren’t worth it. The whole point of a containerised application is to cut down on setup and maintenance. The workarounds I’ve read about for boosting performance effectively defeat the purpose of a containerised environment; a team could just as well go with a local setup and be faster and safer.