File access in mounted volumes extremely slow, CPU bound

I have to agree. It can be better, yes… but it’s already really useful. I use it for all my Drupal 7/8 sites with my own custom Unison container approach to manage this particular issue, and have no other problems. I’m looking forward to this being resolved, but it doesn’t actually get in the way of any development work.

It still works fine if you aren’t sharing a large number of files in a volume. But, for example, our test suite takes 20x as long running in Docker as it does natively, so people no longer run tests in Docker during development.

I am concerned that Docker has gone completely silent on this. I would like to know whether this issue is being actively worked on. I’m guessing not, or we’d have gotten a recent update.

Any official update from a member of the Docker for Mac team would be great. There’s info in the docs specifically about this issue (https://docs.docker.com/docker-for-mac/osxfs/#performance-issues-solutions-and-roadmap), but I don’t think it has been updated in several months.

There’s an update about our future plans for osxfs performance on the issue tracker:


Hi everyone. A few months ago I encountered an NFS limitation when I needed to restore a database file larger than 300 MB; NFS simply timed out while in use. So I switched to docker-sync instead. For the databases, I use named volumes and only pull the files out as backups rather than mounting the data folder via NFS / osxfs / vboxsf. So now I don’t use NFS or any shared folder at all. The development files are synced to named volumes using Unison, wrapped by the docker-sync gem.

For those who struggle with the slowness of the available shared folders, try docker-sync and see if it helps you in your work with Docker.
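For anyone who wants to try it, the basic workflow is short (a sketch; see the docker-sync docs for the details of docker-sync.yml):

gem install docker-sync
# create a docker-sync.yml next to your docker-compose.yml, then:
docker-sync-stack start   # starts the sync containers, then runs docker-compose up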

Hey, I have used D4M for months now for Symfony and eZ Platform projects. I’m looking forward to testing the new :cached option of Docker when it’s released. I don’t really like docker-sync as it duplicates everything… but anyway, why would you use NFS for your database? I have used D4M with databases bigger than that, but my database doesn’t mount anything from the host.

To me, D4M should be used only to share code between the container and the host for development purposes, and that’s it. I also share the cache for PhpStorm plugins and some configuration files.

But other than that, everything else should live in the container (no mount on the host), and then the performance is native.
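As a sketch of that layout (the image names and paths are only examples), the code is the sole host mount, while the database data stays in a named volume inside the VM:

version: '3'
services:
  app:
    image: php:7-fpm
    volumes:
      - ./src:/var/www/html      # only the code is shared with the host
  db:
    image: mysql:5.7
    volumes:
      - db-data:/var/lib/mysql   # named volume: stays in the VM, native performance
volumes:
  db-data: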

Was just a little feedback :slight_smile:

May the 4th be with you.

With today’s release of docker-sync 0.4.0 we probably have a promising final solution.
Since the new :cached mode won’t be all that effective (see https://github.com/EugenMayer/docker-sync/wiki/4.-Performance), we will need an external solution anyway; :cached is still 50 times slower than native.

The biggest concern with docker-sync was its host dependencies, and those have been removed: there are no host dependencies any longer, just a gem on your system Ruby. No brew, Unison, or unox.

In addition, no file watchers are used anymore, so no crazy CPU usage. osxfs is used in such a way that the OSX-to-sync-container sync is done via the low-level implementation of osxfs, while the mount into your app container is done using a live copy. That gives you native performance with a two-way sync, without hassles.
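A minimal docker-sync.yml for this strategy could look like the following (the volume name and path are made up):

version: "2"
syncs:
  app-sync:                       # named volume that your app container mounts
    src: './app'                  # host directory, synced through osxfs
    sync_strategy: 'native_osx'   # the strategy described above

In docker-compose.yml the app container then mounts app-sync and declares it as an external volume.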

Just try it yourself and check the performance table: https://github.com/EugenMayer/docker-sync/wiki/4.-Performance

Give it a try; I’d be happy to get some feedback on this.


Tried every option and landed on this one. Using docker-sync (currently 0.4.6) is by far the fastest solution; the difference is especially noticeable with PHP applications. I would really like to see the Docker team look into integrating this into the Docker core somehow :slight_smile:

NOTE: This is a duplicate of https://github.com/docker/for-mac/issues/2141#issuecomment-337323903, but I’m adding it here in case it’s needed.

This issue started for me in the last day for no apparent reason. I did update to the latest Docker this week, but the problems did not occur directly afterward. It still happens after a fresh shutdown and start (mostly).

Version 17.09.0-ce-mac35 (19611) Channel: stable a98b7c1b7c

macOS 10.12.6 (16G29)

I have a strong feeling that it is related to mounted volumes, but I’ve not been able to prove it 100%.

UPDATE: I created an A/B test to reproduce it, using a repo with a README:

This includes the full information as requested here: https://docs.docker.com/docker-for-mac/osxfs/#performance-issues-solutions-and-roadmap, in the section “WHAT YOU CAN DO”.

My results using these benchmarks:

Docker image with the bundle and app copied in:

docker run --rm -it $DOCKER_REPO/derailed /bin/bash -l -c "time bundle exec rails r 'Rails.env'"

real    0m1.587s
user    0m1.270s
sys     0m0.260s

Base Ruby image, mounting the app:

docker run --rm -it -w /derailed -v $(pwd):/derailed $DOCKER_REPO/derailed-dev /bin/bash -l -c "export BUNDLE_PATH=vendor/bundle-docker && bundle exec rake db:schema:load && time bundle exec rails r 'Rails.env'"

real    0m36.485s
user    0m2.100s
sys     0m2.220s

UPDATE 2: I ran the Docker Performance Testbench.

Results:

mode         drush site install (3x)    drush cr (3x)
delegated    136s, 135s, 130s           18s, 16s, 17s
cached       133s, 131s, 129s           17s, 17s, 17s
consistent   156s, 151s, 147s           20s, 21s, 20s

Please note that with the latest release of Docker for Mac, 17.12.0-ce-mac46, d4m-nfs is broken, and it is TBD whether it can be fixed or should be retired: https://github.com/IFSight/d4m-nfs/issues/55

We’re using an approach similar to @eugenmayer’s docker-sync to improve filesystem performance, using lsyncd to sync host mounts into named container volumes:

It’s only one-way sync, but it does not require any additional tools, and a single additional Docker container can serve any number of host-mount-to-named-volume setups.

The Dockerfile itself is tiny and the overall setup very simple, so it does not add much complexity, while lsyncd’s configurability provides a lot of flexibility.
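Schematically, the setup looks like this (a sketch; the service layout is ours, and the lsyncd image and its config are assumed):

services:
  lsyncd:
    build: ./lsyncd               # tiny image containing only lsyncd and its config
    volumes:
      - ./src:/src:cached         # host mount: the one-way source
      - app-code:/dest            # named volume that lsyncd writes into
  app:
    image: php:7-fpm
    volumes:
      - app-code:/var/www/html    # the app reads the named volume at native speed
volumes:
  app-code: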

I thought I’d post it here in case someone else is looking for a similar, simple solution.

docker-sync native_osx does not need anything on the host… that is all gone.

The problem with everything built on rsync-alikes, such as lsyncd, is the file watcher, and that is what you will face: the huge CPU hog that hits any fswatch implementation on OSX, no matter whether it’s HFS+ or APFS. So what you did, we already went through, and we ship it as the rsync strategy (included in docker-sync).
But it’s very resource hungry and gets worse the longer it runs. It will also lose FS events when you change branches and do things like that.

You can believe me, that solution is by no means different from rsync, local Unison (unison-filewatcher), and probably every other OSX-native sync solution out there. Detecting FS changes is the actual PITA; the syncing itself is super easy with any of those tools.

Thanks for your reply.
It was not my intention to present our solution as superior or substantially different, simply to propose a simple alternative setup.

I’m also by no means a file-system-events expert, and I believe you’ve spent a lot of time investigating the alternatives.

I’m aware of https://github.com/EugenMayer/docker-sync/issues/410 and understand that this issue affects anyone relying on osxfs file system events, like our setup does.

At least so far, the CPU/memory usage of a single lsyncd container seems to be relatively low.
Anyway, that’s not due to our efforts but thanks to the creator of lsyncd.

Ah, now I see: you are using a host mount and then a file-system watcher on top of osxfs.
Well yes, then you will basically run into the same issues as https://github.com/EugenMayer/docker-sync/issues/517 or the one you mentioned, https://github.com/EugenMayer/docker-sync/issues/410

Since the issues occur in the host-to-container direction, your strategy is affected, as that is the only direction you sync.

lsyncd could be more effective or less effective than Unison; I do not know. It could be an addition to docker-sync, since the current rsync strategy uses fswatch on the host and does not use host mounts at all. That is actually the significant difference from native_osx, so it could work better for some people.

lsyncd could simply be “faster”, but it will not be more reliable given the facts above, and so far rather few people have chased sync speed in these cases. The real problem is Docker for Mac stopping FS events: https://github.com/docker/for-mac/issues/2417

Whatever suits you; I am just hunting for a “good solution” and wanted to point out that yours is no different from what we have seen so far and will struggle with exactly the same symptoms ;/ Bad news for us all.

Yeah, I agree that there’s no ideal solution yet.
I also hope that the Docker for Mac issue you linked to gets fixed, and I’ve added my thumbs up. :slight_smile: :+1:

Has anyone tried NFS mounts recently? The latest stable release notes say:

“Support NFS Volume sharing.”

I’ve not found any documentation, and the only thread I’ve found about this is: NFS Native Support
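If I’m reading the volume driver docs correctly, it should be reachable through the local driver’s nfs type; a sketch (the address, mount options, and export path are my guesses, and the directory would have to be exported in /etc/exports on the Mac):

volumes:
  nfs-code:
    driver: local
    driver_opts:
      type: nfs
      o: addr=host.docker.internal,rw,nolock,hard,nointr,nfsvers=3
      device: ":/Users/me/project"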

Hi,

I am using macOS High Sierra 10.13.6. My project is an AngularJS project which uses Grunt and Bower components. When I try to build using docker-compose, it takes more than 3 minutes to build the container. Once the container is up and running, the TTFB (time to first byte) is more than 2 seconds for each JavaScript file, so the overall loading of the website is really slow. I don’t have this problem when I use docker run without volume mounting; the issue is getting the files from the Mac to the container. I am using the node:alpine image.
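The only workaround I’ve seen suggested so far (a sketch; the paths are guesses based on my setup) is to keep the dependency directories out of the osxfs mount by overlaying anonymous volumes, so npm/Bower I/O stays inside the VM:

services:
  web:
    image: node:alpine
    volumes:
      - .:/app:cached            # project code, shared from the host
      - /app/node_modules        # anonymous volume: dependencies stay in the VM
      - /app/bower_components    # same for the Bower components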


The mount consistency options in docker-compose.yml did not help. Am I testing them the wrong way?

Docker version 19.03.1, build 74b1e89
MacOS Mojave: 10.14.6 (18G87)

docker-compose.yml:

version: '3.7'

services:

  php:
    image: php:7.3.8-fpm
    restart: always

    volumes:

      - ./mounts/ro:/test/mounts/ro:ro
      - ./mounts/cached:/test/mounts/cached:cached
      - ./mounts/delegated:/test/mounts/delegated:delegated
      - ./mounts/consistent:/test/mounts/consistent:consistent

Then:

docker-compose up -d
docker-compose exec php sh

Next, cd-ing into each directory mounted with a different consistency configuration and running:

dd if=/dev/zero of=test.dat bs=1024 count=100000

Consistent: 102400000 bytes (102 MB, 98 MiB) copied, 36.1782 s, 2.8 MB/s
Delegated: 102400000 bytes (102 MB, 98 MiB) copied, 35.1372 s, 2.9 MB/s
Cached: 102400000 bytes (102 MB, 98 MiB) copied, 34.3295 s, 3.0 MB/s
/var/www/html (non-mounted): 102400000 bytes (102 MB, 98 MiB) copied, 0.258 s, 397 MB/s
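As a side note on my own numbers: bs=1024 makes dd issue 100,000 tiny writes, so these figures largely measure the per-call overhead of osxfs rather than raw throughput. A larger block size separates the two:

dd if=/dev/zero of=test.dat bs=1M count=100   # same 100 MB in 100 writes instead of 100,000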

I have the same issue running a few containers for a development version of WordPress. The idea is that WordPress is installed in the container and the theme files are kept in sync between macOS’s file system and the container.

However, accessing the site is extremely slow. I’ve tried :cached and :delegated at some point, but it made no difference. Does anyone know if there are any new strategies to overcome this issue?

Here’s my docker-compose.yml:

services:
  wordpress:
    image: wordpress:latest
    container_name: app-wordpress
    environment:
      WORDPRESS_DB_HOST: app-mysql
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_TABLE_PREFIX: "wp_"
      WORDPRESS_DEBUG: 1
    ports:
      - "8080:80"
    depends_on:
      - "database"
    volumes:
      # Allows changes made to project directory to be accessed by the container via a bind mount.
      - ./wp-content/themes/hello-elementor:/var/www/html/wp-content/themes/hello-elementor
      - ./wp-content/themes/hello-theme-child:/var/www/html/wp-content/themes/hello-theme-child
      # Uploads and plugins used by the site.
      - ./docker/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
      - ./docker/volumes/wordpress/wp-content/ai1wm-backups:/var/www/html/wp-content/ai1wm-backups
      - ./docker/volumes/wordpress/wp-content/plugins:/var/www/html/wp-content/plugins/
      - /var/www/html/wp-content/uploads/
      
      
  database:
    image: mysql:latest
    container_name: app-mysql
    # PHP mysqli connector does not support caching_sha2_password plugin so using mysql_native_password instead.
    command: "--default-authentication-plugin=mysql_native_password"
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
    # This allows the database to be accessed with tools such as SQL Workbench or TablePlus
    ports:
      - 3307:3306
    volumes:
      - ./docker/volumes/mysql:/var/lib/mysql
  composer:
    build:
      # Setting a context and dockerfile paths allows for Dockerfiles to be stored away in a sub-directory.
      context: . # Context of build, this is where the project files are stored.
      dockerfile: ./docker/php.dockerfile # The path to Dockerfile and name of the dockerfile to be built
    # Setting an image name avoids the same image being built multiple times.
    image: rotorotor/composer-tooling:latest
    depends_on:
      - "wordpress"
    volumes:
      # Allows changes made to project directory to be accessed by the container via a bind mount.
      - ${PWD}/wp-content/themes/hello-theme-child:/app
      - ${PWD}/wp-content/themes/hello-theme-child/vendor:/app/vendor/
    tty: true
  # Used to compile styles and scripts.
  node:
    # Building a custom image described in a docker file.
    build:
      # Setting a context and dockerfile paths allows for Dockerfiles to be stored away in a sub-directory.
      context: . # Context of build, this is where the project files are stored.
      dockerfile: ./docker/node.dockerfile # The path to Dockerfile and name of the dockerfile to be built
    # Setting an image name avoids the same image being built multiple times.
    image: rotorotor/node-tooling:latest
    # Specifies the name of the container; commented out to avoid name conflicts when running multiple instances of the image.
    # container_name: protonmail_themes
    ports:
      - 3000:3000
      - 3001:3001
    depends_on:
      - "wordpress"
    restart: always
    volumes:
      # Allows changes made to project directory to be accessed by the container via a bind mount.
      - ${PWD}/wp-content/themes/hello-theme-child:/var/www/html/wp-content/themes/app
      # Adds a volume to store node dependencies.
      - /var/www/html/wp-content/themes/app/node_modules
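One thing I still want to try, following the earlier advice in this thread to keep databases off host mounts: moving the MySQL data directory into a named volume (db-data is just a placeholder name), since ./docker/volumes/mysql currently routes all database I/O through osxfs:

  database:
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data: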

Hey, guys. I had the same issue on Linux. Or maybe it’s not the same problem, but it could probably help someone else in the universe.

The file access on volumes inside containers was very, very slow, while file access in other folders was normal. After days of war, I found that the volume had auditing enabled… so I turned it off.

I checked with auditctl -l and noticed the line -w /var/lib/docker -k docker. I then removed the rule with auditctl -W /var/lib/docker -k docker, restarted the container, and the sun shone again. Don’t forget to check the persistent audit rules as well (/etc/audit/rules.d/audit.rules).
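In one place, for anyone hitting the same thing (note that -W removes a watch rule, while -w adds one):

auditctl -l                             # list active rules; look for the /var/lib/docker watch
auditctl -W /var/lib/docker -k docker   # remove the watch at runtime
# also delete the line from /etc/audit/rules.d/audit.rules, or it returns on reboot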