Docker Community Forums

Share and learn in the Docker community.

File access in mounted volumes extremely slow, CPU bound


(Douglas Ferguson) #97

Do you have instructions on how to do this?


(Michael Clifford) #98

I should have written a blog post so it is easily referenced, but here are a couple of aliases I use. These assume you are using VirtualBox for your Docker machine, that the machine is named default, and that your docker-machine IP is 192.168.99.100 (which gives the host an IP of 192.168.99.1):

alias dockerUp='eval "$(docker-machine env default --shell bash)"'
alias dockerStart='docker-machine start default; docker-machine ssh default "sudo umount /Users; sudo /usr/local/etc/init.d/nfs-client start; sudo mount 192.168.99.1:/Users /Users -o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp"; sleep 1; dockerUp'
# In your shell profile, after the aliases: if the machine is already running, just load its env vars
if docker-machine ls | grep default | grep -q Running; then dockerUp; fi

Run the following command to put the proper entry in /etc/exports:

echo "/Users -mapall=`whoami`:`id -gn` `docker-machine ip default`" | sudo tee -a /etc/exports
# /etc/exports should now have a line similar to '/Users -mapall=mclifford:staff 192.168.99.100'

You should now be able to start your NFS enabled Docker Machine in a new terminal with:

dockerStart
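If you want to sanity-check the export line before appending it to /etc/exports, this sketch builds the same string without sudo. The IP here is a placeholder for the output of `docker-machine ip default`:

```shell
# Dry run: build the NFS export line without touching /etc/exports.
# MACHINE_IP stands in for `docker-machine ip default`.
MACHINE_IP="192.168.99.100"
EXPORT_LINE="/Users -mapall=$(whoami):$(id -gn) ${MACHINE_IP}"
echo "$EXPORT_LINE"
```

Once the output looks right, pipe the same string through `sudo tee -a /etc/exports` as shown above, then restart nfsd so the export takes effect.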

(Alanbrent) #99

Wow, this thread blew up again :no_mouth: I just wanted to jump in here and contribute a workaround I came up with for our Rails projects. Basically, we create symlinks for the primary high-write locations in the project. This doesn’t solve slower read access, but in my testing it’s really write speed that’s the big problem. Anyway, here’s a brief overview of what I did. I’m still a learning-my-way devops guy, so hopefully what I’m doing here is both clear and applicable to (at the very least) your Rails projects.

Add to your entrypoint script

# Non-Linux development machines have abysmal file write performance in shared directories. This is a workaround
mkdir -p /tmp/project/log /tmp/project/tmp
ln -s /tmp/project/log log && ln -s /tmp/project/tmp tmp

Read vs. Write performance

# Write performance to host/container shared directory
root@96b8fdf0465d:/# dd if=/dev/zero of=/project/test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB, 98 MiB) copied, 26.0395 s, 3.9 MB/s
# Read performance from host/container shared directory
root@96b8fdf0465d:/# dd if=/project/test.dat of=/dev/null bs=1M count=1024 status=progress
99614720 bytes (100 MB, 95 MiB) copied, 1.02059 s, 97.6 MB/s
97+1 records in
97+1 records out
102400000 bytes (102 MB, 98 MiB) copied, 1.16331 s, 88.0 MB/s
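The same comparison is easy to reproduce with a small script. The block size is the interesting knob: many small writes amplify osxfs's per-operation overhead, while reads are largely served from cache. The path below is a placeholder (point it at your shared directory to see the slow numbers), and the byte count is scaled down so it finishes quickly:

```shell
# Reproduce the write-vs-read comparison on any directory (placeholder path).
# Small bs + large count maximizes per-write overhead, which is exactly what
# osxfs is slow at; the subsequent read mostly hits the page cache.
TARGET=/tmp/ddtest.dat
dd if=/dev/zero of="$TARGET" bs=1024 count=1000 2>&1 | tail -n1   # write test
dd if="$TARGET" of=/dev/null bs=1M 2>&1 | tail -n1                # read test
rm -f "$TARGET"
```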

(Michael Clifford) #100

@alanbrent :thumbsup: However, you can achieve the same performance gains by using a named volume for tmp and mounting it over your app’s tmp directory.

docker-compose pseudocode

version: "2"

services:
    web:
        image: your_co/your_rails_image
        volumes:
            - .:/app
            - tmp:/app/tmp
        command: bin/start

volumes:
    tmp:

(Alanbrent) #101

I like that even better, thanks!


(Hirowatari) #102

Just got back from DockerCon. Was hoping from all the talks about osxfs that Docker for Mac beta would be usable for Rails development now. Unfortunately, it is not.

A simple example

time rake -T

real	0m46.552s
user	0m0.009s

Running Version 1.12.0-rc2-beta16.

That’s about 100x slower than running on Ubuntu. Here’s a repo that shows the issue: https://github.com/hirowatari/docker-for-mac-rails-bug

Anyway, thanks for the great conference and for your work on this.


(Nikolai Zujev) #103

Same for me:

$ docker version
Client:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:35:33 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   a7119de
 Built:        Fri Jun 17 22:09:20 2016
 OS/Arch:      linux/amd64
 Experimental: true

Not a volume at all:

/tmp # time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
real	0m 0.37s
user	0m 0.02s
sys	0m 0.35s

Mounted volume via docker-compose:

/var/www/html/var # time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
real	0m 33.31s
user	0m 0.15s
sys	0m 2.55s

A 100x slowdown!! Seriously?


(Alexandre) #104

Same here, obviously.

The container was started using: docker run -it --rm -v /tmp:/tmp ubuntu bash.

Here’s the output of a sample dd:

$ pwd
/tmp

$ time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB, 98 MiB) copied, 30.7729 s, 3.3 MB/s

real	0m30.791s
user	0m0.160s
sys	0m1.150s

Meanwhile, on the host I was capturing system calls with dtruss: sudo dtruss -c -d -e -f -o -p 25329 2> /tmp/dtruss.log

25329/0x150c7a:     16678     143     20 write(0x6, "\r\0", 0x1)		 = 1 0
25329/0x150c7b:     11956     163     12 select(0x0, 0x0, 0x0, 0x0, 0x700000080DE8)		 = 0 0
[... repeated short select() calls elided; see the syscall counts below ...]
25329/0x150c80:      2080    1945     16 kevent(0x5, 0x0, 0x0)		 = 1 0
25329/0x150c80:      2102      13      6 read(0x6, "\r\n\0", 0x1000)		 = 2 0
25329/0x150c80:      2128      21     16 write(0x1, "\r\n\0", 0x2)		 = 2 0
25329/0x150c80:      2137       5      1 read(0x6, "\0", 0x1000)		 = -1 Err#35
25329/0x150c80:      2145       6      1 kevent(0x5, 0x0, 0x0)		 = 0 0
[... more select() calls elided ...]
25329/0x150c80:      2158 30790240      7 kevent(0x5, 0x0, 0x0)		 = 1 0
25329/0x150c80:      2184      96      9 read(0x6, "100000+0 records in\r\n100000+0 records out\r\n102400000 bytes (102 MB, 98 MiB) copied, 30.7729 s, 3.3 MB/s\r\n\0", 0x1000)		 = 105 0
25329/0x150c80:      2209      20     15 write(0x1, "100000+0 records in\r\n100000+0 records out\r\n102400000 bytes (102 MB, 98 MiB) copied, 30.7729 s, 3.3 MB/s\r\n\0", 0x69)		 = 105 0
25329/0x150c80:      2217       5      1 read(0x6, "\0", 0x1000)		 = -1 Err#35
25329/0x150c80:      2223       5      1 kevent(0x5, 0x0, 0x0)		 = 0 0
25329/0x150c7b:     12477      55      7 select(0x0, 0x0, 0x0, 0x0, 0x700000080DE8)		 = 0 0
25329/0x150c80:      2234    1188      7 kevent(0x5, 0x0, 0x0)		 = 1 0
25329/0x150c80:      2253       9      4 read(0x6, "\r\nreal\t0m30.791s\r\nuser\t0m0.160s\r\nsys\t0m1.150s\r\n\033]0;root@93aefffad6f2: /tmp\aroot@93aefffad6f2:/tmp# \033[K\0", 0x1000)		 = 102 0
25329/0x150c80:      2289     109     14 write(0x1, "\r\nreal\t0m30.791s\r\nuser\t0m0.160s\r\nsys\t0m1.150s\r\n\033]0;root@93aefffad6f2: /tmp\aroot@93aefffad6f2:/tmp# \033[K\0", 0x66)		 = 102 0
25329/0x150c80:      2309      90      8 read(0x6, "\0", 0x1000)		 = -1 Err#35
25329/0x150c80:      2323       9      5 kevent(0x5, 0x0, 0x0)		 = 0 0
25329/0x150c7b:     12494      38      4 select(0x0, 0x0, 0x0, 0x0, 0x700000080DE8)		 = 0 0


CALL                                        COUNT
write                                           4
kevent                                          7
read                                            7
select                                         61

As you can see, if I understand this correctly there’s really just one blocking step: a single kevent call that takes ~30.8 s, which matches dd’s 30.79 s wall time almost exactly. At the moment I’m unable to dig deeper, but I’ll see if I can find a way.
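With the -d and -e flags used above, dtruss prints elapsed microseconds in the third column, so the culprit can be pulled out of a capture mechanically. A minimal sketch against a three-line excerpt of the log (the file path is arbitrary, and the read string is abbreviated):

```shell
# Find the single slowest syscall in a dtruss capture: with -d/-e, field 3
# is the elapsed time in microseconds. Excerpt copied from the log above.
cat > /tmp/dtruss_excerpt.log <<'EOF'
25329/0x150c7b:     12459      35      2 select(0x0, 0x0, 0x0, 0x0, 0x700000080DE8)
25329/0x150c80:      2158 30790240      7 kevent(0x5, 0x0, 0x0)
25329/0x150c80:      2184      96      9 read(0x6, "...", 0x1000)
EOF
sort -k3,3 -n /tmp/dtruss_excerpt.log | tail -n1
```

On the excerpt this prints the ~30.8-second kevent line; run the same pipeline over the full /tmp/dtruss.log to confirm nothing else comes close.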


(Ryan Schlesinger) #105

There are also two popular projects for setting up this kind of environment:


(Tapajos) #106

Version 1.12.0-rc2-beta17 (build: 9779)

Not a volume:

root@8c23cfe51d65:/tmp# time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 0.18435 s, 555 MB/s

real	0m0.188s
user	0m0.020s
sys	0m0.160s

Mounted volume:

root@8c23cfe51d65:/tmp# cd /mounted/
root@8c23cfe51d65:/mounted# time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 23.5597 s, 4.3 MB/s

real	0m23.564s
user	0m0.410s
sys	0m1.080s

(Rompalmas) #107

Hi

Pretty much the same here. I tested an Nginx + PHP app setup with a mounted volume: serving a basic page takes ~3 seconds, whereas the same page loaded without the volume (i.e., from within the container) takes 216 ms.

Client:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:35:33 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   a7119de
 Built:        Wed Jun 29 10:03:33 2016
 OS/Arch:      linux/amd64
 Experimental: true

(Jonathantech) #108

Docker for Mac: version: mac-v1.12.0-beta18
OS X: version 10.11.5 (build: 15F34)

With volume

root@07f6e31194b7:/tmp# pwd
/tmp
root@07f6e31194b7:/tmp# time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB, 98 MiB) copied, 23.6226 s, 4.3 MB/s

real	0m23.626s
user	0m0.140s
sys	0m1.440s
root@07f6e31194b7:/tmp# exit
exit

without

docker run -it --rm ubuntu bash
root@bdacb4cc0ebc:/# cd tmp
root@bdacb4cc0ebc:/tmp# time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB, 98 MiB) copied, 0.273326 s, 375 MB/s

real	0m0.276s
user	0m0.020s
sys	0m0.250s
root@bdacb4cc0ebc:/tmp#
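For reference, the slowdown ratio between those two runs can be computed directly from the wall-clock times (values copied from the output above):

```shell
# Ratio of the two `real` times above: volume-backed vs container-local write.
VOLUME_SECS=23.626
LOCAL_SECS=0.276
awk -v v="$VOLUME_SECS" -v l="$LOCAL_SECS" 'BEGIN { printf "%.0fx slower\n", v / l }'
# prints: 86x slower
```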

(Onni Hakala) #109

I’m using a unison data container for now as a hack to distribute files to containers without using mounted shared volumes. It isn’t as fast as shared volumes on Ubuntu, but it’s considerably faster than using Docker for Mac’s mounted volumes.

Otherwise docker for mac feels really good :). I hope that the performance issues can be fixed soon.


(Jonathantech) #110

@onnimonni

I’ve never used unison. I understand that it does bi-directional syncing, but what happens when you first start it up? I didn’t see a way to specify a source and target in your source code, so I’m not sure whether the Docker image’s files or the host’s files will win out, or even how to properly set up the volumes for it. I did see that it defaults to a /data folder, but is that the source or the target?

Could you provide documentation on how to set up docker-unison to sync a host directory with a Docker volume? Thanks for what you’ve done so far! This could be far easier to use than the lsyncd setup I’ve been using!


(Jonathantech) #111

Oh, looks like you’ve updated the readme since I last looked at it, or my phone hid it :/. Thanks!


(Ciro S. Costa) #112

Looks cool @onnimonni, could you provide some numbers here? Thanks for sharing!


(Onni Hakala) #113

@cirocosta I’m not sure how to provide numbers for this scenario that would be as scientific as the earlier ones in this conversation, because writes are super fast on both sides; the sync is just eventually consistent on the other side.

I’m using this for running webpack with the watch option inside a container, on a project which has 13444 files. After I change 1 file, which triggers webpack, the rebuild takes ~20-25s with a Docker volume. With this hack the same scenario takes ~5s.

I’m also using this for big WordPress projects, and the webserver is blazingly fast with the unison sync. Setting it up is quite easy, so just try it for your scenario.

The first sync just takes some time.

These are from my example project and the first sync takes about 10 seconds:

$ du -sh .                          
1.2G
$ find . -type f | wc -l            
20467

(Rompalmas) #114

Thanks man you saved my life :smiley:

This is obviously not a solution to the problem highlighted in this topic, but an awesome workaround for me until they fix it!


(Aaron Jensen) #115

EDIT: I was doing something wrong here; I still had an osxfs mount as well (my config spanned two files, and the first declared mount won out).

@onnimonni unless I’m missing something, mounted volumes, even if synced with unison, are just as slow. I’ve got unison set up in the way you detail for docker-compose and:

# time dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 27.5858 s, 3.7 MB/s

real    0m27.589s
user    0m0.300s
sys     0m2.370s

Things are definitely fast on non-volumes, but I don’t know how to share a non-volume with another container. Your example here doesn’t even actually create/expose a volume: https://github.com/onnimonni/docker-unison#docker

Am I missing something?


(Dave Wikoff) #116

@aaronjensen What directory are you trying to sync? I haven’t used that Docker container yet, but make sure you DON’T use the default osxfs volumes at all. It really looks like you’re still using them. Unison does not use mounts or volumes; it’s essentially (I’m over-simplifying) a glorified bi-directional rsync.
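A quick way to check for that, run inside the container, is to inspect the mount table: if your project directory shows up backed by osxfs or fuse, the bind mount is still in play and unison is not actually being exercised. The filesystem names here are assumptions about how the mount is labeled:

```shell
# Inside the container: list any osxfs/fuse-backed mounts. If your project
# directory appears here, you are still on the slow path regardless of unison.
mount | grep -E 'osxfs|fuse' || echo "no osxfs/fuse mounts found"
```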