Hello everyone,
I would like to tell you a bit about the status of this issue, how/why it exists, what we are doing about it, what you can do to help us, and what you can expect in the future.
An apology
First, I’d like to apologize that we have been fairly quiet about this issue. Shared file system performance is a top priority for the Docker for Mac file system team but we will always prioritize fixing severe defects over performance improvement, in order to deliver high quality software to you.
Understanding performance
Perhaps the most important thing to understand is that shared file system performance is multi-dimensional. This means that, depending on your workload, you may experience exceptional, adequate, or poor performance with osxfs, the file system server in Docker for Mac. File system APIs are very wide (20-40 message types) with many intricate semantics involving on-disk state, in-memory cache state, and concurrent access by multiple processes. Additionally, osxfs integrates a mapping between OS X’s FSEvents API and Linux’s inotify API which is implemented inside of the file system itself, complicating matters further (cache behavior in particular).
At the highest level, there are two dimensions to file system performance: throughput (read/write IO) and latency (roundtrip time). In a traditional file system on a modern SSD, applications can generally expect throughput of a few GB/s. With large sequential IO operations, osxfs can achieve throughput of around 250 MB/s which, while not native speed, will not be the bottleneck for most applications which perform acceptably on HDDs.
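As a rough illustration of what a throughput measurement looks like, here is a minimal sketch (not part of osxfs or Docker for Mac) that times a large sequential write from inside a container; the /data mount point and the 256 MiB size are assumptions chosen for the example.

```python
# throughput_probe.py -- rough sequential-write throughput measurement.
# Assumes it runs inside a container with a shared volume mounted at /data
# (hypothetical path); adjust SIZE_MB and the path for your setup.
import os
import time

PATH = "/data/throughput_probe.bin"   # hypothetical shared-volume path
SIZE_MB = 256
CHUNK = b"\0" * (1 << 20)             # 1 MiB per write, so IO stays large and sequential

start = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())              # make sure the data actually crossed the file system
elapsed = time.monotonic() - start

print(f"wrote {SIZE_MB} MiB in {elapsed:.2f}s -> {SIZE_MB / elapsed:.1f} MB/s")
os.remove(PATH)
```

Running the same script against a directory on the container’s own file system (e.g. /tmp) gives a baseline to compare the shared-volume number against.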
Latency is the time it takes for a file system call to complete. For instance, the time between a thread issuing write in a container and resuming with the number of bytes written. With a classical block-based file system, this latency is typically under 10μs (microseconds). With osxfs, latency is presently around 200μs for most operations, or 20x slower. For workloads which demand many sequential roundtrips, this results in significant observable slowdown. To reduce the latency, we need to shorten the data path from a Linux system call to OS X and back again. This requires tuning each component in the data path in turn – some of which require significant engineering effort. Even if we achieve a huge latency reduction of 100μs/roundtrip, we will still “only” see a doubling of performance. This is typical of performance engineering, which requires significant effort to analyze slowdowns and develop optimized components. We know how we can likely halve the roundtrip time but we haven’t implemented those improvements yet (more on this below in What you can do).
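Latency shows up most clearly when you time many tiny, sequential operations instead of one large transfer. The following sketch does exactly that; the path and iteration count are again just placeholder assumptions.

```python
# latency_probe.py -- rough per-operation roundtrip measurement.
# Times many tiny sequential stat() calls on a shared volume (hypothetical
# path /data). On a low-latency local file system each call costs a few
# microseconds; every extra roundtrip shows up directly in the per-call figure.
import os
import time

PATH = "/data"          # hypothetical shared-volume mount point
ITERATIONS = 10_000

start = time.monotonic()
for _ in range(ITERATIONS):
    os.stat(PATH)       # one metadata roundtrip per call (unless cached)
elapsed = time.monotonic() - start

print(f"{ITERATIONS} stat() calls in {elapsed:.2f}s "
      f"-> {elapsed / ITERATIONS * 1e6:.0f} us per call")
```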
There is hope for significant performance improvement in the near term despite these fundamental communication channel properties, which are difficult to overcome (latency in particular). This hope comes in the form of increased caching (storing “recent” values closer to their use to prevent roundtrips completely). The Linux kernel’s VFS layer contains a number of caches which can be used to greatly improve performance by reducing the required communication with the file system. Using this caching comes with a number of trade-offs:
- It requires understanding the cache behavior in detail in order to write correct, stateful functionality on top of those caches.
- It harms the coherence or consistency of the file system as observed from Linux containers and the OS X file system interfaces (a simplified illustration of this trade-off follows the list).
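As a deliberately simplified illustration of the coherence trade-off: a container polling a file’s size sees every OS X-side edit immediately when no caching is used, but with attribute caching enabled some polls could be answered from the Linux cache and briefly report a stale size. The sketch below only performs the container-side polling; the path is hypothetical.

```python
# coherence_probe.py -- poll a shared file's size from inside a container.
# With strict coherence every os.stat() reflects OS X-side edits immediately;
# with attribute caching enabled, some polls may return a stale size until
# the cache entry expires or is invalidated. The path is a hypothetical example.
import os
import time

PATH = "/data/watched.txt"   # hypothetical file also being edited from OS X

for _ in range(20):
    try:
        size = os.stat(PATH).st_size
        print(f"{time.strftime('%H:%M:%S')} size={size}")
    except FileNotFoundError:
        print("file does not exist yet")
    time.sleep(0.5)
```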
What we are doing
We are actively working both on increasing caching (while mitigating the associated issues) and on reducing the file system data path latency. This requires significant analysis of file system traces and speculative development of system improvements to try to address specific performance issues. Perhaps surprisingly, the application workload can have a huge effect on performance. I will describe two different use cases and how their performance differs and suffers due to latency, caching, and coherence:
- The rake example (mentioned upthread) appears to attempt to access 37000+ different files that don’t exist on the shared volume. We can work very hard to speed up all use cases by 2x via latency reduction but this use case will still seem “slow”. The ultimate solution for rake is to use a “negative dcache” that keeps track, in the Linux kernel itself, of the files that do not exist. Unfortunately, even this is not sufficient for the first time rake is run on a shared directory. To handle that case, we actually need to develop a Linux kernel module or patch which negatively caches all directory entries not in a specified set – and this cache must be kept up-to-date in real-time with the OS X file system state even in the presence of missing OS X FSEvents messages, and so must be invalidated if OS X ever reports an event delivery failure. (A sketch of this lookup pattern follows the list.)
- Running ember build in a shared file system results in ember creating many different temporary directories and performing lots of intermediate activity within them. An empty ember project is over 300MB. This usage pattern does not require coherence between Linux and OS X but, because we cannot distinguish this fact at run-time, we maintain coherence during its hundreds of thousands of file system accesses to manipulate temporary state. There is no “correct” solution in this case. Either ember needs to change, the volume mount needs to have coherence properties specified on it somehow, some heuristic needs to be introduced to detect this access pattern and compensate, or the behavior needs to be indicated via, e.g., extended attributes in the OS X file system.
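To make the rake-style pattern concrete, the sketch below mimics a tool probing thousands of candidate paths that mostly do not exist, then reports how much time the failed lookups cost; the directory names and file count are purely hypothetical.

```python
# negative_lookup_probe.py -- mimic a tool probing many non-existent paths.
# Each failed os.stat() on a shared volume is a full roundtrip today; a
# negative dcache would let the Linux kernel answer these lookups locally.
# The search directories and file names below are hypothetical.
import os
import time

SEARCH_DIRS = ["/data/lib", "/data/vendor", "/data/gems"]   # hypothetical
CANDIDATES = [f"lib{n}.rb" for n in range(5000)]            # mostly missing files

misses = 0
start = time.monotonic()
for d in SEARCH_DIRS:
    for name in CANDIDATES:
        try:
            os.stat(os.path.join(d, name))
        except FileNotFoundError:
            misses += 1
elapsed = time.monotonic() - start

total = len(SEARCH_DIRS) * len(CANDIDATES)
print(f"{total} lookups ({misses} misses) in {elapsed:.2f}s "
      f"-> {elapsed / total * 1e6:.0f} us per lookup")
```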
These two examples come from performance use cases contributed by users and they are incredibly helpful in prioritizing aspects of file system performance to improve. I am personally developing statistical file system trace analysis tools to characterize slow-performing workloads more easily in order to decide what to work on next.
Under development, we have:
- A Linux kernel module to reduce data path latency by removing 2 of 7 copies and 2 of 5 context switches
- Increased OS X integration to reduce the latency between the hypervisor and the file system server
- A server-side directory read cache to speed up traversal of large directories
- User-facing file system tracing capabilities so that you can send us recordings of slow workloads for analysis
- A growing performance test suite of real world use cases (more on this below in What you can do)
- Experimental support for using Linux’s inode, writeback, and page caches
- End-user controls to configure the coherence of subsets of cross-OS bind mounts without exposing all of the underlying complexity
What you can do
When you report shared file system performance issues, it is most helpful to include a minimal real-world reproduction test case that demonstrates poor performance.
Without a reproduction, it is very difficult for us to analyze your use case and determine what improvements would speed it up. When you don’t provide a reproduction, one of us has to take the time to figure out the specific software you are using and guess and hope that we have configured it in a typical way or a way that has poor performance. That usually takes 1-4 hours depending on your use case, and once it is done, we must then determine what regular performance is like and what kind of slow-down your use case is experiencing. In some cases, it is not obvious what operation is even slow in your specific development workflow. The additional set-up to reproduce the problem means we have less time to fix bugs, develop analysis tools, or improve performance. So, please include simple, immediate performance issue reproduction test cases. The rake reproduction case by @hirowatari above is a great example (as are other contributions, thank you!) because it has:
- A version-controlled repository so any changes/improvements to the test case can be easily tracked
- A Dockerfile which constructs the exact image to run
- A command-line invocation of how to start the container
- A straight-forward way to measure the performance of the use case (a minimal example follows this list)
- A clear explanation (README) of how to run the test case
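If you are unsure how to provide the measurement step, even a tiny timing wrapper like the following is enough; the command it runs is a placeholder to be replaced by your slow workload.

```python
# measure.py -- tiny timing harness for a reproduction test case.
# Runs the workload being reported (placeholder command below) and prints
# wall-clock time, so the same number can be compared on different setups.
import subprocess
import time

COMMAND = ["rake", "--version"]   # placeholder: replace with the slow workload

start = time.monotonic()
subprocess.run(COMMAND, check=True)
print(f"workload took {time.monotonic() - start:.2f}s")
```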
What you can expect
Docker for Mac will be leaving Beta in the near future. It is unlikely that it will include major shared file system performance improvements before that time. However, we will continue to work toward an optimized shared file system implementation on the Beta channel of Docker for Mac. We have put a note about shared file system performance in the “Known Issues” section of the upcoming release notes and the Docker for Mac documentation.
You can expect some of the performance improvement work mentioned above to reach the Beta channel in the coming release cycles.
In due course, we will open source all of our shared file system components. At that time, we would be very happy to collaborate with you on improving the implementation of osxfs and related software.
Finally, the nitty-gritty details of shared file system performance analysis and improvement will be written up in more detail and published on the Docker blog. Do look out for those articles in the coming months as they will serve as a good jumping-off point for understanding the system and, perhaps, measuring it or contributing to it.
Wrapping Up
I hope this has given you a rough idea of where osxfs performance is and where it’s going. We are treating good performance as a top priority feature of the file system sharing component and we are actively working on improving it through a number of different avenues. The osxfs project started in December 2015 (~7 months ago). Since the first integration into Docker for Mac in February 2016 (~5 months ago), we’ve improved performance by 50x or more for many workloads while achieving nearly complete POSIX compliance and without compromising coherence (it is shared and not simply synced). Of course, in the beginning there was lots of low-hanging fruit and now many of the remaining performance improvements require significant engineering work on custom low-level components.
I’d like to thank you for your understanding as we continue development of the product and work on all dimensions of performance. I’m excited to work with many of you as you report issues and, soon, collaborate on the source code itself.
Thanks for participating in the Docker for Mac Beta!
Best regards,
David Sheets