One of the more common problems for developers who use Windows is that projects with a Docker configuration run really slowly, to the point where a single browser request can take 30-60 seconds to complete. This is obviously a problem, one that slows down project progress and generally makes developers' lives more difficult.
Why is Docker so slow? The root of the issue is that Windows 10 is (was) using WSL (Windows Subsystem for Linux), which is a layer between Windows and Linux. Communication between these two (Hard Drive operations) can be quite slow.
It is odd that the slowness occurs on a bare-metal Ubuntu installation as well. Please share how you installed Docker and the exact commands and/or image you used to create your containers. Additionally, please add the output of docker info.
In case you use bind volumes (-v /host/path:/container/path), check whether named volumes (-v volume-name:/container/path) make a difference. Personally, I have never experienced Docker being slow.
I am on Linux now. I tried many different configurations, but nothing changes. The same code works very fast on AWS, but locally it runs very, very slowly with Docker. I really don’t know what else I can do; I have already tried many things.
Well, it seems that my post was hidden again, a shame. I sent you all the details you asked me. About the “Disk image location” is it important? Anyway, before I changed it, I was using the default one and it was running very slow as well. Thanks
Any plans on sharing the exact commands of what you tried? Judging by the shared information, the only thing we know is that containers are running slowly on Windows and Linux, even though the machines they are running on are not slow.
Make sure you format your post properly so links in the code will not be interpreted and made clickable.
Since you did that on Stack Overflow, I would think that you did it here too, so the forum should not have banned your post.
The only reason I can think of to make Docker slow on Linux without using Docker Desktop is the fact that it uses a special filesystem. Usually overlay2. When you run Xampp on your machine, it will just use the filesystem of your host directly. If Docker data root is on a local HDD or SSD, it should not matter. In case you mount your data root from a network filesystem or some other special drive which is not compatible with overlay2 that can make Docker slower if it works at all.
In case of Docker Desktop, your local files would be mounted into the virtual machine using VirtioFS, so those files would not be used directly. I think VirtioFS should be faster than other solutions, but I have never used Docker Desktop for Linux in a situation in which I would notice the speed difference. I use Docker Desktop for Mac frequently (not for write-intensive or read-intensive applications) and it works fine. So in case of Docker Desktop you can take @meyay’s advice and use volumes instead of bind mounts whenever possible. In case of Docker CE running directly on the host on a local disk, it should be as fast as running the application without containers, since the process inside the container actually runs on your host; it is just isolated from the rest of the environment.
Since you shared your docker info on Stack Overflow showing that you use Docker Desktop for Linux, you will most likely get answers telling you the cause is that Docker Desktop uses a virtual machine to run your containers, so I recommend sharing the result of docker info run directly on your host, not on Docker Desktop.
Hi, thanks for your answer.
Before I tried the desktop version I worked with Docker CE and I got the same results, very slow…
On the other hand, I already use volumes for MySQL but not for the app, because I need to be able to sync with my host machine in order to work with the code. That’s why I use “host volumes” (I think it is the same as bind, right?). I will try today on a Mac and check the differences.
update: you gave me an idea, mate. I am moving out of the host volume (to named volumes) all the code that does not need to be bound to the host, for example vendors and node_modules. We’ll see what happens.
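For reference, the idea above can be sketched in a Compose file: bind-mount the source tree, then overlay the dependency directories with named volumes so they stay on the container’s native filesystem. Service name, image, and paths here are hypothetical placeholders, not taken from the actual project:

```yaml
services:
  app:
    image: php:8-apache
    volumes:
      - ./:/var/www/html                          # bind mount: source stays in sync with the host
      - vendor:/var/www/html/vendor               # named volume masks the bind mount for vendors
      - node_modules:/var/www/html/node_modules   # same trick for node_modules

volumes:
  vendor:
  node_modules:
```

Because the named volumes are mounted inside the bind-mounted path, reads and writes to those two directories never cross the host/VM file-sharing boundary, which is usually where the slowdown comes from.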
Yes, I got that. That’s why I say you should share the output of Docker CE, so nobody will say that it is slow because of the Desktop.
Probably, but I have never heard it called that in the case of Docker. I searched for it now, and it looks like many articles refer to bind mounts as host volumes, but I don’t think it is mentioned this way in the documentation. Kubernetes has the “hostPath” volume, meaning the same.
Just to be sure, you don’t store /var/lib/docker or the mounted folders on a network filesystem or any special device, right?
Can you tell us anything about the disk on which you store the data? SSD, HDD? Frankly I am not sure what I should ask, so I am trying to ask for any information that at least can give us an idea.
Is the container slow all the time or is a page faster when you load it the second time?
Using process isolation can’t slow PHP down, so it must be something with the filesystem or some special hardware that we don’t know about, so we can’t even ask about it.
I would also make sure not to save any large file in the Docker image that will be changed after running the container. When a process changes a file that is on an image filesystem layer, it first copies the whole file up to the container’s writable layer, which can take time the first time.
I used PHP (Symfony Framework) in Docker containers and I never had any problem like you have.
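To rule out raw disk throughput as the culprit, a quick (and crude) write test can be run on the partition that holds the Docker data root. The path below is a placeholder; `conv=fdatasync` forces the data to disk so the page cache does not distort the number:

```shell
# Write 200 MB to the partition under test and flush it to disk;
# dd prints the measured throughput when it finishes.
dd if=/dev/zero of=/path/on/that/partition/ddtest bs=1M count=200 conv=fdatasync
rm /path/on/that/partition/ddtest
```

A healthy SSD should report hundreds of MB/s here; if the host itself is slow at this level, no container tuning will help.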
Docker is on an SSD partition; it is not a network filesystem or any special device.
size: 923 GB — 886 GB free (4.0% full)
Contents: Ext4 (version 1.0) — Mounted at /media/xulin/home
Partition Type: Linux Filesystem (System)
Is the container slow all the time or is a page faster when you load it the second time?
Same time; the cache does not help. That’s why I think it is more related to the MySQL container, but it is kind of weird, because this MySQL container uses a volume and I am using the official mysql 5.7 image. In the dashboard it does not matter if it is the first or the tenth load; the pages always take around 20-30 seconds.
I am working on a Cakephp 3 application.
Note: I realized that the frontend is loading a bit faster (I could say near to normal) than the dashboard, and this is why I started thinking it is related to the MySQL container, because the dashboard has more queries, results, etc. And the frontend is React-based (with fewer MySQL queries).
Here is the docker info again, just in case (I moved Docker to a bigger partition on the same SSD).
Or maybe it is the speed of PHP. What happens when you disable XDebug? If Xampp is faster even with XDebug enabled, it is still not normal to be so slow in Docker, but it could be because of some extension, or the XDebug version combined with the overlay filesystem.
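For reference, with XDebug 3 the extension can be switched off without uninstalling it, either in php.ini or per run via an environment variable (XDebug 2 used different settings, such as xdebug.remote_enable, so this assumes version 3):

```ini
; php.ini – keep the extension loaded but disable all of its features
xdebug.mode = off
```

The same can be done temporarily with `XDEBUG_MODE=off php script.php`, since the environment variable overrides the ini setting, which makes it easy to compare timings with and without the debugger.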
Thanks for your help, mate. I think XDebug was the problem, at least the big one: the loading time went from 25.15s to 12.43s (although on AWS (EB) it is half that, around 6s), and this is a big improvement!!! There are possibly more things I can do to improve the speed, but this is a big change. Thanks again.
Same issue. I run MySQL on Docker, and it worked fine for the last few months. Yesterday I found that my Docker data had been deleted, and I updated to the new Docker Desktop for Windows. Now when I run MySQL on an SSD, SQL write performance is OK, but when I move the Docker data to a mechanical hard drive, SQL writes are very slow. Importing a 2 MB SQL file into MySQL takes half an hour. I don’t know why.
I know about multiple confirmed and suspected issues related to Docker Desktop, which is unfortunate, but I find it hard to believe that it would let you lose data. This is something that you could report in a separate topic or directly on GitHub:
Before you do that, make sure you have only one Docker instance, only Docker Desktop or only Docker CE, because some users thought they lost volumes when they had actually created the volume in another Docker context. Another reason could be that you stored the data on a container’s filesystem, not on a volume, and when the container was recreated, it did not have the data.
Of course, losing data is not impossible, so if you have important data, it is recommended to create a backup before upgrading Docker Desktop. Even if you do that, I think this issue could get a higher priority than the fact that Docker Desktop is very slow for some people. Unless it is extremely slow for everyone, but I have not had that experience yet. Due to the virtualization and communication between the host and the VM it can be slow, but it should not be extremely slow. Although it depends on the exact case.
Can you share a demo application that does not contain any sensitive data but demonstrates how slow a query is so I can try it on my machine?
This tool was created to give Windows and macOS users a Linux base required for containers.
I guess if you are coming from one of those OSes, it makes sense to do so to avoid retraining muscle memory. But trust me, knowing how to do this directly on the CLI will prove to be much more performant and won’t decrease the lifetime of the cooling fans on your computing device, as I’m sure @r0bertinski can attest.
Having said all that, maybe you could try Rancher Desktop? I haven’t used it myself but it might be more performant - I hope.