It's now killing my computer's disk space. I'm in the midst of ingesting 537 CSV files (10.7 GB total) into Elasticsearch.
C:\Users\ethan\AppData\Local\Docker\wsl\data
I'm not sure why it became so big.
I did a complete refresh of everything.
So the root cause is my WSL data hard-disk image file. It grows alarmingly, and I'm not sure why. Even now, when I haven't volume-mounted my data, the file is already about 8.5 GB.
It probably contains the downloaded Docker images and the files created during container runs.
I don't know about Docker Desktop, but in the Linux CLI you can do docker prune to clean up.
Why would a volume mount increase the disk size? Assuming you mean mounting files from the host.
Apologies for the confusion.
Yes, I meant mounting files from the host.
The question still remains. Why did you say "even now"? Would you expect it to change the disk size? The virtual machine image could have a minimum size. You mentioned volume mounting but not whether you started a container, so I guess you did. Depending on what containers you ran, as @bluepuma77 wrote, images could need space as well. As for the previously much bigger size, you could have a lot of unused images, even "dangling" images without tags, you could have build cache and so on. Once a VM image's size increases, it usually never decreases automatically. On Mac I can set a maximum disk size; I don't remember how it works on Windows.
@bluepuma77 also mentioned “docker prune”, which is actually “docker system prune”, but it indeed can reclaim space. You can try it and see if that changes anything; I wouldn't be surprised if it didn't.
“docker system df” can show you how much space Docker uses.
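In a terminal that looks roughly like this (only a sketch; note that the -a and --volumes flags also delete images and volumes you may still want to keep):

```
docker system df                   # show space used by images, containers, volumes and build cache
docker system prune                # remove stopped containers, unused networks, dangling images and build cache
docker system prune -a --volumes   # additionally remove all unused images and unused volumes (careful: deletes data)
```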
```
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          3         2         3.057GB   766.5MB (25%)
Containers      3         2         1.32GB    0B (0%)
Local Volumes   3         3         2.02GB    0B (0%)
Build Cache     0         0         0B        0B
```
My issue is simple: why is my “C:\Users\ethan\AppData\Local\Docker\wsl\data\ext4.vhdx” a huge 25.6 GB when I'm merely ingesting 492 MB worth of CSV files?
I checked my configs in logstash.yml and logstash.conf.
I bind-mounted these files into the Logstash container so the data can be piped in.
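For context, a bind-mounted Logstash container is typically started with something like this (the paths and version tag are illustrative placeholders, not my exact command):

```
# pipeline config and the CSV folder are mounted read-only from the host
docker run -d --name logstash \
  -v /path/to/logstash.yml:/usr/share/logstash/config/logstash.yml:ro \
  -v /path/to/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro \
  -v /path/to/csv:/data/csv:ro \
  docker.elastic.co/logstash/logstash:<version>
```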
When I try to use diskpart and run compact vdisk, I get:
DISKPART> compact vdisk
DiskPart has encountered an error: The process cannot access the file because it is being used by another process.
See the System Event Log for more information.
I have tried shutting down Docker and killing off vmmem.wsl so that I can use diskpart to compact the vdisk. I could cut the file size to 8.5GB.
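The rough sequence I used was something like this (with Docker Desktop quit first, and using the vhdx path from above):

```
wsl --shutdown

diskpart
DISKPART> select vdisk file="C:\Users\ethan\AppData\Local\Docker\wsl\data\ext4.vhdx"
DISKPART> attach vdisk readonly
DISKPART> compact vdisk
DISKPART> detach vdisk
DISKPART> exit
```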
However, when Docker restarts, the file grows again.
This is not good. I'm not sure what else to troubleshoot.
I think we understood what you thought the problem was, but without also knowing the size of your Docker data (which you have now shared) we can't rule out that something else takes up the space which you haven't thought of.
I don't usually use Docker Desktop on Windows, so I don't know exactly how the virtual disk is created, what its initial size is, or how it grows. You could try to run a shell in the virtual machine and find out whether there is anything that actually requires that space. Maybe not now, but even if there was something, I don't think the disk size will decrease, but I could be wrong.
Docker Extensions also run in the virtual machine as Docker containers, but you normally can't see them in the list of containers and images without manually enabling it.
So I can't speak on behalf of @bluepuma77, but I have to say that my short answer is that I don't know. It would require more investigation. I can't promise, but I will try to check my Windows machine later to see if I can find out how it works.
I'm just on a single laptop running Windows 11.
I'm not using any virtual machines right now.
Thanks for helping to check things out. I know it's something to do with how WSL2 reserves excessive space in the ext4.vhdx hard-disk image file.
As described, I tried to search around for how to trim / deflate it, but the problem doesn't go away once you start piping in data again.
I also feel a little bit of “I don't even know what I don't know” about this problem.
For the time being, I am working on improving the mappings in Elasticsearch. I'm getting a lot of columns indexed as text instead of numeric values.
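For example, numeric columns can be forced to a numeric type with an index template applied before ingesting; something along these lines (the index pattern and field names are just placeholders for illustration):

```
curl -X PUT "http://localhost:9200/_index_template/csv-data" \
  -H "Content-Type: application/json" \
  -d '{
        "index_patterns": ["csv-*"],
        "template": {
          "mappings": {
            "properties": {
              "amount":   { "type": "double"  },
              "quantity": { "type": "integer" }
            }
          }
        }
      }'
```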
WSL2 is a virtual machine. There are no Linux containers on Windows without a virtual machine, and Docker Desktop always uses a virtual machine, even on Linux. Otherwise you wouldn't need a virtual disk, would you?
At 1 million rows ingested, the disk grew a little more slowly, from 10.5 GB → 13.5 GB.
I made some improvements to streamline how the mapping is done on the columns.
The disk is still growing… but it does feel like it's growing a little more slowly.
I'm not sure how much more I can improve things to keep the disk growth minimal.
Right now I am only ingesting 30 days of CSV files, each day about 137,000 rows of data.
Eventually I want to ingest 18 months' worth of CSV files at 137,000 rows per day.
I only have about 200 GB of free space to work with, and I don't want to blow my disk-space budget.
I might even have more CSV files to ingest in future, from other data sets.
PS: My file ingestion stopped midway, but the disk is still growing, now at 25 GB! This is astonishing!
Now I have had a little more time to think about this issue. At the beginning I didn't think about which containers you ran, but you have now mentioned multiple times that you use Elasticsearch and Kibana. Elasticsearch could use a lot of space. Depending on how you run it, the data would be stored on the container's filesystem or on a volume, but both would be in the virtual machine unless you bind mount a folder from the host, which I wouldn't do for Elasticsearch (Windows vs Linux filesystem).
I tried to reproduce any unexpected behavior, but so far I couldn't. I checked docker system df (502 in the output of docker image ls). Small differences could be caused by anything in the virtual machine. Docker could also save temporary files. Again, when the disk is written and the operating system sees that it has a large disk, it could write anywhere on the virtual disk, so I guess it could write on a part of the virtual disk where nothing was written yet, and so the disk size grows. If someone knows that it doesn't work that way, feel free to correct me.
I tested only with images, because that takes hundreds of megabytes, which is good for testing, and I don't know what you do exactly.
When your disk size was 25GB, did you check the output of docker system df again? Any data that Elasticsearch writes during indexing can grow the size of the volume or container. And if I remember correctly, Elasticsearch at least temporarily requires more space than the size of the data it needs to store, when it merges small segment files into one big file while the old files are still there.
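If you want to see how much of that space is actually Elasticsearch index data, something like this should show it (assuming Elasticsearch listens on localhost:9200 and security is disabled; otherwise add credentials):

```
curl "http://localhost:9200/_cat/indices?v&h=index,docs.count,store.size"   # size per index
curl "http://localhost:9200/_cat/allocation?v"                              # disk used/available per node
```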
Unfortunately I don't have more time to investigate, but one thing is sure now: the disk size will not decrease automatically without a Docker Desktop reset, which recreates the VM from the original base disk.
The file sizes aren't too bad, especially for Elasticsearch:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 3 3 3.057GB 72.79MB (2%)
Containers 4 3 1.32GB 1.318GB (99%)
Local Volumes 3 3 1.803GB 0B (0%)
Build Cache 0 0 0B 0B
Please, use code blocks so we can see the indentation of the output you share.
```
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          3         3         3.057GB   72.79MB (2%)
Containers      4         3         1.32GB    1.318GB (99%)
Local Volumes   3         3         1.803GB   0B (0%)
Build Cache     0         0         0B        0B
```
I edited your comment.
I assume you ran the command again and got the same sizes as before, so your disk size is 25GB and the extra size is not part of the images, containers or volumes. Since we are not there, it is hard to find out more about your environment. I'll share the command I use to get into the virtual machine's shell, in case you want to continue the investigation.
```
docker run --rm -it --privileged --pid host ubuntu nsenter --all -t 1 sh
```
You could also try to ask Docker support if you have a subscription. I only know what I have already found out on systems I actually use, with software I use in containers, but I use macOS. I can set a disk limit there, but I have never used Docker Desktop for Elasticsearch.
On my Windows machine the original size of my virtual disk before resetting was about 23GB. I had multiple images including Windows images (not in the Linux VM of course) but I didn’t check the actual size in the VM before the reset.
I’m sorry, but I’m out of ideas for now.
One issue I have when posting in this forum is the one-image/screenshot restriction. How can I get this restriction lifted?
If I could post more screenshots, it would go a long way.
I don't have a Docker subscription.
My environment:
3 Docker containers: Elasticsearch, Kibana, Logstash, running on localhost on my laptop.
I have just run the shell command you shared.
Now what?
What screenshots would you like to share? Your last screenshot was about a shell command I shared and you got a prompt. That really doesn't require a screenshot, and screenshots are hard to read sometimes.
Anything you want. If I could tell you what to do, I would, but I can give you some tips. I used the “df -h” command to get the internal disk and mount sizes. I used “du -sh” and “du -shx” to find out the size of the contents of a folder. The Docker data folder is at “/var/lib/docker”. The containerd containers for Docker's own components are in /containers, but you can find folders in “/var/lib” as well. There is also the ctr command for containerd:
```
ctr -n services.linuxkit c ls
```
This lists containers, for example. I made a video about this, and there is a blog post about it here:
Interestingly, I saw that the container of the Docker daemon on Windows is “01-docker”, but it is “02-docker” on macOS. I knew that the number should mean something, but I still don't know what exactly.
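Putting those tips together, a minimal check inside the VM shell could look like this (just a sketch):

```
df -h                           # disk and mount sizes inside the VM
du -sh /var/lib/docker          # total size of the Docker data folder
du -shx /var/lib/docker/*       # size per subfolder (overlay2, volumes, containers, ...)
ctr -n services.linuxkit c ls   # containerd containers used by Docker Desktop itself
```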
In the Elastic forum, I can just paste screenshots as I type my post on the fly. I post screenshots to best visualise my workspace and file-explorer context.
```
Filesystem Size Used Available Use% Mounted on
/dev/sdd 1006.9G 61.4M 955.6G 0% /parent-distro
none 3.8G 0 3.8G 0% /parent-distro/dev
none 3.9G 0 3.9G 0% /parent-distro/dev/shm
tmpfs 3.9G 0 3.9G 0% /parent-distro/sys/fs/cgroup
none 3.9G 8.0K 3.9G 0% /parent-distro/run
none 3.9G 0 3.9G 0% /parent-distro/run/lock
none 3.9G 0 3.9G 0% /parent-distro/run/shm
none 3.9G 0 3.9G 0% /parent-distro/run/user
none 475.8G 280.8G 195.0G 59% /parent-distro/usr/lib/wsl/drivers
none 3.9G 0 3.9G 0% /parent-distro/usr/lib/wsl/lib
none 3.9G 4.0K 3.9G 0% /parent-distro/mnt/host/wsl
/dev/sde 1006.9G 17.4G 938.2G 2% /parent-distro/mnt/host/wsl/docker-desktop-data/isocache
none 3.9G 8.0K 3.9G 0% /parent-distro/mnt/host/wsl/docker-desktop/shared-sockets/guest-services
none 3.9G 8.0K 3.9G 0% /parent-distro/mnt/host/wsl/docker-desktop/shared-sockets/host-services
/dev/sdd 1006.9G 61.4M 955.6G 0% /parent-distro/mnt/host/wsl/docker-desktop/docker-desktop-user-distro
/dev/loop0 446.7M 446.7M 0 100% /parent-distro/mnt/host/wsl/docker-desktop/cli-tools
none 3.9G 80.0K 3.9G 0% /parent-distro/mnt/host/wslg
/dev/sdd 1006.9G 61.4M 955.6G 0% /parent-distro/mnt/host/wslg/distro
none 3.9G 76.0K 3.9G 0% /parent-distro/mnt/host/wslg/versions.txt
none 3.9G 76.0K 3.9G 0% /parent-distro/mnt/host/wslg/doc
none 3.9G 80.0K 3.9G 0% /parent-distro/tmp/.X11-unix
drvfs 475.8G 280.8G 195.0G 59% /parent-distro/mnt/host/c
drvfs 476.9G 46.1G 430.8G 10% /parent-distro/mnt/host/d
/dev/loop1 147.5M 147.5M 0 100% /
/dev/loop2 717.6M 717.6M 0 100% /services.iso
/dev/loop2 717.6M 717.6M 0 100% /containers/services
/dev/sdd 1006.9G 61.4M 955.6G 0% /mnt
none 3.9G 4.0K 3.9G 0% /mnt/host/wsl
/dev/sde 1006.9G 17.4G 938.2G 2% /mnt/host/wsl/docker-desktop-data/isocache
none 3.9G 8.0K 3.9G 0% /mnt/host/wsl/docker-desktop/shared-sockets/guest-services
none 3.9G 8.0K 3.9G 0% /mnt/host/wsl/docker-desktop/shared-sockets/host-services
/dev/sdd 1006.9G 61.4M 955.6G 0% /mnt/host/wsl/docker-desktop/docker-desktop-user-distro
/dev/loop0 446.7M 446.7M 0 100% /mnt/host/wsl/docker-desktop/cli-tools
none 3.9G 80.0K 3.9G 0% /mnt/host/wslg
/dev/sdd 1006.9G 61.4M 955.6G 0% /mnt/host/wslg/distro
none 3.9G 76.0K 3.9G 0% /mnt/host/wslg/versions.txt
none 3.9G 76.0K 3.9G 0% /mnt/host/wslg/doc
drvfs 475.8G 280.8G 195.0G 59% /mnt/host/c
drvfs 476.9G 46.1G 430.8G 10% /mnt/host/d
/dev/sdd 1006.9G 61.4M 955.6G 0% /parent-distro/mnt
none 3.9G 4.0K 3.9G 0% /parent-distro/mnt/host/wsl
/dev/sde 1006.9G 17.4G 938.2G 2% /parent-distro/mnt/host/wsl/docker-desktop-data/isocache
none 3.9G 8.0K 3.9G 0% /parent-distro/mnt/host/wsl/docker-desktop/shared-sockets/guest-services
none 3.9G 8.0K 3.9G 0% /parent-distro/mnt/host/wsl/docker-desktop/shared-sockets/host-services
/dev/sdd 1006.9G 61.4M 955.6G 0% /parent-distro/mnt/host/wsl/docker-desktop/docker-desktop-user-distro
/dev/loop0 446.7M 446.7M 0 100% /parent-distro/mnt/host/wsl/docker-desktop/cli-tools
none 3.9G 80.0K 3.9G 0% /parent-distro/mnt/host/wslg
/dev/sdd 1006.9G 61.4M 955.6G 0% /parent-distro/mnt/host/wslg/distro
none 3.9G 76.0K 3.9G 0% /parent-distro/mnt/host/wslg/versions.txt
none 3.9G 76.0K 3.9G 0% /parent-distro/mnt/host/wslg/doc
drvfs 475.8G 280.8G 195.0G 59% /parent-distro/mnt/host/c
drvfs 476.9G 46.1G 430.8G 10% /parent-distro/mnt/host/d
tmpfs 788.8M 504.0K 788.3M 0% /run
tmpfs 788.8M 0 788.8M 0% /tmp
tmpfs 3.9G 0 3.9G 0% /var
/dev/sde 1006.9G 17.4G 938.2G 2% /var/lib
/dev/sdd 1006.9G 61.4M 955.6G 0% /usr/lib/wsl
none 475.8G 280.8G 195.0G 59% /usr/lib/wsl/drivers
none 3.9G 0 3.9G 0% /usr/lib/wsl/lib
/dev/sdd 1006.9G 61.4M 955.6G 0% /run/desktop/mnt
none 3.9G 4.0K 3.9G 0% /run/desktop/mnt/host/wsl
/dev/sde 1006.9G 17.4G 938.2G 2% /run/desktop/mnt/host/wsl/docker-desktop-data/isocache
none 3.9G 8.0K 3.9G 0% /run/desktop/mnt/host/wsl/docker-desktop/shared-sockets/guest-services
none 3.9G 8.0K 3.9G 0% /run/desktop/mnt/host/wsl/docker-desktop/shared-sockets/host-services
/dev/sdd 1006.9G 61.4M 955.6G 0% /run/desktop/mnt/host/wsl/docker-desktop/docker-desktop-user-distro
/dev/loop0 446.7M 446.7M 0 100% /run/desktop/mnt/host/wsl/docker-desktop/cli-tools
none 3.9G 80.0K 3.9G 0% /run/desktop/mnt/host/wslg
/dev/sdd 1006.9G 61.4M 955.6G 0% /run/desktop/mnt/host/wslg/distro
none 3.9G 76.0K 3.9G 0% /run/desktop/mnt/host/wslg/versions.txt
none 3.9G 76.0K 3.9G 0% /run/desktop/mnt/host/wslg/doc
drvfs 475.8G 280.8G 195.0G 59% /run/desktop/mnt/host/c
drvfs 476.9G 46.1G 430.8G 10% /run/desktop/mnt/host/d
none 3.8G 0 3.8G 0% /dev
none 3.9G 0 3.9G 0% /dev/shm
cgroup 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 4.2M 3.8G 0% /containers/services/01-docker/tmp
overlay 3.9G 4.2M 3.8G 0% /containers/services/01-docker/rootfs
/dev/sde 1006.9G 17.4G 938.2G 2% /var/lib/wasm/runtimes
tmpfs 3.9G 0 3.9G 0% /containers/services/acpid/tmp
overlay 3.9G 0 3.9G 0% /containers/services/acpid/rootfs
tmpfs 3.9G 0 3.9G 0% /containers/services/artifactory/tmp
overlay 3.9G 0 3.9G 0% /containers/services/artifactory/rootfs
tmpfs 3.9G 0 3.9G 0% /containers/services/binfmt/tmp
overlay 3.9G 0 3.9G 0% /containers/services/binfmt/rootfs
tmpfs 3.9G 32.0K 3.9G 0% /containers/services/container-filesystem/tmp
overlay 3.9G 32.0K 3.9G 0% /containers/services/container-filesystem/rootfs
tmpfs 3.9G 0 3.9G 0% /containers/services/devenv-service/tmp
overlay 3.9G 0 3.9G 0% /containers/services/devenv-service/rootfs
tmpfs 3.9G 4.0K 3.9G 0% /containers/services/diagnosticsd/tmp
overlay 3.9G 4.0K 3.9G 0% /containers/services/diagnosticsd/rootfs
tmpfs 3.9G 0 3.9G 0% /containers/services/oom-tracer/tmp
overlay 3.9G 0 3.9G 0% /containers/services/oom-tracer/rootfs
tmpfs 3.9G 0 3.9G 0% /containers/services/otel-collector/tmp
overlay 3.9G 0 3.9G 0% /containers/services/otel-collector/rootfs
tmpfs 3.9G 4.0K 3.9G 0% /containers/services/sntpc/tmp
overlay 3.9G 4.0K 3.9G 0% /containers/services/sntpc/rootfs
tmpfs 3.9G 0 3.9G 0% /containers/services/socks/tmp
overlay 3.9G 0 3.9G 0% /containers/services/socks/rootfs
tmpfs 3.9G 0 3.9G 0% /containers/services/trim-after-delete/tmp
overlay 3.9G 0 3.9G 0% /containers/services/trim-after-delete/rootfs
tmpfs 3.9G 16.0K 3.9G 0% /containers/services/volume-contents/tmp
overlay 3.9G 16.0K 3.9G 0% /containers/services/volume-contents/rootfs
tmpfs 3.9G 0 3.9G 0% /containers/services/vpnkit-forwarder/tmp
overlay 3.9G 0 3.9G 0% /containers/services/vpnkit-forwarder/rootfs
overlay 1006.9G 17.4G 938.2G 2% /var/lib/docker/overlay2/96597e8a989939e9303824ec4249ed0602f8db26c19f3cf110fc6cb19875309f/merged
overlay 1006.9G 17.4G 938.2G 2% /var/lib/docker/overlay2/7d06f8c03f3e1f4c1660589f0111f465588b30f714b891aeb1f12a11c03633ed/merged
drvfs 475.8G 280.8G 195.0G 59% /run/desktop/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu/a3daf95f44fa2a75e75bc1f4e23425c9196511ddf86d3c26b202a145d894a616
drvfs 475.8G 280.8G 195.0G 59% /mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu/a3daf95f44fa2a75e75bc1f4e23425c9196511ddf86d3c26b202a145d894a616
drvfs 475.8G 280.8G 195.0G 59% /parent-distro/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu/a3daf95f44fa2a75e75bc1f4e23425c9196511ddf86d3c26b202a145d894a616
drvfs 475.8G 280.8G 195.0G 59% /parent-distro/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu/a3daf95f44fa2a75e75bc1f4e23425c9196511ddf86d3c26b202a145d894a616
drvfs 475.8G 280.8G 195.0G 59% /run/desktop/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu/a89f45e6c9b16d318769a6dc4719bcf164c21bca5fb9d22aebb115ec0b54d341
drvfs 475.8G 280.8G 195.0G 59% /mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu/a89f45e6c9b16d318769a6dc4719bcf164c21bca5fb9d22aebb115ec0b54d341
drvfs 475.8G 280.8G 195.0G 59% /parent-distro/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu/a89f45e6c9b16d318769a6dc4719bcf164c21bca5fb9d22aebb115ec0b54d341
drvfs 475.8G 280.8G 195.0G 59% /parent-distro/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu/a89f45e6c9b16d318769a6dc4719bcf164c21bca5fb9d22aebb115ec0b54d341
overlay 1006.9G 17.4G 938.2G 2% /var/lib/docker/overlay2/b0746dfef5c78a95157197702351f9f82d4f0efc7c3df0f74af807c565c05bdf/merged
overlay 1006.9G 17.4G 938.2G 2% /var/lib/docker/overlay2/0e48bd4c4e683e7374a585c3bf932712f79561d8f2a061591734859510b8ea2b/merged
```
```
du: can't open './mnt/host/c/$Recycle.Bin/S-1-5-18': Permission denied
du: can't open './mnt/host/c/$Recycle.Bin/S-1-5-21-3501046383-4099552786-1745244351-500': Permission denied
du: ./mnt/host/c/hiberfil.sys: Permission denied
```
```
CONTAINER IMAGE RUNTIME
01-docker - io.containerd.runc.v2
acpid - io.containerd.runc.v2
artifactory - io.containerd.runc.v2
binfmt - io.containerd.runc.v2
container-filesystem - io.containerd.runc.v2
devenv-service - io.containerd.runc.v2
diagnosticsd - io.containerd.runc.v2
oom-tracer - io.containerd.runc.v2
otel-collector - io.containerd.runc.v2
sntpc - io.containerd.runc.v2
socks - io.containerd.runc.v2
trim-after-delete - io.containerd.runc.v2
volume-contents - io.containerd.runc.v2
vpnkit-forwarder - io.containerd.runc.v2
```
```
/ # du -shx
```
I slept for 8 hours and didn't do anything; the data finished piping in shortly after I went to sleep.
After about 7 hours of inactivity, my ext4.vhdx still continued to grow, and is now 44GB!
I am dumbfounded.
This is disappointing - this is a known issue from 2019.
My root-cause file is a data-partition file. When I use wslcompact on docker-desktop-data I don't seem to get much help.
```
PS C:\Users\ethan> wslcompact docker-desktop-data
WslCompact v8.7 2023.03.01
(C) 2023 Oscar Lopez
wslcompact -h for help. For more information visit: https://github.com/okibcn/wslcompact
Distro's name: docker-desktop-data
Image file: C:\Users\ethan\AppData\Local\Docker\wsl\data\ext4.vhdx
Current size: 1388 MB
The image is not a WSL OS, but a data partition. No size estimation is available at this time.
```
As you have already figured out, the behavior is caused by how WSL2 handles the vhdx file. It is a dynamically growing virtual hard disk file with a maximum size of 256 GB by default.
In the past I have shrunk vhdx files by following a solution like this: How to Shrink a WSL2 Virtual Disk – Stephen Rees-Carter
It could actually be a useful feature request to have a button in Docker Desktop that shrinks the used vhdx file.
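The procedure in guides like that one is roughly the following (a sketch; Optimize-VHD needs the Hyper-V PowerShell module and an elevated prompt, otherwise the diskpart compact vdisk route shown earlier in the thread works too):

```
wsl --shutdown
Optimize-VHD -Path "$env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx" -Mode Full
```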