
Consistently out of disk space in docker beta

Same issue. More than 100G eaten by logs and Docker.qcow2. Relaunching does not help; the logs and the qcow2 file grow at roughly 100M per second. The problem appeared after the last beta update.

Same here:

I've got almost 400 GB of logs :hushed: :

~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log

-rw-r--r-- 1 azize staff 70M May 26 23:54 acpid.log
-rw-r--r-- 1 azize staff 5.6G May 26 23:55 dmesg
-rw------- 1 azize staff 103G May 27 00:08 docker.log
-rw-r--r-- 1 azize staff 75K May 27 00:08 messages
-rw-r--r-- 1 azize staff 804K May 26 14:15 messages.0
-rw-r--r-- 1 azize staff 187K May 26 14:30 proxy-vsockd.log
-rw-r--r-- 1 azize staff 244G May 26 14:30 vsudd.log
-rw-r--r-- 1 azize staff 0B Apr 26 14:14 wtmp

Information

OS X: version 10.11.5 (build: 15F34)
Docker.app: version v1.11.1-beta13
Running diagnostic tests:
[OK] Moby booted
[OK] driver.amd64-linux
[OK] vmnetd
[OK] osxfs
[OK] db
[OK] slirp
[OK] menubar
[OK] environment
[OK] Docker
[OK] VT-x

Same here.

The version:

Works-MacBook-Pro-4:/ soup$ docker version
Client:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Wed Apr 27 00:34:20 2016
OS/Arch: darwin/amd64

Server:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 8b63c77
Built: Mon May 23 20:50:37 2016
OS/Arch: linux/amd64

The files:
Works-MacBook-Pro-4:/ soup$ ls -lah ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/
total 721702816
drwxr-xr-x 10 soup staff 340B May 26 11:11 .
drwxr-xr-x 11 soup staff 374B May 27 15:13 ..
-rw-r--r-- 1 soup staff 946M May 27 15:13 acpid.log
-rw-r--r-- 1 soup staff 120G May 27 15:13 dmesg
-rw------- 1 soup staff 181G May 27 15:16 docker.log
-rw-r--r-- 1 soup staff 184K May 27 15:22 messages
-rw-r--r-- 1 soup staff 1.1M May 26 14:11 messages.0
-rw-r--r-- 1 soup staff 1.4M May 27 15:13 proxy-vsockd.log
-rw-r--r-- 1 soup staff 42G May 27 15:13 vsudd.log
-rw-r--r-- 1 soup staff 0B May 11 13:39 wtmp

Works-MacBook-Pro-4:/ soup$ ls -lah ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
total 130324392
drwxr-xr-x 11 soup staff 374B May 27 15:13 .
drwxr-xr-x 18 soup staff 612B May 27 15:13 ..
-rw-r--r-- 1 soup staff 62G May 27 15:17 Docker.qcow2
-rw-r--r-- 1 soup staff 64K May 27 15:13 console-ring
-rw-r--r-- 1 soup staff 3B May 27 15:13 hypervisor.pid
-rw-r--r-- 1 soup staff 0B May 11 13:39 lock
drwxr-xr-x 10 soup staff 340B May 26 11:11 log
-rw-r--r-- 1 soup staff 17B May 27 15:13 mac.0
-rw-r--r-- 1 soup staff 36B May 11 13:39 nic1.uuid
-rw-r--r-- 1 soup staff 3B May 27 15:13 pid
lrwxr-xr-x 1 soup staff 12B May 27 15:13 tty -> /dev/ttys000

If you came here about your log files filling the hard disk, a problem specific to Docker for Mac beta 13, please refer to this thread instead:

1.11.1-beta13 here. Not seeing the log file problem, but still seeing the Docker.qcow2 problem.

The Docker.qcow2 file almost instantly grows to 60 GB when pulling a single image (with a fresh, new, completely blank system).

Ouch, I haven’t even created or deployed a container and this ran away with all of my free disk space within about a week. Steps to reproduce:

  1. Install Docker for Mac Beta Version 1.11.1-beta13 (build: 7975)
  2. Provide an administrative password to complete setup.
  3. Wonder why the fan is constantly running until the system runs out of space.

The major offenders appear to be the following files:

60G ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
70G ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/dmesg
31G ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/docker.log
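To find offenders like these yourself, a du over the data directory works. This is a minimal sketch assuming the same beta paths shown above; it uses du -sk with sort -n because the stock OS X sort has no -h flag:

# Sizes in kilobytes, largest last.
du -sk ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/* | sort -n
du -sk ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/* | sort -n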

I assumed the fan was related to corporate antivirus or my VMware virtual machines and didn't even think to check the Docker beta I'd just installed. The moment I ran out of disk space and VMware notified me that my virtual machines were stopping, I rushed to track down the culprit.

Hi @bbeaudoin,

Docker Mac Beta Version 1.11.1-beta13.1 (build: 8193) fixes the log size issue.

HTH,
Alexandre

I’m not terribly thrilled about the qcow2 file size either given I don’t have any running containers. Does this version fix that as well?

Mine is stable at 1.1GB (empty, no containers) and then gradually increases as I add containers.

Thank you. I've blown away the qcow2 file and restarted, and I've got a watch on the directory so I can see if it explodes again. Presently at 700M, much better than 60G.

Update: Holding steady at 1.1G (1154678784 bytes), just as expected :grinning:
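For reference, the "watch" is nothing fancy, just a shell loop polling the file size (OS X ships no watch command by default):

# Print the qcow2 size once a minute; Ctrl-C to stop.
while true; do
  ls -lh ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
  sleep 60
done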

Glad to see it’s not just me!

proxy-vsockd.log = 48.89 GB
Docker.qcow2 = 25.82 GB

This is without even having used Docker in a week, and after I had already deleted these files once before. What is going on?!


I was wondering if there were any updates on this. Right now my only fix for recovering the free space is to wipe the Docker.qcow2 and let the app re-create it.

The three questions I have are:

1. Is there a way to specify a size larger than 64G for the qcow2 file? That's too small for my use cases, and I usually run out of space after a few weeks.
2. My previous pattern for recovering free space was this script:

#!/usr/bin/env bash
# Stop and remove all containers.
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
# Remove untagged (dangling) images.
docker rmi $(docker images --filter dangling=true -q)
# Remove stopped containers with a non-zero exit status, along with their
# volumes; I skip Exit 0 as I have old scripts using data containers.
docker rm -v $(docker ps -a | grep "Exit [1-9]" | awk '{ print $1 }')

Is there a better way of doing this, or is deleting the image file and resetting the only valid solution at the moment?

3. Is there a way to SSH into the VM that's running the Linux kernel, or is that abstracted away?

Experiencing the same issue here on 1.11.2-beta15: no space left on device, even after wiping all of the images.

Diagnostic ID 6B759845-F512-4581-8B47-0D9A0C12F45C

Thanks. You saved me a ton of time here. I hope that Docker provides a more convenient way to grow the qcow2 image in the future.

This can happen just from pulling a lot of images and using a lot of containers. It seems the original qcow disk image was smaller; with the latest download I ended up with a 64G one. Others have posted parts of this in the thread, but here's an expansion procedure that Works For Me™. It uses no additional tools beyond the Docker beta for OS X download.

First, I’d clean up exited containers and images:
docker rm -v $(docker ps -a -q -f status=exited)
docker rmi $(docker images -f "dangling=true" -q)
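Dangling volumes can hold space inside the qcow2 as well. Assuming the docker volume subcommand in this client (present since Docker 1.9) behaves as it does on Linux, this should clear them:

# Remove volumes not referenced by any container.
docker volume rm $(docker volume ls -qf dangling=true)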

If that doesn't clean up enough, you might consider expanding the disk. This is potentially destructive, so make a copy of your qcow2 file first:

  1. Use qemu-img to see the current disk info, then add 10GB:
    export DOCKER_DISK=~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
    cp $DOCKER_DISK $DOCKER_DISK.backup
    /Applications/Docker.app/Contents/MacOS/qemu-img info ${DOCKER_DISK}
    /Applications/Docker.app/Contents/MacOS/qemu-img resize ${DOCKER_DISK} +10G

  2. Restart Docker from the panel, or use the "connect to the VM" step below to connect and reboot it.

  3. Connect to the VM:
    screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
    Press Enter if your screen is blank. Log in as root (no password). To detach later without killing the session, press Ctrl-A, then D.

  4. Make sure the new disk space is reflected:
    fdisk -l
    Verify that your main Linux partition is the second one; we're going to delete it and re-create it using the remainder of the disk.

  5. Now the scary part :wink: Remember that backup? Here we delete the partition and re-add it with a larger size. The data should remain intact.

fdisk /dev/vda
Key presses, in sequence:
p      (print the current partition table)
d      (delete a partition)
2      (the second partition, the main Linux one)
n      (create a new partition)
2      (partition number 2)
enter  (accept the default first sector)
enter  (accept the default last sector, the end of the expanded disk)
w      (write the changes and exit)
reboot

At this point you can reconnect to verify the space is there, and run docker images to ensure everything is still intact. If not, restore the backup.
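One caveat, and this part is my assumption since it depends on the Moby image in use: the fdisk dance above only grows the partition, not the filesystem inside it. If df -h inside the VM still reports the old size after the reboot, and the data partition is ext4 as in the layout below, an online resize should finish the job:

# Inside the VM (via the screen session above).
# /dev/vda2 is assumed to be the ext4 data partition; adjust if yours differs.
resize2fs /dev/vda2
df -h   # should now report the enlarged size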

@b2jrock could you explain what the “scary part” does in detail? I got stuck at the second “2” as there was no such option…

Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p): 2
Value out of range.
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p):

It could be that your partition layout is different from mine. The first ‘p’ should have printed your partition table. Mine looks like:

Device Boot Start End Sectors Size Id Type
/dev/vda1 2048 8196095 8194048 3.9G 82 Linux swap / Solaris
/dev/vda2 8196096 134217727 126021632 60.1G 83 Linux

So in my case I was deleting and re-adding the second (2) partition, but extending it to the end of the expanded disk.


This is becoming really, really annoying.

I think, at the very least, you should expose a config setting that lets one put the directory on another disk.

Even better would be if the big image files were actually hosted and visible on the Mac, so one could manually remove a file or two when the Docker machine doesn't start up again. Basically, mount the files into the Docker machine like a shared file system.

If that is technically not possible, then mount the directory where the big files live inside the Docker machine to a directory on the Mac host, so that if and when Docker starts up again, one can remove files that are no longer required without having to throw away the whole qcow2 file and start all the downloads over again.
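Until such a setting exists, a possible workaround (untested by me, so treat it as a sketch; "/Volumes/BigDisk" is just a placeholder for your external drive) is to quit Docker, move the driver directory to the bigger disk, and leave a symlink behind:

# Quit Docker for Mac first so nothing holds the qcow2 open.
osascript -e 'quit app "Docker"'
# Move the driver directory to the external volume (placeholder name).
mv ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux /Volumes/BigDisk/docker-driver
# Symlink it back so Docker finds everything in the expected place.
ln -s /Volumes/BigDisk/docker-driver ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux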


Is there a viable workaround or solution to this? I am running into the disk space limit as well, and it's capped at 18GB, which is pretty small considering some of the images can be rather large.

Version 1.12.1-rc1-beta23 (build: 11375)
2f0427ac7d4d47c705934ae141c3a248ed7fff40

El Capitan 10.11.5


Is there no way to use a specified (external) disk for the space-consuming files? This is INCREDIBLY frustrating on a laptop with limited space.
