The Docker for Mac beta uses all available disk space on the host until there is no physical disk space left.
Actual behavior
Docker builds fail with "no space left on device" when building an image that has a lot of Debian dependencies.
Information
There is ~190 GB of disk space left on this machine.
OS X: version 10.11.3 (build: 15D21)
Docker.app: version v1.11.0-beta7
Running diagnostic tests:
[OK] docker-cli
[OK] Moby booted
[OK] driver.amd64-linux
[OK] vmnetd
[OK] osxfs
[OK] db
[OK] slirp
[OK] menubar
[OK] environment
[OK] Docker
[OK] VT-x
Docker logs are being collected into /tmp/20160418-163340.tar.gz
Most specific failure is: No error was detected
Your unique id is: A1C0CC09-E182-46E9-9F34-24D665C6D017
Steps to reproduce the behavior
With the following images downloaded:
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
&lt;none&gt;        &lt;none&gt;   1f7e45b7579d   16 minutes ago   476.8 MB
proprietary   latest   ad4fbe550439   3 days ago       1.055 GB
&lt;none&gt;        &lt;none&gt;   5f060306fc46   3 days ago       125.1 MB
proprietary   latest   b3ddb0198452   3 days ago       1.083 GB
sputneek      latest   1b228fbe9e86   3 days ago       479.5 MB
mongo         latest   04f2f0daa7a5   13 days ago      309.8 MB
debian        8        47af6ca8a14a   13 days ago      125.1 MB
debian        jessie   47af6ca8a14a   13 days ago      125.1 MB
and 6 containers based on those images (which is subjectively not many at all),
build a new container with MeteorJS inside, based on debian:jessie.
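For concreteness, a hypothetical minimal Dockerfile for that step (the actual one was not posted; the package list and the Meteor install URL are my assumptions):
FROM debian:jessie
RUN apt-get update && apt-get install -y curl ca-certificates
RUN curl https://install.meteor.com/ | sh
built with:
$ docker build -t meteor-test .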
I concur that this situation is frustrating, and it seems that managing the volume associated with the VM/VBox is missing required features. Could it be as simple as using some sort of "parted"-type tool to increase the size of the volume?
I wonder if you're all really running out of disk space or if you're running out of inodes (see "Running out of inodes"). To check, run:
$ docker run --rm --privileged debian:jessie df -h
$ docker run --rm --privileged debian:jessie df -h -i
You can free up space by purging containers and unused images:
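For example, with the pre-1.13 CLI (docker system prune does not exist yet in 1.11):
$ docker rm $(docker ps -a -q -f status=exited)      # delete stopped containers
$ docker rmi $(docker images -q -f dangling=true)    # delete untagged (dangling) image layers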
There is still a VM Disk for docker-for-mac located in ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2.
Mine defaulted to 20 GB, anecdotally the same size as the disk created by a default docker-machine install.
I've expanded the image using qemu-img resize Docker.qcow2 +5G, but
I don't know how to inform the internal filesystem to consume the extra space.
I'd assume there is some Linux VM in Docker.qcow2, and I'd need to mount it using gparted and expand the internal FS.
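As a sketch of that second step, assuming the qcow2 holds an ext4 data partition and that it is partition 2 (the /dev/vda2 mount reported in a later comment suggests so), on a Linux box with the qemu tools installed:
$ sudo modprobe nbd max_part=8              # load the network block device driver
$ sudo qemu-nbd -c /dev/nbd0 Docker.qcow2   # attach the qcow2 image as /dev/nbd0
$ sudo parted /dev/nbd0 resizepart 2 100%   # grow partition 2 (assumed) to fill the disk
$ sudo e2fsck -f /dev/nbd0p2                # resize2fs requires a clean check first
$ sudo resize2fs /dev/nbd0p2                # grow the ext4 filesystem (assumed) to match
$ sudo qemu-nbd -d /dev/nbd0                # detach the image
This is speculative for Docker for Mac's disk layout; treat it as a sketch, not a supported procedure.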
I have a similar issue. It appears my "Docker.qcow2" file is maxed out at 60 GB. Because of this I cannot start my containers. They are all in place, but it does not allow me to start them because it says that I've no disk space left.
Looking on the box I see this:
docker:~# df -h
Filesystem                Size      Used Available Use% Mounted on
tmpfs                  1001.4M    145.4M    856.0M  15% /
tmpfs                   200.3M    172.0K    200.1M   0% /run
dev                      10.0M         0     10.0M   0% /dev
shm                    1001.4M         0   1001.4M   0% /dev/shm
cgroup_root              10.0M         0     10.0M   0% /sys/fs/cgroup
/dev/vda2                59.0G     22.5G     33.5G  40% /var
df: /Mac: Function not implemented
df: /var/log: Function not implemented
df: /Users: Function not implemented
df: /Volumes: Function not implemented
df: /tmp: Function not implemented
df: /private: Function not implemented
/dev/vda2                59.0G     22.5G     33.5G  40% /var/lib/docker/aufs
Is there a way to allocate more space to /var/lib/docker/aufs? If I could do that, I'd have enough space to run my images.
Note: once I have all my images again, usage goes to 100%. I'm in the process of downloading them again, and I expect it to stop at 60 GB again.
I'm seeing the exact same issue; the "Docker.qcow2" grew to over 75 GB in file size.
Removing the file and restarting the Docker beta results in a stable "Docker.qcow2" file of 1.15 GB.
In addition, the log files grew to over 45 GB, which was probably a direct result of running out of disk space.
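For anyone wanting to try the same reset, a sketch (quit Docker.app first; this discards every image and container in the VM):
$ rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
$ open -a Docker    # relaunch; a fresh, small Docker.qcow2 is recreated on boot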
I'm facing the same problem right now; my log files are about 60-68 GB, it's insane... Even with the Docker daemon stopped, the log files continue to grow. Any workarounds so far?
Experiencing the same issue on my end. Docker logs filled up my disk (~200 GB) with log data in ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log. I tried restarting my machine, but after a few hours my disk filled up again, even without executing any specific Docker commands.
As an extra data point, I did not start seeing this issue until I installed the most recent beta update on Thursday (5/26).
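A possible stopgap while waiting for a fix (my own workaround, not an official one): quit Docker, then truncate the runaway logs in place rather than deleting them:
$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log
$ for f in docker.log vsudd.log dmesg; do : > "$f"; done    # reset each file to zero bytes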
Just had the same thing happen to me: 306.3 GB of logs in the com.docker.docker/Data/com.docker.driver.amd64-linux directory.
ls -lah
total 373206984
drwxr-xr-x 13 patrick staff 442B 27 May 16:43 .
drwxr-xr-x 21 patrick staff 714B 26 May 18:12 ..
-rw-r--r--@ 1 patrick staff 8.0K 27 May 17:19 .DS_Store
-rw-r--r-- 1 patrick staff 178G 27 May 16:37 Docker.qcow2
-rw-r--r-- 1 patrick staff 64K 27 May 07:47 console-ring
-rw-r--r-- 1 patrick staff 5B 26 May 18:12 hypervisor.pid
-rw-r--r-- 1 patrick staff 0B 14 Apr 15:28 lock
drwxr-xr-x 14 patrick staff 476B 27 May 17:19 log
-rw-r--r-- 1 patrick staff 17B 26 May 18:12 mac.0
-rw-r--r-- 1 patrick staff 36B 14 Apr 15:28 nic1.uuid
-rw-r--r-- 1 patrick staff 5B 26 May 18:12 pid
lrwxr-xr-x 1 patrick staff 12B 26 May 18:12 tty -> /dev/ttys000
-rw-r--r-- 1 patrick staff 1.1K 6 May 14:29 xhyve.args
The log directory specifically:
logs $ ls -lah
total 225043048
drwxr-xr-x 14 patrick staff 476B 27 May 17:19 .
drwxr-xr-x 13 patrick staff 442B 27 May 16:43 ..
-rw-r--r--@ 1 patrick staff 6.0K 27 May 17:19 .DS_Store
-rw-r--r-- 1 patrick staff 0B 14 Apr 15:29 9pudc.log
-rw-r--r-- 1 patrick staff 55M 27 May 16:17 acpid.log
-rw-r--r-- 1 patrick staff 7.0M 27 May 16:17 diagnostics-server.log
-rw-r--r-- 1 patrick staff 9.1G 27 May 16:20 dmesg
-rw------- 1 patrick staff 75G 27 May 16:36 docker.log
-rw-r--r-- 1 patrick staff 188K 27 May 16:38 messages
-rw-r--r-- 1 patrick staff 13M 27 May 16:36 messages.0
-rw-r--r-- 1 patrick staff 1.4M 27 May 16:36 proxy-vsockd.log
-rw------- 1 patrick staff 0B 14 Apr 15:29 transfused.log
-rw-r--r-- 1 patrick staff 24G 27 May 16:36 vsudd.log
-rw-r--r-- 1 patrick staff 0B 14 Apr 15:29 wtmp
Running Docker Version 1.11.1-beta13 (build: 7975) 16dbe555c7dd4304521b21e8286d83fe4864c15c
Docker.qcow2 was growing and growing and growing (about 100 MB every 10 seconds). I stopped all Docker containers and it was still growing. Eventually I had only 6 MB of space left on my Mac. I deleted the qcow2 file and rebooted to reclaim the 85 GB. Seems OK at the moment.
Client:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Wed Apr 27 00:34:20 2016
OS/Arch: darwin/amd64
Server:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 8b63c77
Built: Mon May 23 20:50:37 2016
OS/Arch: linux/amd64
Yeah, this is killing my hard disk space: the log files are 50 GB+ and the qcow2 file is about 90 GB, even though all my images combined add up to ~1.5 GB. I installed this a couple of weeks ago, and it has only become a problem today.