Consistently out of disk space in Docker beta

Expected behavior

The Docker for Mac beta should be able to keep using disk space on the host until there is genuinely no physical disk space left.

Actual behavior

Docker builds fail with "no space left on device" when building an image that installs a lot of Debian dependencies.

Information

There is ~190 GB of disk space left on this machine.

OS X: version 10.11.3 (build: 15D21)
Docker.app: version v1.11.0-beta7
Running diagnostic tests:
[OK] docker-cli
[OK] Moby booted
[OK] driver.amd64-linux
[OK] vmnetd
[OK] osxfs
[OK] db
[OK] slirp
[OK] menubar
[OK] environment
[OK] Docker
[OK] VT-x
Docker logs are being collected into /tmp/20160418-163340.tar.gz
Most specific failure is: No error was detected
Your unique id is: A1C0CC09-E182-46E9-9F34-24D665C6D017

  • OS X 10.11.3

Steps to reproduce the behavior

With the following images downloaded:
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
                       1f7e45b7579d   16 minutes ago   476.8 MB
proprietary   latest   ad4fbe550439   3 days ago       1.055 GB
                       5f060306fc46   3 days ago       125.1 MB
proprietary   latest   b3ddb0198452   3 days ago       1.083 GB
sputneek      latest   1b228fbe9e86   3 days ago       479.5 MB
mongo         latest   04f2f0daa7a5   13 days ago      309.8 MB
debian        8        47af6ca8a14a   13 days ago      125.1 MB
debian        jessie   47af6ca8a14a   13 days ago      125.1 MB

and six containers based on those images (which, subjectively, is not many at all).

Build a new image with Meteor.js inside, based on debian:jessie.


Yeah, this is killing me too :disappointed:

I note a bunch of people in this thread have the same problem: "Where does Docker keep images/containers so I can better track my disk usage"

I concur that this situation is frustrating, and it seems that managing the volume associated with the VM/VirtualBox is missing required features. Could it be as simple as using some sort of 'parted'-type tool to increase the size of the volume?

I wonder if you're all really running out of disk space or if you're running out of inodes. See Running out of inodes. To check, run:
docker run --rm --privileged debian:jessie df -h
and
docker run --rm --privileged debian:jessie df -h -i

You can free up space by purging containers and unused images:

docker ps -q -a -f status=exited | xargs -n 100 docker rm -v
docker images -q --filter "dangling=true" | xargs -n 100 docker rmi

And this frees up inodes, though it can take several minutes to run:

docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes

There is still a VM disk for Docker for Mac located at ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2.

Mine defaulted to 20 GB, anecdotally the same size as the disk copied from the docker-machine default instance.

I've expanded the image using qemu-img resize Docker.qcow2 +5g, but I don't know how to inform the internal filesystem to consume the extra space.
I'd assume there is some Linux VM filesystem inside Docker.qcow2, and I'd need to mount it using GParted and expand the internal FS.

Unfortunately you can't currently reclaim the space like that:

$ df -h
Filesystem      Size   Used  Avail Capacity  iused ifree %iused  Mounted on
/dev/disk1     112Gi  111Gi  330Mi   100% 29237368 84358  100%   /
...

Then:

$ docker rm $(docker ps -aq)
...
$ docker rmi $(docker images -q)
...

And still:

$ df -h
Filesystem      Size   Used  Avail Capacity  iused ifree %iused  Mounted on
/dev/disk1     112Gi  111Gi  329Mi   100% 29237470 84256  100%   /
...

And the problem is the sparse file:

$ ls -lh ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
-rw-r--r--  1 amouat  staff   9.3G 25 Apr 15:15 /Users/amouat/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2

The only fix I know of is to delete the file and start again, which obviously wipes out all your images and containers :frowning:
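
For anyone resorting to that, the reset is roughly the following (a sketch based on the posts in this thread, not an official procedure; quit Docker.app first, and note it wipes all images, containers and volumes):

# With Docker for Mac fully quit, remove the sparse VM disk.
rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
# Relaunch Docker.app: it creates a fresh, small Docker.qcow2, and everything
# that lived in the old one has to be pulled or rebuilt.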


My problem too; deleting the "Docker.qcow2" is the only way to get my builds running again. Then I wait until the 20 GB limit is hit again…

I have a similar issue. It appears my "Docker.qcow2" file is maxed out at 60 GB. Because of this I cannot start my containers. They are all in place, but it won't let me start them because it says there is no disk space left.

Looking on the box I see this:

docker:~# df -h
Filesystem                Size      Used Available Use% Mounted on
tmpfs                  1001.4M    145.4M    856.0M  15% /
tmpfs                   200.3M    172.0K    200.1M   0% /run
dev                      10.0M         0     10.0M   0% /dev
shm                    1001.4M         0   1001.4M   0% /dev/shm
cgroup_root              10.0M         0     10.0M   0% /sys/fs/cgroup
/dev/vda2                59.0G     22.5G     33.5G  40% /var
df: /Mac: Function not implemented
df: /var/log: Function not implemented
df: /Users: Function not implemented
df: /Volumes: Function not implemented
df: /tmp: Function not implemented
df: /private: Function not implemented
/dev/vda2                59.0G     22.5G     33.5G  40% /var/lib/docker/aufs

Is there a way to allocate more space to /var/lib/docker/aufs? If I could do that, I'd have enough space to run my images.

Note: once I have all my images again, usage goes to 100%. I'm in the process of downloading them again, and I expect it to stop at 60 GB again.


I had 20 GB initially (when I converted from Boot2Docker + VirtualBox).
If I recreate the image, it seems to have a max size of 64 GB.

Did some more digging…

I installed qemu

brew install qemu

I can use qemu's tools to manipulate the image…

$ qemu-img info Docker.qcow2
image: Docker.qcow2
file format: qcow2
virtual size: 64G (68719476736 bytes)
disk size: 1.1G
cluster_size: 65536
Format specific information:
  compat: 1.1
  lazy refcounts: true
  refcount bits: 16
  corrupt: false

Add 5 more GB:

 qemu-img resize Docker.qcow2 +5g

See the virtual size increase from 64 GB -> 69 GB:

qemu-img info Docker.qcow2      
image: Docker.qcow2
file format: qcow2
virtual size: 69G (74088185856 bytes)
disk size: 1.1G
cluster_size: 65536
Format specific information:
  compat: 1.1
  lazy refcounts: true
  refcount bits: 16
  corrupt: false

Download the GParted live ISO from http://gparted.org/download.php

Run QEMU with the disk image + GParted ISO and resize the FS… (you should see 5 GB of unallocated space):

qemu-system-x86_64 -drive file=Docker.qcow2  -m 512 -cdrom ~/Downloads/gparted-live-0.25.0-3-i686.iso -boot d -device usb-mouse -usb
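
Putting those steps together, the whole flow looks roughly like this (a sketch under the assumptions in this post: qemu installed via Homebrew, the GParted 0.25.0-3 live ISO in ~/Downloads, and Docker.app fully quit before touching the image):

# Work on the Docker for Mac VM disk while Docker.app is not running.
cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux

# Grow the virtual disk by 5 GB; the qcow2 file itself stays sparse on the host.
qemu-img resize Docker.qcow2 +5g

# Boot the GParted live ISO against the image, grow the partition backing /var
# (which holds /var/lib/docker) into the new unallocated space, shut the VM down,
# then restart Docker.app.
qemu-system-x86_64 -drive file=Docker.qcow2 -m 512 \
  -cdrom ~/Downloads/gparted-live-0.25.0-3-i686.iso \
  -boot d -device usb-mouse -usb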

I'm seeing the exact same issue: the "Docker.qcow2" grew to over 75 GB in file size.
Removing the file and restarting the Docker beta results in a stable "Docker.qcow2" file of 1.15 GB.

In addition the log files grew to over 45GB, which was probably a direct result of running out of disk space.

I have not been running any containers, yet my log/dmesg file was 250+ GB. I'll keep an eye on how much space it is taking up.

All I have done with Docker since installing the beta was pull down an image, tag it, and push it back up.

 $ ls -lh
total 116983344
-rw-r--r--  1 sechitwood  GROUPON\Domain Users   2.0G May 26 15:26 acpid.log
-rw-r--r--  1 sechitwood  GROUPON\Domain Users    24K May 26 15:26 dmesg
-rw-------  1 sechitwood  GROUPON\Domain Users    52G May 26 15:26 docker.log
-rw-r--r--  1 sechitwood  GROUPON\Domain Users   113K May 26 15:30 messages
-rw-r--r--  1 sechitwood  GROUPON\Domain Users    13M May 26 11:47 messages.0
-rw-r--r--  1 sechitwood  GROUPON\Domain Users    46M May 26 15:26 proxy-vsockd.log
-rw-r--r--  1 sechitwood  GROUPON\Domain Users   1.5G May 26 15:26 vsudd.log
-rw-r--r--  1 sechitwood  GROUPON\Domain Users     0B May 24 10:15 wtmp

I'm facing the same problem right now; my log files are around 60-68 GB, it's insane… Even with the Docker daemon shut down, the log files continue to grow. Any workarounds so far?
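
One stop-gap that may buy some time (my assumption, not something confirmed in this thread): with Docker.app fully quit, truncate the runaway logs under the driver directory so the host disk is freed immediately, for example:

# Quit Docker for Mac first; the path is the one reported throughout this thread.
cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log
# Empty the largest offenders without deleting the files themselves.
: > docker.log
: > vsudd.log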

Same issue here.

Highlights:

-rw-r--r--   1 Qcho  staff    19G May 26 23:39 Docker.qcow2
-rw-------   1 Qcho  staff   1.4G May 26 23:35 docker.log
-rw-r--r--   1 Qcho  staff    20G May 26 23:39 vsudd.log

Full directory:

$ ll *
-rw-r--r--  1 Qcho  staff    19G May 26 23:39 Docker.qcow2
-rw-r--r--  1 Qcho  staff    64K May 26 14:50 console-ring
-rw-r--r--  1 Qcho  staff     5B May 25 17:20 hypervisor.pid
-rw-r--r--  1 Qcho  staff     0B Apr 22 10:16 lock
-rw-r--r--  1 Qcho  staff    17B May 25 17:20 mac.0
-rw-r--r--  1 Qcho  staff    36B Apr 22 10:16 nic1.uuid
-rw-r--r--  1 Qcho  staff    36B Apr 22 10:16 nic2.uuid
-rw-r--r--  1 Qcho  staff     5B May 25 17:20 pid
lrwxr-xr-x  1 Qcho  staff    12B May 25 17:20 tty -> /dev/ttys001
-rw-r--r--  1 Qcho  staff   1.1K May  3 23:09 xhyve.args

log:
total 45137816
drwxr-xr-x  10 Qcho  staff   340B May 26 23:34 .
drwxr-xr-x  13 Qcho  staff   442B May 25 17:20 ..
-rw-r--r--   1 Qcho  staff   1.3M May 26 23:35 acpid.log
-rw-r--r--   1 Qcho  staff   159M May 26 23:35 dmesg
-rw-------   1 Qcho  staff   1.4G May 26 23:35 docker.log
-rw-r--r--   1 Qcho  staff    57K May 27 00:00 messages
-rw-r--r--   1 Qcho  staff    17M May 26 23:35 messages.0
-rw-r--r--   1 Qcho  staff    23K May 26 23:35 proxy-vsockd.log
-rw-r--r--   1 Qcho  staff    20G May 26 23:39 vsudd.log
-rw-r--r--   1 Qcho  staff     0B Apr 22 10:16 wtmp

Experiencing the same issue on my end. Docker logs filled up my disk (~200 GB) with log data in ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log. I tried restarting my machine, but after a few hours my disk filled up again, even without running any specific Docker commands.

As an extra data point, I did not start seeing this issue until I installed the most recent beta update on Thursday (5/26).

Thanks!

Just had the same thing happen for me. 306.3GB of logs in the com.docker.docker/Data/com.docker.driver.amd64-linux directory.

ls -lah
total 373206984
drwxr-xr-x  13 patrick  staff   442B 27 May 16:43 .
drwxr-xr-x  21 patrick  staff   714B 26 May 18:12 ..
-rw-r--r--@  1 patrick  staff   8.0K 27 May 17:19 .DS_Store
-rw-r--r--   1 patrick  staff   178G 27 May 16:37 Docker.qcow2
-rw-r--r--   1 patrick  staff    64K 27 May 07:47 console-ring
-rw-r--r--   1 patrick  staff     5B 26 May 18:12 hypervisor.pid
-rw-r--r--   1 patrick  staff     0B 14 Apr 15:28 lock
drwxr-xr-x  14 patrick  staff   476B 27 May 17:19 log
-rw-r--r--   1 patrick  staff    17B 26 May 18:12 mac.0
-rw-r--r--   1 patrick  staff    36B 14 Apr 15:28 nic1.uuid
-rw-r--r--   1 patrick  staff     5B 26 May 18:12 pid
lrwxr-xr-x   1 patrick  staff    12B 26 May 18:12 tty -> /dev/ttys000
-rw-r--r--   1 patrick  staff   1.1K  6 May 14:29 xhyve.args

The log directory specifically:

logs $ ls -lah
total 225043048
drwxr-xr-x  14 patrick  staff   476B 27 May 17:19 .
drwxr-xr-x  13 patrick  staff   442B 27 May 16:43 ..
-rw-r--r--@  1 patrick  staff   6.0K 27 May 17:19 .DS_Store
-rw-r--r--   1 patrick  staff     0B 14 Apr 15:29 9pudc.log
-rw-r--r--   1 patrick  staff    55M 27 May 16:17 acpid.log
-rw-r--r--   1 patrick  staff   7.0M 27 May 16:17 diagnostics-server.log
-rw-r--r--   1 patrick  staff   9.1G 27 May 16:20 dmesg
-rw-------   1 patrick  staff    75G 27 May 16:36 docker.log
-rw-r--r--   1 patrick  staff   188K 27 May 16:38 messages
-rw-r--r--   1 patrick  staff    13M 27 May 16:36 messages.0
-rw-r--r--   1 patrick  staff   1.4M 27 May 16:36 proxy-vsockd.log
-rw-------   1 patrick  staff     0B 14 Apr 15:29 transfused.log
-rw-r--r--   1 patrick  staff    24G 27 May 16:36 vsudd.log
-rw-r--r--   1 patrick  staff     0B 14 Apr 15:29 wtmp

Running Docker
Version 1.11.1-beta13 (build: 7975) 16dbe555c7dd4304521b21e8286d83fe4864c15c

MacBook Pro (Retina, 13-inch, Mid 2014)
16 GB 1600 MHz DDR3

Similar to the above poster, it only seemed to happen after installing the latest update, but I'm not certain of that.

Same problem for me.
docker.log grew to 70 GB.

Version 1.11.1-beta13 (build: 7975)
16dbe555c7dd4304521b21e8286d83fe4864c15c

MacBook Pro (Retina, 15-inch, Mid 2014)
Version 10.11.5 (15F34)

Same problem for me too.

Docker.qcow2 was growing and growing and growing (about 100 MB every 10 seconds). I stopped all Docker containers and it was still growing. Eventually I only had 6 MB of space left on my Mac. I deleted the qcow2 file and rebooted to reclaim the 85 GB. Things seem OK at the moment.

Client:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Wed Apr 27 00:34:20 2016
OS/Arch: darwin/amd64

Server:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 8b63c77
Built: Mon May 23 20:50:37 2016
OS/Arch: linux/amd64

Experiencing the same issue on my Mac.

The log folder is up to 25 GB and increasing rapidly. These log files grow unconditionally and will eat all the disk space.

Has anyone got a solution?

Yeah, this is killing my hard disk space: the log files are 50 GB+ and the qcow2 file is about 90 GB, even though all my images combined add up to ~1.5 GB. I installed this a couple of weeks ago, and it has only become a problem today.

I used the Docker beta for a couple of weeks, but the log filling only started with the latest beta.

log/vsudd.log

2016/05/20 20:45:30 54 Done. read: 381 written: 141
2016/05/20 20:45:30 55 Accepted connection on fd 15 from 00000002.00010009
2016/05/20 20:45:30 55 Done. read: 380 written: 1501
2016/05/20 20:45:30 56 Accepted connection on fd 14 from 00000002.00010009
2016/05/20 20:45:30 56 Done. read: 253 written: 4661
2016/05/20 20:45:30 57 Accepted connection on fd 15 from 00000002.00010009
2016/05/20 20:45:30 57 Done. read: 192 written: 211
2016/05/20 20:45:31 58 Accepted connection on fd 14 from 00000002.00010009

I noticed this appears to be related to network changes (switching proxies and/or bad connectivity).