Docker Community Forums

Share and learn in the Docker community.

Log files grow without limit and will eat all the disk space


(Mgtsai) #1

Expected behavior

Actual behavior

Log files under ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/ grow without limit, e.g. dmesg and docker.log (several GB), and vsudd.log (nearly 1 GB)

Information

Docker for Mac Beta

Steps to reproduce the behavior

After running for several days or weeks, the log files grow as large as you can imagine


(Rwpcm) #2

Same thing for me: in the ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log directory, files have bloated (in particular docker.log, to 60 GB).
It happened in the space of about a week, with low Docker usage during that period.
Running on a MacBook Air with OS X El Capitan 10.11.4


(Gustavostor) #3

Same thing here. Apparently it happened in just one day. I checked my disk space yesterday and it was fine; a few hours later it had ~30 GB less. After digging around, I found out docker.log was eating 32 GB.


(Stephenhsu) #4

Yeah, I’m experiencing the same issue.

My free disk space keeps shrinking rapidly, and it took me a long time to find the root cause.

Is it safe to just remove the log folder and create an empty one?
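For anyone else hunting the root cause: a quick way to spot the culprit is to rank the contents of the driver directory by size. This is a minimal sketch using the Beta path mentioned in this thread; adjust the path if your layout differs.

```shell
#!/bin/sh
# Rank the biggest entries under a directory to find what is eating the disk.
largest() {
  # $1: directory to inspect; $2: how many entries to show (default 10)
  du -sh "$1"/* 2>/dev/null | sort -rh | head -n "${2:-10}"
}

# The Docker for Mac Beta location reported in this thread:
DATA_DIR="$HOME/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux"
if [ -d "$DATA_DIR" ]; then
  largest "$DATA_DIR"
  largest "$DATA_DIR/log"
fi
```

Running this every few hours makes it easy to see whether docker.log, dmesg, or Docker.qcow2 is the one growing.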


(Mikesir87) #5

I’m also seeing this. I stopped Docker, removed docker.log and dmesg (the two largest), and restarted. Everything came back up as expected.


(Alexandre) #6

EDIT: removed the redirection to another ticket. I mistakenly picked a ticket that I thought was related, but it wasn’t.


(Alexandre) #7

This issue is new to beta 13, as far as I know.

At about 3 PM my time, I stopped Docker, deleted Docker.qcow2, and removed the log folder. They were about 95 GB and 65 GB, respectively.

Now at 5PM (two hours later), my hard drive is again filled.

$ pwd
/Users/myuser/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux
$ du -sh *
60G Docker.qcow2
64K console-ring
0B lock
95G log
4.0K mac.0

Note: I have no idea why my Docker.qcow2 is so big, since I deleted it just two hours ago and now have only one image stored and no containers running:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest 13e1761bf172 2 weeks ago 4.797 MB

Diagnose data:
pinata diagnose -u
OS X: version 10.11.4 (build: 15E65)
Docker.app: version v1.11.1-beta13
Running diagnostic tests:
[OK] Moby booted
[OK] driver.amd64-linux
[OK] vmnetd
[OK] osxfs
[OK] db
[OK] slirp
[OK] menubar
[OK] environment
[OK] Docker
[OK] VT-x


(Mike Splain) #8

Yeah, I’m having the same issue… it filled up about 150 GB on my Mac…


(Dataich) #9

Same issue for me…



(Imwithsam) #10

Similar issue here on my MacBook Pro running OS X 10.11.4, after upgrading to Docker for Mac 1.11.1-beta13. It ate up all 500 GB of my free space. I ended up deleting ~/Library/Containers/com.docker.* to resuscitate my computer.


(Dave Henderson) #11

Yep - I’m noticing the same behaviour as this.

I deleted all the logs yesterday, and today (after running a couple of containers and pulling one image) my dmesg is 6.0 GB. It’s pretty long:

$ wc -l dmesg
97687872 dmesg

I usually run with a pretty full disk, so I would have noticed this before. I think this is something new in 1.11.1-beta13.

FWIW, my Docker.qcow2 is 25 GB today, whereas it was 20 GB yesterday. That’s not too alarming, since the virtual size of the disk is 60 GB, so it seems reasonable that it will grow.

From the contents of dmesg, nothing jumps out at me except that the VM appears to be rebooting a lot. A little odd: the uptime command reports the VM has been up for ~10 hours, whereas dmesg was last touched 5 hours ago (with a +3s timestamp). So something fishy is going on there…


(Karthik Gaekwad) #12

Got the same issue as well, and ran out of space today.

197G /Users/karthik/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log
257G /Users/karthik/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux

Removing dmesg/docker.log and Docker.qcow2 to get my machine back.


(Sean Chitwood) #13

So, I’m seeing the same issue. I didn’t notice it before now, but it may have been present in previous betas.

Between 11 AM and 1 PM all I did was ‘docker pull java’; even if it logged the entire image, I wouldn’t think it would take 100 GB.

168G /Users/sechitwood/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log
total 353213640
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 7.9G May 27 10:29 acpid.log
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 97K May 27 10:29 dmesg
-rw------- 1 sechitwood GROUPON\Domain Users 158G May 27 11:06 docker.log
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 2.1K May 27 11:06 messages
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 229K May 27 10:18 messages.0
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 91M May 27 10:18 proxy-vsockd.log
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 2.9G May 27 10:19 vsudd.log
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 0B May 24 10:15 wtmp
total 591815496
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 32G May 27 12:53 acpid.log
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 387K May 27 12:53 dmesg
-rw------- 1 sechitwood GROUPON\Domain Users 248G May 27 13:25 docker.log
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 5.4K May 27 13:15 messages
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 229K May 27 10:18 messages.0
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 91M May 27 10:18 proxy-vsockd.log
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 2.9G May 27 10:19 vsudd.log
-rw-r--r-- 1 sechitwood GROUPON\Domain Users 0B May 24 10:15 wtmp


(David Sheets) #14

Hi everybody,

You are experiencing an unfortunate combination of two minor issues that, together, cause an exponential (!) increase in log size, slow and CPU-intensive start-ups, and general sadness. We are working on solving this issue ASAP and will have more to share with you soon. I’m sorry this experience is painful; thanks for bearing with us as we learn from our mistakes.

Thanks, again, for your help. Your understanding regarding this issue is really amazing and unexpected. :slight_smile:

Watch this space for updates.

Thanks,

David for the Docker for Mac team

Edit: This issue should now be fixed with Beta 13.1 hotfix. See post below and Beta 13.1 changelog window during update for more details.


(Dickson) #15

Thanks for the update. What’s the recommended interim solution? Is it safe to remove the log file?


(David Sheets) #16

Hi Dickson,

It is safe to remove the log files when Docker.app isn’t running. It is probably fine to remove them while it is running as well, but we haven’t tested that scenario.

Thanks,

David
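(Editor’s note: the interim cleanup described in this post can be sketched as a small script. The path is the Beta default mentioned throughout this thread; quit Docker.app first, and adjust LOG_DIR if your layout differs. Truncating rather than deleting is an extra precaution, not something Docker has documented.)

```shell
#!/bin/sh
# Truncate the runaway Docker for Mac Beta logs in place.
truncate_logs() {
  # $1: the log directory to clean
  for f in "$1"/docker.log "$1"/dmesg "$1"/vsudd.log "$1"/acpid.log; do
    # Truncate rather than delete, in case anything still holds the file open.
    [ -f "$f" ] && : > "$f"
  done
  return 0
}

LOG_DIR="${LOG_DIR:-$HOME/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log}"
if [ -d "$LOG_DIR" ]; then
  truncate_logs "$LOG_DIR"
fi
```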


(Twhid) #17

Is it safe to remove Docker.qcow2? It’s 99 GB on my machine.


(David Sheets) #18

Docker.qcow2 is a sparse block device image that contains the contents of the copy-on-write file system (AUFS over ext4 right now) which contains your images, containers, and volume driver volumes. It is safe to delete if you have no irreplaceable data stored in images, containers, and volume driver volumes.

If you have data you would like to retain, I recommend running containers which export that data over a -v bind mount to OS X before deleting Docker.qcow2.
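(Editor’s note: the backup step suggested above might look like the sketch below, assuming a named volume. It prints the docker invocation rather than running it, so you can review the command first; “mydata” and the alpine image are placeholders to substitute with your own.)

```shell
#!/bin/sh
# Print a docker command that would archive a named volume's contents
# to a host directory over a bind mount, before deleting Docker.qcow2.
backup_cmd() {
  vol="$1"; dest="$2"
  printf 'docker run --rm -v %s:/data -v %s:/backup alpine tar czf /backup/%s.tgz -C /data .\n' \
    "$vol" "$dest" "$vol"
}

backup_cmd mydata "$HOME/backup"
```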


(John Santiago Jr.) #19

Since the update I’ve noticed the same issue. What is the recommended solution?


(Sean Kane) #20

I just uninstalled the beta and instantly freed up 103 GB on my MacBook Pro’s SSD. That’s crazy huge, since I have only actively used the beta with maybe 10 images and 5 or so containers.

At some point today it looked like it might have been updating the qcow file or something in the background (I could not determine the cause at the time) while I was not using Docker, as my disk space was being devoured quickly. I could watch it counting down to zero.

Things are stable now, after the uninstall. I’d like to get it back on, but I’ll need to hold off for a bit, as I can’t sacrifice that much disk space to this at the moment.