Consistently out of disk space in docker beta

Mine is stable at 1.1GB (empty, no containers) and then gradually increases as I add containers.

Thank you, I’ve blown away the qcow2 archive and restarted, got a watch on the directory so I can see if this explodes again. Presently at 700M, much better than 60G.
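For anyone who wants a similar watch on the image, a minimal size-polling loop looks like the sketch below. The path is the default Docker for Mac location, and the temp-file fallback is only there so the sketch runs on machines without Docker installed; adjust both to taste.

```shell
# Poll the size of Docker.qcow2 (default Docker for Mac path; adjust if
# yours differs). The mktemp fallback is a stand-in so this runs anywhere.
QCOW=~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
[ -f "$QCOW" ] || QCOW=$(mktemp)
for i in 1 2 3; do        # use `while true` for an indefinite watch
  du -k "$QCOW"           # size in KB; compare successive readings
  sleep 1
done
```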

Update: Holding steady at 1.1G (1154678784 bytes) just as expected :grinning:

Glad to see it’s not just me!

proxy-vsockd.log = 48.89 GB
Docker.qcow2 = 25.82 GB

This is without even using Docker in a week after I had deleted these files once before. What is going on?!

I was wondering if there were any updates on this. Right now my only fix to recover the free space is to wipe the Docker.qcow2 and let the app re-create it.

Three questions I have:

1. Is there a way to specify a size for the qcow file larger than 64G? That’s too small for my use cases, and I usually run out of space after a few weeks.
2. My previous pattern for recovering free space was this script:

#!/usr/bin/env bash
# stop and remove all containers
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
# remove untagged (dangling) images
docker rmi $(docker images --filter dangling=true -q)
# remove stopped + exited containers and their volumes; I skip "Exit 0"
# as I have old scripts using data containers ([1-9] matches any
# non-zero exit status, since a bracket expression is a character class,
# not a numeric range)
docker rm -v $(docker ps -a | grep "Exit [1-9]" | awk '{ print $1 }')
Is there a better way of doing this, or is deleting the image file and resetting the only valid solution at the moment?

3. Is there a way to SSH into the VM that’s running the Linux kernel, or is that abstracted away?
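As a side note on the cleanup script in question 2: the grep/awk pipeline can be sanity-checked against canned `docker ps -a`-style output (the IDs and names below are made up). One caveat: a bracket expression such as `[1-255]` is a character class rather than a numeric range, so `[1-9]` is used here to match any non-zero exit status.

```shell
# Canned lines in the shape of old `docker ps -a` output (fabricated).
sample='abc123def456  busybox  "sh"  2 days ago  Exit 1    test_1
111222333444  busybox  "sh"  2 days ago  Exit 0    data_1
555666777888  busybox  "sh"  2 days ago  Exit 137  test_2'
# Keep non-zero exits only, then print the container ID column.
echo "$sample" | grep "Exit [1-9]" | awk '{ print $1 }'
# prints:
# abc123def456
# 555666777888
```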

Experiencing same issue here on 1.11.2-beta15 - no space left on device, even after wiping all of the images.

Diagnostic ID 6B759845-F512-4581-8B47-0D9A0C12F45C

Thanks. You saved me a ton of time here. I hope that Docker provides a more convenient way to grow the qcow2 image in the future.

This can happen just from pulling a lot of images and using a lot of containers. It seems the original qcow disk image was smaller; with the latest download I ended up with 64G. Others have covered parts of this in the thread, but here’s an expansion procedure that Works For Me ™. It uses no additional tools outside of the Docker beta for OS X download.

First, I’d clean up exited containers and images:
docker rm -v $(docker ps -a -q -f status=exited)
docker rmi $(docker images -f "dangling=true" -q)

If that doesn’t clean up enough, you might consider expanding the disk. This is potentially destructive, so make a copy of your qcow2 file first:

  1. Use qemu-img to see the current disk info, then add 10GB:
    export DOCKER_DISK=~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
    cp $DOCKER_DISK $DOCKER_DISK.backup
    /Applications/Docker.app/Contents/MacOS/qemu-img info ${DOCKER_DISK}
    /Applications/Docker.app/Contents/MacOS/qemu-img resize ${DOCKER_DISK} +10G

  2. Restart Docker from the menu bar, or use the “connect to the VM” step below to connect and reboot it.

  3. Connect to the VM:
    screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
    Press enter if your screen is blank. Log in as root (no password).

  4. Make sure that the new disk space is reflected:
    fdisk -l
    Verify that your main Linux partition is the second one; we’re going to delete it and re-create it using the remainder of the disk.

  5. Now the scary part :wink: Remember that backup? Here we are deleting the partition and re-adding it with a larger size. The data should remain intact.

fdisk /dev/vda
Key presses, in sequence:
    p      (print the current partition table)
    d      (delete a partition)
    2      (partition number 2)
    n      (create a new partition)
    2      (partition number 2)
    enter  (accept the default first sector)
    enter  (accept the default last sector, i.e. the end of the expanded disk)
    w      (write the table and exit)
reboot

At this point you can reconnect to verify the space is there, and do docker images to ensure everything is still there. If not, restore the backup.
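One way to sanity-check the result: +10G is 20971520 additional 512-byte sectors, so the partition’s end sector should move up by that amount. A small arithmetic sketch, using a 64G layout’s old end sector (134217727) purely as an illustration:

```shell
OLD_END=134217727                                 # illustrative old end sector of /dev/vda2
ADDED_SECTORS=$((10 * 1024 * 1024 * 1024 / 512))  # 10G expressed in 512-byte sectors
NEW_END=$((OLD_END + ADDED_SECTORS))
echo "expect /dev/vda2 to end at sector ${NEW_END}"
# prints: expect /dev/vda2 to end at sector 155189247
```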

@b2jrock could you explain what the “scary part” does in detail? I got stuck at the second “2” as there was no such option…

Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p): 2
Value out of range.
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p):

It could be that your partition layout is different from mine. The first ‘p’ should have printed your partition table. Mine looks like:

Device Boot Start End Sectors Size Id Type
/dev/vda1 2048 8196095 8194048 3.9G 82 Linux swap / Solaris
/dev/vda2 8196096 134217727 126021632 60.1G 83 Linux

So in my case I was deleting and re-adding the second (2) partition, but extending it to the end of the expanded disk.
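As a quick sanity check on that table, the Size column is just the sector count times 512 bytes:

```shell
SECTORS=126021632                    # /dev/vda2 sector count from the table above
BYTES=$((SECTORS * 512))
GIB=$((BYTES / 1024 / 1024 / 1024))  # truncate to whole GiB
echo "/dev/vda2 is about ${GIB}G"
# prints: /dev/vda2 is about 60G
```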


This is becoming really really annoying.

I think at the very least you should expose a config setting where one can set the directory location to another disk.

And even better would be if the big image files were actually hosted and visible on the Mac, so one can manually remove a file or two when the Docker machine doesn’t start up again. So basically, mount the files into the Docker machine like a shared file system.

If that is technically not possible, then mount the directory containing the big files in the Docker machine to a directory on the Mac host, so if and when Docker starts up again, one can go and remove files that are no longer required, without having to throw away the whole qcow2 file and start all downloads over again.


Is there a viable workaround or solution to this? I am running into the disk space limit as well, and it’s capped at 18GB, which is pretty small considering some images can be rather large.

Version 1.12.1-rc1-beta23 (build: 11375)
2f0427ac7d4d47c705934ae141c3a248ed7fff40

El Capitan 10.11.5


Is there no way to use a specified (external) disk for the space-consuming files? This is INCREDIBLY frustrating on a laptop with limited space.


I’m experiencing the same problem and can’t launch any Docker images because of it. This is quite annoying; are there any plans to fix this?


For me, deleting the file Docker.qcow2 in ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux and restarting Docker solved the issue, but bear in mind that this will remove all your containers and images.

Our entire team uses Docker; we use it to run integration tests on Jenkins and locally in our environments. We constantly run out of space because of this issue :frowning: I hope it will be resolved in the not-too-distant future. Right now we keep deleting the Docker.qcow2 file.


Have you been able to find anything about this? I really want to be able to move my docker images to an external disk as well. I could do this with docker-machine, but can’t find anything about this on Docker for Mac.


Bump. This is still an active issue for me on the most recent Docker beta version.


On the subject of running out of disk space, has anyone seen something like this?

We are using Docker 1.12.1 on our Jenkins slave vApps in our CI infrastructure.
We have a problem; could you help us work out what’s happening?
Thanks in advance.

Our main question is why docker-compose creates and keeps containers with such strange names, prepending sets of IDs to the base name.
Problem Description:

Jenkins job
https://fem108-eiffel004.lmera.ericsson.se:8443/jenkins/job/eniq-export-transformation-provider_Verify_docker/
suffers from really frequent container-name corruption (about 1 in every 4 runs).

docker ps shows weird containers (holding resources) like:
ff3acbffe419_ff3acbffe419_ff3acbffe419_ff3acbffe419_ff3acbffe419_docker_jboss_1
or
8b2b1935d74e_8b2b1935d74e_docker_jboss_1

and after removing those containers, the jobs work fine.

Logs with faulty container are attached.

Made a preliminary inspection on the faulty vApp (Jenkins_Fem108_Docker_Slave_12):

[root@atvts3354 docker]# docker-compose ps
Name Command State Ports

8b2b1935d74e_8b2b1935d74e_8b2b1935d74e_docker_jboss_1 /bin/sh -c sleep 20 && cp … Exit 128
docker_dps_integration_1 /bin/sh -c /usr/sbin/xinet … Up 5019/tcp
docker_jboss_1 /bin/sh -c cp /opt/ericsso … Exit 128
docker_postgres_1 /docker-entrypoint.sh postgres Up 5432/tcp

jboss has two different “images”; one name looks corrupted, as in the description. In addition, the Exit 128 state could also be investigated.
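A mangled name like those is one or more 12-hex-digit container IDs glued in front of the real Compose name, which makes it easy to pick out with a regex. A sketch on canned names (in real use you would feed `docker ps -a --format '{{.Names}}'` into the grep, and pipe the matches to `docker rm -f`):

```shell
# Canned container names, including the corrupted ones quoted above.
names='ff3acbffe419_ff3acbffe419_docker_jboss_1
docker_jboss_1
docker_postgres_1
8b2b1935d74e_8b2b1935d74e_8b2b1935d74e_docker_jboss_1'
# Match names with at least one 12-hex-digit ID prefix before "docker_".
echo "$names" | grep -E '^([0-9a-f]{12}_)+docker_'
# prints the two corrupted names only
```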

I am seeing the same behavior


Same issue here. Has anyone found a solution for this, or at least a way to change the location where the images are stored?