Consistently out of disk space in docker beta

It could be that your partition layout is different from mine. The first ‘p’ command should have printed your partition table. Mine looks like:

Device     Boot    Start        End    Sectors  Size  Id  Type
/dev/vda1            2048    8196095   8194048  3.9G  82  Linux swap / Solaris
/dev/vda2         8196096  134217727 126021632 60.1G  83  Linux

So in my case I was deleting and re-adding the second (2) partition, but extending it to the end of the expanded disk.
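
For reference, here is a minimal sketch of that fdisk session, assuming the VM disk is /dev/vda, the data partition is /dev/vda2 with an ext4 filesystem, and the recreated partition starts at the same sector as before (all assumptions; adapt them to your own table):

fdisk /dev/vda
# p        print the table and note the Start sector of /dev/vda2
# d, 2     delete partition 2 (only the table entry; the data is untouched)
# n, p, 2  recreate it at the same start sector, default end = end of disk
# w        write the new table and exit
resize2fs /dev/vda2    # grow the filesystem to fill the enlarged partition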


This is becoming really really annoying.

I think that, at the very least, you should expose a config setting that lets one point the storage directory at another disk.

Even better would be if the big image files were actually hosted and visible on the Mac, so one could manually remove a file or two when the Docker machine doesn’t start up again. So basically mount the files into the Docker machine like a shared file system.

If that is technically not possible, then mount the directory holding the big files in the Docker machine to a directory on the Mac host, so that if and when Docker starts up again, one can remove files that are no longer required without having to throw away the whole qcow2 file and start all downloads over again.


Is there a viable workaround or solution to this? I am running into the disk space limit as well, and it’s capped at 18GB, which is pretty small considering some images can be rather large.

Version 1.12.1-rc1-beta23 (build: 11375)
2f0427ac7d4d47c705934ae141c3a248ed7fff40

El Capitan 10.11.5


Is there no way to use a specified (external) disk for the space-consuming files? This is INCREDIBLY frustrating on a laptop with limited space.


I’m experiencing the same problem and can’t launch any Docker images because of it. This is quite annoying; are there any plans to fix this?


For me, deleting the file Docker.qcow2 in ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux and restarting Docker solved the issue, but bear in mind that this will remove all your containers and images.
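
A minimal sketch of that reset, assuming Docker for Mac has been quit first and the file is at the default location:

rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
# relaunch Docker for Mac; it recreates an empty Docker.qcow2,
# so all images must be pulled or rebuilt afterwards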

Our entire team uses Docker, and we use it to run integration tests on Jenkins and locally in our environments. We constantly run out of space because of this issue. I hope it will be resolved in the not-too-distant future; right now we keep deleting the Docker.qcow2 file.


Have you been able to find anything about this? I really want to be able to move my Docker images to an external disk as well. I could do this with docker-machine, but I can’t find anything about it for Docker for Mac.


Bump. This is still an active issue for me on the latest Docker Beta version.


On the subject of running out of disk space, has anyone seen something like this?

We are using Docker 1.12.1 on our Jenkins slave vApps in our CI infrastructure.
We have a problem; could you help us understand what is happening?
Thanks in advance.

Our main question is why docker-compose creates and keeps containers with such strange names, prepending sets of IDs to the base container name.
Problem Description:

The Jenkins job
https://fem108-eiffel004.lmera.ericsson.se:8443/jenkins/job/eniq-export-transformation-provider_Verify_docker/
suffers from really frequent container name corruption (about 1 in every 4 runs).

docker ps shows weird containers (holding resources) like:
ff3acbffe419_ff3acbffe419_ff3acbffe419_ff3acbffe419_ff3acbffe419_docker_jboss_1
or
8b2b1935d74e_8b2b1935d74e_docker_jboss_1

and after removing those containers, the jobs work fine.

Logs with faulty container are attached.

We made a preliminary inspection on the faulty vApp (Jenkins_Fem108_Docker_Slave_12):

[root@atvts3354 docker]# docker-compose ps
Name                                                    Command                         State     Ports
---------------------------------------------------------------------------------------------------------
8b2b1935d74e_8b2b1935d74e_8b2b1935d74e_docker_jboss_1   /bin/sh -c sleep 20 && cp …     Exit 128
docker_dps_integration_1                                /bin/sh -c /usr/sbin/xinet …    Up        5019/tcp
docker_jboss_1                                          /bin/sh -c cp /opt/ericsso …    Exit 128
docker_postgres_1                                       /docker-entrypoint.sh postgres  Up        5432/tcp

jboss has two different containers; one name looks corrupted as in the description above. In addition, the Exit 128 state could also be investigated.
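
In the meantime, here is a hedged cleanup sketch for the corrupted names; the grep pattern (one or more 12-character hex IDs, each followed by an underscore, prepended to the compose name) is an assumption based on the examples above:

docker ps -a --format '{{.Names}}' \
  | grep -E '^([0-9a-f]{12}_)+docker_' \
  | xargs -r docker rm -f    # -r (GNU xargs) skips the call when nothing matches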

I am seeing the same behavior.


Same issue here. Has anyone found a solution for this, or at least a way to change the location where the images are stored?

I occasionally see that when you rebuild the image under the same name: the tag reference is lost, and the container ends up labeled with the former hash ID prepended to the directory location and index (hashID_directory_location_N) rather than, say, image_name_1.

That is just my guess, though, and it has nothing to do with the out-of-space topic.

Same issue here; we need a solid solution.

I too am experiencing this issue. Has anyone nailed down exactly what in ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 is growing?
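
A couple of hedged ways to watch what is actually growing (qemu-img is not bundled with Docker for Mac; installing it, e.g. via Homebrew, is an assumption):

# on-disk size vs. the virtual size of the qcow2 file
qemu-img info ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2

# what Docker itself is holding inside the VM
docker images
docker ps -a --size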

Looks like this is still an issue, but on the 1.13.0-rc7 beta the qcow2 file caps at 60GB on my 1TB host. I was wondering if there’s at least a setting one could configure to specify the maximum size the file should grow to, given that it no longer seems to grow indefinitely the way most people reported on previous betas?

@robertoandrade, I’ve just installed 1.13 and have noticed that the procedure to grow the qcow file as specified further up in the thread no longer works.

I’ve started a build in my environment and am hoping that the qcow will continue to grow.

For my use case I need the qcow to be at least 200GB.

This issue reared its head again for me in the last few days. I’m on the beta channel and am switching to stable to see if that solves the issue.

For growing the qcow volume’s total space, see https://github.com/docker/for-mac/issues/371#issuecomment-262826610 (the qcow location is now configurable in the Preferences of the app).
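
For reference, a hedged sketch of the kind of offline grow the linked comment describes; qemu-img must be installed separately, Docker must be stopped first, and the +140G increment is purely illustrative:

qemu-img resize ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 +140G
# then grow the partition and filesystem inside the VM,
# e.g. with the fdisk/resize2fs steps from earlier in the thread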

For shrinking the qcow file’s on-disk size, see https://github.com/docker/for-mac/issues/371#issuecomment-265525709, which is in stable as of 1.13. Compaction is performed on shutdown, and TRIM runs every 15 minutes (unless you execute it manually, in which case it is done immediately).
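
If you want to trigger TRIM by hand, one commonly cited (and unofficial) trick is to run fstrim inside the VM from a privileged container; the justincormack/nsenter1 image and the /var mount point are assumptions about the Moby VM layout, not a documented API:

docker run --rm --privileged --pid=host justincormack/nsenter1 /sbin/fstrim /var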


This has been addressed for me. I do have some images that are very large, so I run these cleanup commands once in a while and find them to work fairly well. I haven’t seen any out-of-space issues in a long time.

# stop and remove every container
for item in $(docker ps -aq); do
  docker stop "$item"
  docker rm "$item"
done

# remove dangling (untagged) images
for item in $(docker images --filter dangling=true -q); do
  docker rmi "$item"
done
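
On 1.13 and later, the built-in prune commands cover most of this (a rough, not exact, equivalent: prune does not stop running containers first):

docker container prune -f    # remove all stopped containers
docker image prune -f        # remove dangling images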

I’m not sure if this helps, but I figured I’d share.