Experiencing same issue here on 1.11.2-beta15 - no space left on device, even after wiping all of the images.
Diagnostic ID 6B759845-F512-4581-8B47-0D9A0C12F45C
Thanks. You saved me a ton of time here. I hope that Docker provides a more convenient way to grow the qcow2 image in the future.
This can happen just from pulling a lot of images and running a lot of containers. It seems the original qcow2 disk image was smaller; with the latest download I ended up with a 64G one. Others have covered parts of this in the thread, but here's an expansion procedure that Works For Me™. It uses no additional tools beyond what ships with the Docker beta for OS X download.
First, I'd clean up exited containers and dangling images:
docker rm -v $(docker ps -a -q -f status=exited)
docker rmi $(docker images -f "dangling=true" -q)
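If you're on Docker 1.13 or later, a single command covers most of this cleanup; a minimal sketch, assuming your client and daemon are new enough to have it:

# remove stopped containers, dangling images, and unused networks in one go
docker system prune
# unused volumes can be pruned separately
docker volume prune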
If that doesn't clean up enough, you might consider expanding the disk. This is potentially destructive, so make a copy of your qcow2 file first:
Use qemu-img to see the current disk info, then add 10GB:
export DOCKER_DISK=~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
cp $DOCKER_DISK $DOCKER_DISK.backup
/Applications/Docker.app/Contents/MacOS/qemu-img info ${DOCKER_DISK}
/Applications/Docker.app/Contents/MacOS/qemu-img resize ${DOCKER_DISK} +10G
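For reference, the qemu-img info output looks roughly like this before the resize (the sizes here are only illustrative); after the resize, the virtual size line should show the larger value:

image: Docker.qcow2
file format: qcow2
virtual size: 64G (68719476736 bytes)
disk size: 18G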
Restart Docker from the menu, or use the screen command below to connect to the VM and reboot it.
Connect to the VM:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Press enter if your screen is blank. Log in as root (no password).
Make sure that the new disk space is reflected:
fdisk -l
Make sure that your main Linux partition is the second one. We're going to delete it and re-create it using the remainder of the disk.
Now the scary part. Remember that backup? Here we delete the partition and re-add it with a larger size. The data should remain intact as long as the new partition starts at the same sector as the old one.
fdisk /dev/vda
Key presses, in sequence:
p      (print the current partition table)
d      (delete a partition)
2      (partition number 2)
n      (create a new partition)
2      (partition number 2 again)
enter  (accept the default first sector)
enter  (accept the default last sector, the end of the expanded disk)
w      (write the table and exit)
reboot
At this point you can reconnect to verify the extra space is available, and run docker images to make sure everything is still there. If not, restore the backup.
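If fdisk shows the extra space after the reboot but df does not, the filesystem inside the partition may still need growing; a hedged sketch, assuming resize2fs is available in the VM and the data partition is ext4 on /dev/vda2:

# inside the VM: grow the filesystem to fill the enlarged partition
resize2fs /dev/vda2
# confirm the larger size is visible
df -h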
@b2jrock could you explain what the "scary part" does in detail? I got stuck at the second "2" as there was no such option…
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p): 2
Value out of range.
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p):
It could be that your partition layout is different from mine. The first "p" should have printed your partition table. Mine looks like:
Device Boot Start End Sectors Size Id Type
/dev/vda1 2048 8196095 8194048 3.9G 82 Linux swap / Solaris
/dev/vda2 8196096 134217727 126021632 60.1G 83 Linux
So in my case I was deleting and re-adding the second (2) partition, but extending it to the end of the expanded disk.
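The key invariant is that the re-created /dev/vda2 keeps the same Start sector; only the End moves out to the end of the enlarged disk. Purely for illustration, after a +10G resize my table would look roughly like this (your End/Sectors/Size values will differ):

Device Boot Start End Sectors Size Id Type
/dev/vda1 2048 8196095 8194048 3.9G 82 Linux swap / Solaris
/dev/vda2 8196096 155189247 146993152 70.1G 83 Linux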
This is becoming really really annoying.
I think at the very least you should expose a config setting where one can set the directory location to another disk.
And even better would be if the big image files were actually hosted and visible on the Mac, so one can manually remove a file or two when the Docker machine doesn't start up again. So basically mount the files into the Docker machine like a shared file system.
If that is technically not possible, then mount the directory where the big files live in the Docker machine to a directory on the Mac host, so that if and when Docker starts up again, one can go and remove files that are no longer required, without having to throw away the whole qcow2 file and start all downloads over again.
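One workaround I've seen suggested (untested here, and it may not survive app updates or the sandbox) is to move the whole data directory onto a bigger disk and symlink it back; the external volume name below is only an example:

# quit Docker for Mac first, then relocate the data directory
mv ~/Library/Containers/com.docker.docker/Data /Volumes/BigDisk/docker-data
# leave a symlink where the app expects to find its data
ln -s /Volumes/BigDisk/docker-data ~/Library/Containers/com.docker.docker/Data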
Is there a viable workaround or solution to this? I am running into the disk space limit as well, and it's capped at 18GB, which is pretty small considering some images can be rather large.
Version 1.12.1-rc1-beta23 (build: 11375)
2f0427ac7d4d47c705934ae141c3a248ed7fff40
El Capitan 10.11.5
Is there no way to use a specified (external) disk for the space-consuming files? This is INCREDIBLY frustrating on a laptop with limited space.
I'm experiencing the same problem and can't launch any docker images because of it. This is quite annoying; are there any plans to fix this?
For me, deleting the Docker.qcow2 file in ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux and restarting Docker solved the issue, but bear in mind that this will remove all your containers and images.
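For anyone who wants the exact steps, a minimal sketch (this destroys all containers and images; the quit/open commands assume the app is simply named Docker):

# quit Docker for Mac, delete the disk image, then start it again
osascript -e 'quit app "Docker"'
rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
open -a Docker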
Our entire team uses Docker; we use it to run integration tests on Jenkins and locally in our environments. We constantly run out of space because of this issue. I hope it will be resolved in the not-too-distant future. Right now we keep deleting the Docker.qcow2 file.
Have you been able to find anything about this? I really want to be able to move my docker images to an external disk as well. I could do this with docker-machine, but can't find anything about this for Docker for Mac.
Bump. This is still an active issue for me with the most updated Docker Beta version.
On the subject of running out of disk space, has anyone ever seen something like this?
We are using Docker 1.12.1 on our Jenkins slave vApps in our CI infrastructure.
We have a problem. Could you help us understand what is happening?
Thanks in advance.
Our main question is why docker-compose creates and keeps containers with such strange names, concatenating that set of IDs in front of the base name.
Problem Description:
Jenkins job
https://fem108-eiffel004.lmera.ericsson.se:8443/jenkins/job/eniq-export-transformation-provider_Verify_docker/
suffers from really frequent container name corruption (about 1 in every 4 runs).
docker ps shows weird containers (holding resources) like:
ff3acbffe419_ff3acbffe419_ff3acbffe419_ff3acbffe419_ff3acbffe419_docker_jboss_1
or
8b2b1935d74e_8b2b1935d74e_docker_jboss_1
and after removing those containers the jobs work fine.
Logs with the faulty containers are attached.
We made a preliminary inspection on the faulty vApp (Jenkins_Fem108_Docker_Slave_12):
Name                                                    Command                          State      Ports
8b2b1935d74e_8b2b1935d74e_8b2b1935d74e_docker_jboss_1   /bin/sh -c sleep 20 && cp …      Exit 128
docker_dps_integration_1                                /bin/sh -c /usr/sbin/xinet …     Up         5019/tcp
docker_jboss_1                                          /bin/sh -c cp /opt/ericsso …     Exit 128
docker_postgres_1                                       /docker-entrypoint.sh postgres   Up         5432/tcp
jboss shows up with 2 different names; one looks corrupted as in the description above, and in addition the Exit 128 state could also be investigated.
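Until the root cause is clear, a hedged cleanup sketch one could run on the slaves (the grep pattern is only an example; adjust it to your compose project names):

# remove compose containers whose names carry the duplicated-hash prefix,
# e.g. ff3acbffe419_ff3acbffe419_docker_jboss_1
docker ps -a --format '{{.Names}}' | grep -E '^([0-9a-f]{12}_)+docker_' | xargs -r docker rm -f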
I am seeing the same behavior
Same issue here. Has anyone found a solution for this? Or at least a way to change the location where the images are stored?
I occasionally see that when you re-build the image under the same name: the tag reference is lost and the container ends up labeled with the former hash ID, as hashID_directory_location_N, rather than, say, image_name_1.
That would be my guess, though it has nothing to do with the out-of-space topic.
Same issue, we need a solid solution
I too am experiencing this issue. Has anyone nailed down exactly what in ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
is growing?
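One way to find out is to attach to the VM over its serial tty (the same screen trick as in the resize instructions above) and see which directories actually hold the space; a sketch, assuming du and df are present in the VM's busybox:

# from the Mac, attach to the VM console (log in as root, no password)
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
# then, inside the VM:
df -h
du -sh /var/lib/docker/*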
Looks like this is still an issue, but on the 1.13.0-rc7 beta the qcow2 file caps at 60GB on my 1TB host. I was wondering if there's at least a setting one could configure to specify the maximum the file should grow to, given it doesn't seem to be growing indefinitely like most people reported on previous betas?
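For what it's worth, 1.13 also adds a way to see where the space goes from the Mac side, without touching the VM; a quick sketch:

# summary of space used by images, containers, and local volumes
docker system df
# per-image and per-container breakdown
docker system df -v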