Disk space not cleaned up in Windows

Background: We deployed some Spring Boot applications using Docker EE on Windows Server 2016. Whenever we do a version upgrade of an application, the disk space is not reclaimed.

I used the image prune command and also removed exited containers.
I removed dangling images as well with this command:

docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
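Besides removing dangling images, the newer prune subcommands cover stopped containers and all unused images in one pass. A sketch (the -f flag skips the confirmation prompt; the guard makes this a no-op on machines without a reachable Docker daemon):

```shell
# Prune stopped containers first, then every image not used by a container.
# Guarded so the script is harmless where no Docker daemon is reachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker container prune -f    # remove all stopped containers
  docker image prune -a -f     # remove all unused (not just dangling) images
fi
finished=yes                   # marker: the sequence ran to completion
```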

Expected behavior

When the Docker containers and images are removed, the disk space should be freed as well.

Actual behavior

The disk space is not freed up, and I have to manually run this command (which mirrors an empty C:\PURGE directory over the target, deleting everything in C:\ProgramData\Docker):

robocopy C:\PURGE C:\ProgramData\Docker /PURGE
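The /PURGE trick works because robocopy deletes every entry in the destination that has no counterpart in the (empty) source. The same semantics can be sketched portably on throwaway temp directories, never against real Docker data:

```shell
# Emulate "robocopy <empty-src> <dest> /PURGE": delete every entry in dest
# that does not exist in src. With an empty src, this empties dest.
src=$(mktemp -d)                      # empty "C:\PURGE" stand-in
dest=$(mktemp -d)                     # stand-in for the Docker data folder
mkdir -p "$dest/layer-aaaa" && touch "$dest/layer-aaaa/blob"
touch "$dest/stray-file"
for entry in "$dest"/* "$dest"/.[!.]*; do
  [ -e "$entry" ] || continue         # skip unexpanded glob patterns
  name=$(basename "$entry")
  [ -e "$src/$name" ] || rm -rf "$entry"
done
```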


Host distribution and version: Windows Server 2016 LTS
Docker version: Docker version 17.06.2-ee-10, build 66261a0

Steps to reproduce the behavior

  1. Pull images
  2. Start containers
  3. Stop containers
  4. Delete images using docker rmi <image id>
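Scripted, the repro steps above look roughly like this (hello-world is just a stand-in image name; the guard makes it a no-op where no Docker daemon is reachable):

```shell
# 1. pull, 2. start, 3. stop, 4. remove container and image
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker pull hello-world
  cid=$(docker run -d hello-world)
  docker stop "$cid"
  docker rm "$cid"
  docker rmi hello-world   # afterwards, check usage under the Docker data root
fi
repro_done=yes
```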



I can confirm this on Windows Server 2016 (1607) LTS, fully updated:

Client:
 Version: 17.06.2-ee-10
 API version: 1.30
 Go version: go1.8.7
 Git commit: 66261a0
 Built: Fri Apr 27 00:42:30 2018
 OS/Arch: windows/amd64

Server:
 Version: 17.06.2-ee-10
 API version: 1.30 (minimum version 1.24)
 Go version: go1.8.7
 Git commit: 66261a0
 Built: Fri Apr 27 00:54:58 2018
 OS/Arch: windows/amd64
 Experimental: false

I deleted all containers and ran
docker image prune -a

against a 90 GB windowsfilter folder, and it recovered only 16 GB, leaving behind a large number of what I can only assume are orphaned image layers.
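One way to quantify the leak is to compare what Docker itself accounts for against the on-disk size of the windowsfilter folder; docker system df prints per-type usage and reclaimable space (guarded no-op without a daemon):

```shell
# Docker's own view of image/container/volume disk usage.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker system df    # compare the reported totals against the folder size
fi
df_checked=yes
```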

When I look for images (docker image ls -a) or containers (docker container ls -a) no records are returned. See screenshot.

I see the read-only attribute and of course can find a way to purge these, but want to make sure I understand whether or not I’m doing something wrong in my procedures.

My main concern is that development workstations and CI/CD servers will quickly get storage overruns through image build iterations.

Try using docker-ci-zap: https://github.com/jhowardmsft/docker-ci-zap
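For reference, the invocation as I understand it from the docker-ci-zap README looks like the following. It destroys the entire Docker data folder, so stop the service first, and double-check the flag name against the current README before running (a sketch, not verified here):

```shell
# Hard-reset the Docker data folder on Windows with docker-ci-zap.
# Only runs where the tool is actually installed.
if command -v docker-ci-zap.exe >/dev/null 2>&1; then
  net stop docker                                    # stop the Docker service
  docker-ci-zap.exe -folder "C:\ProgramData\Docker"  # wipes ALL Docker state
  net start docker
fi
zap_sketch=yes
```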

Thanks for that; it’ll get me through for now. It seems like a heavy-handed workaround, though.