Docker Hub Automated Build failed with no error

Hello,

I’m stuck with automated Docker Hub builds. I made some big changes and improvements in my Git repository. At first I wasn’t surprised that the automatic builds failed, given the size of the changes. But now that the changes are done, the build finishes correctly (the log ends with ‘Build finished’), yet the state says ‘Failed’. All jobs completed and I can’t find any error message.
I have no idea how to find the reason for this state. Does anyone have a hint for me?
If you are interested in what’s in the repository, it’s publicly available: https://github.com/Hackebein/docker-ts3server

My Automated Build Settings:

{
  "autobuild": true,
  "build_context": "/",
  "buildsource": "/api/build/v1/source/b0a71d8c-3b4e-417e-8a0c-abbbfa42e4c3/",
  "dockerfile": "Dockerfile",
  "nocache": true,
  "resource_uri": "/api/build/v1/setting/b2b7596e-d66a-4807-8f18-c92620aa5350/",
  "source_name": "master",
  "source_type": "Branch",
  "state": "Failed",
  "tag": "{sourceref}",
  "uuid": "b2b7596e-d66a-4807-8f18-c92620aa5350"
}

Here is an excerpt from the build result (sorry, I can’t upload the full JSON log because I’m limited to 4 MB):

{
  "action": "Build in 'master' (96823937)",
  "build_code": "bvrwkfpyjlzx3v4bt5deabe",
  "build_context": "/",
  "build_logs": "Building in Docker Cloud's infrastructure...\nCloning into '.'...\n\nWarning: Permanently added the RSA host key for IP address '140.82.113.3' to the list of known hosts.\r\n\nReset branch 'master'\n\nYour branch is up-to-date with 'origin/master'.\n\n [...] \n\nBuild finished\n",
  "build_tag": "master",
  "can_be_canceled": false,
  "can_be_retried": true,
  "commit": "968239373c865c83d2ca53394c2498152a3c7751",
  "created": "Mon, 8 Jul 2019 23:32:20 +0000",
  "dockerfile": "FROM scratch\n",
  "end_date": "Tue, 9 Jul 2019 03:40:41 +0000",
  "ip": "89.0.47.38",
  "is_user_action": true,
  "location": "unknown",
  "method": "POST",
  "object": "/api/build/v1/setting/b2b7596e-d66a-4807-8f18-c92620aa5350/",
  "path": "/api/build/v1/setting/b2b7596e-d66a-4807-8f18-c92620aa5350/build/",
  "resource_uri": "/api/audit/v1/action/5ada79f1-471a-4133-b80b-212ce240c585/",
  "source_repo": "Hackebein/docker-ts3server",
  "start_date": "Tue, 9 Jul 2019 01:26:47 +0000",
  "state": "Failed",
  "user": "hackebein",
  "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36",
  "uuid": "5ada79f1-471a-4133-b80b-212ce240c585"
}

As I found out, the problem is caused by container-diff. I currently have no idea why this affects the build result after the push script has already finished successfully.

Stay tuned if you are interested in the result of how to use container-diff in your advanced build scripts, or send me a hint if you have already done it. The rough shape of what I’m doing is sketched below.
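For reference: Docker Hub runs any executable placed in a hooks/ directory next to the Dockerfile as part of an automated build (hooks/post_push runs after the image has been pushed) and exposes variables such as IMAGE_NAME and DOCKER_REPO to it. The following is only a simplified sketch of such a hook; comparing against the previously published :latest tag is just an example, not exactly what my repository does:

#!/bin/bash
# hooks/post_push -- simplified sketch, not the exact script from my repository.
# Docker Hub sets IMAGE_NAME (the image that was just built) and DOCKER_REPO
# for custom build hooks.
set -e

# Fetch a static container-diff binary; the build environment doesn't ship it.
curl -fsSLo /usr/local/bin/container-diff \
  https://storage.googleapis.com/container-diff/latest/container-diff-linux-amd64
chmod +x /usr/local/bin/container-diff

# Compare the freshly built image against the previously published one.
container-diff diff \
  "daemon://${IMAGE_NAME}" \
  "remote://${DOCKER_REPO}:latest" \
  --type=file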

Result: It’s a limitation of the AWS build server, which has 60 GB of storage. container-diff creates a cache. After disabling the cache ("-n"), the ‘Failed’ status without an error message doesn’t come back. I haven’t done more extensive tests … but I think you simply get no error message when the storage is full. You only see the problem in the result status, without any message.
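In other words, the only change needed was to pass the no-cache flag so container-diff doesn’t write its image cache to the build host’s disk. With the sketch from above, the call becomes:

# Same comparison, but container-diff is told not to keep a local image cache,
# so it can't fill up the 60 GB disk of the build host.
container-diff diff \
  "daemon://${IMAGE_NAME}" \
  "remote://${DOCKER_REPO}:latest" \
  --type=file -n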