It seems that the more tags/branches are pushed for a given repo, the more the build gets throttled.
This means that my repo, which normally builds in 15-26min for a given branch, suddenly takes hours to complete, and all my branches time out. If a build is being throttled, I think the timeout also needs to be extended by the throttle factor, so that throttled builds are still allowed to finish.
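As a rough illustration of what I'm suggesting (a sketch only; the variable names are mine, not actual Docker Hub settings):

BASE_TIMEOUT_MIN=120                             # the current 2-hour limit
THROTTLE_FACTOR=12                               # hypothetical slowdown applied to a throttled build
EFFECTIVE_TIMEOUT_MIN=$((BASE_TIMEOUT_MIN * THROTTLE_FACTOR))
echo "Timeout for this throttled build: ${EFFECTIVE_TIMEOUT_MIN} minutes"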
When I modify my Dockerfile, I test it out locally, and when it works, I push the master (/latest) branch to check it with the Docker auto-build. My build script also outputs the timing for each part of the build (a sketch of how that is done follows the listing). On a single branch push, the typical timing for my build looks like this (https://hub.docker.com/r/mstormo/centos_cuda/builds/b3gmdeobasm4rmfw4cadha7/):
Docker image parts build times:
00h01m57s : Basic packages
00h01m08s : Devtoolset
00h00m03s : Ccache
00h00m08s : Cmake
00h00m04s : Subversion
00h03m50s : Git
00h04m45s : Python 2.7
00h00m10s : OpenSSH Server
00h04m05s : Cuda 7.5
00h00m39s : Qt
===================================================
00h17m25s : Docker build process completed!
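For reference, here is a minimal sketch of how such per-step timings can be emitted from a build script. It is not my actual script, and the step commands are stand-ins, but it relies only on bash's built-in SECONDS counter:

#!/bin/bash
# Print the time spent since the previous report, formatted like the listing above.
last=0
report() {
    local elapsed=$((SECONDS - last))
    last=$SECONDS
    printf '%02dh%02dm%02ds : %s\n' \
        $((elapsed / 3600)) $(((elapsed % 3600) / 60)) $((elapsed % 60)) "$1"
}

sleep 2                                # stand-in for installing the basic packages
report "Basic packages"
sleep 1                                # stand-in for building Git from source
report "Git"
printf '===================================================\n'
printf '%02dh%02dm%02ds : Docker build process completed!\n' \
    $((SECONDS / 3600)) $(((SECONDS % 3600) / 60)) $((SECONDS % 60))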
However, when I finally pushed all my branches, just the Git step suddenly reported this:
00h47m53s : Git built and installed!
That’s a huge difference: from 3min 50s to 47min 53s, roughly a 12.5x slowdown on that step alone. With a build timeout of 2 hours, a 17min build slowed down by anything like that factor will obviously time out, so all the branches will fail to build.
Given that all the branches are queued up, shouldn’t the throttling be kept static, so build times are fairly predictable? I understand that there can at times be huge queues for the build machines, but it’s better to wait longer for a build machine with reasonable resources than to have every build fail in the end. Maybe we need an overview of where a build sits in the global build queue, along with an estimated time to the next slot (the previous build time for a repo/tag/branch can serve as an indicator)?
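Something along these lines is what I have in mind (all numbers and names invented for illustration):

BUILDS_AHEAD=42                 # my position in the global build queue
AVG_PREV_BUILD_MIN=20           # average previous build time of the queued repos
BUILD_MACHINES=8                # assumed number of concurrent build machines
ETA_MIN=$(( BUILDS_AHEAD * AVG_PREV_BUILD_MIN / BUILD_MACHINES ))
echo "Estimated wait for a build slot: ~${ETA_MIN} minutes"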