Failure pushing multi-arch manifest to calico/node repo

I’m trying to fix a problem that started about 3-4 days ago where pushing manifests fails (by manifests I mean the multi-architecture images where the correct image is pulled depending on the architecture; I’m not sure if “manifest” is the right name).
We use the tool https://github.com/estesp/manifest-tool to do this, and the errors that come out are:
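For context, the push is roughly the following shape (the platform list here is illustrative, not the exact one from our Makefile; the template/target tags match the image named in the errors below):

```shell
# Push a multi-arch manifest list. manifest-tool substitutes ARCH in the
# template for each platform and stitches the per-arch images together
# under the target tag. Platform list is an example, not our real one.
manifest-tool push from-args \
  --platforms linux/amd64,linux/arm64 \
  --template calico/node:master-ARCH \
  --target calico/node:master
```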

ERRO[0059] Error trying v2 registry: http: unexpected EOF reading trailer 

FATA[0059] Inspect of image "calico/node:master-amd64" failed with error: http: unexpected EOF reading trailer 

See: https://semaphoreci.com/calico/node/branches/master/builds/1072 (expand Job #1, then expand the 3rd box, the make cd command)

I have tried a couple of things to fix or diagnose the problem and am out of ideas:

  • I’ve tried deleting a bunch of old and unused dev images, thinking maybe we were hitting some kind of tag/image limit.
  • I’ve tried deleting several un-needed ‘manifest’ images, thinking maybe they were special and there was a different limit for them.
  • I’ve used the same code to push to a different repo tmjd/node and was successfully able to push there. (I thought maybe we’d broken something in our usage but that looks fine.)

My current effort is to use curl against the v2 registry API directly, to see if I can reproduce the same error and perhaps get more information about it.
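A sketch of what I mean, reproducing the failing inspect by hand (this assumes the standard Docker Hub token flow; the repo and tag are taken from the failing job):

```shell
# Repo/tag from the failing inspect below.
REPO=calico/node
TAG=master-amd64

# 1. Get an anonymous pull token from Docker Hub's auth service.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${REPO}:pull" \
  | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')

# 2. Fetch the manifest. -v shows the headers/trailers, which is where
#    the "unexpected EOF reading trailer" suggests the response dies.
curl -v -H "Authorization: Bearer ${TOKEN}" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://registry-1.docker.io/v2/${REPO}/manifests/${TAG}"
```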

Any help or ideas are greatly appreciated. Thank you

Here is an additional error I can get when using manifest-tool that I believe is relevant:

$ manifest-tool --debug inspect calico/node:release-v3.12-amd64
DEBU[0000] authConfig for docker.io:
DEBU[0000] endpoints: [{false https://registry-1.docker.io v2 true true 0xc420001380}]
DEBU[0000] Trying to fetch image manifest of calico/node repository from https://registry-1.docker.io v2
ERRO[0026] Error trying v2 registry: http: unexpected EOF reading trailer
FATA[0026] http: unexpected EOF reading trailer

Also, I’ve noticed when browsing the images on the Docker Hub site that the load time is sometimes very slow, which makes me think something like too many images is causing the Docker API, or something internal, to time out.