Docker hub autobuilds started to fail a few hours ago (copy from private repos within the same organization)


Everything has been working fine for several months until a few hours ago. Here are the logs:

2022-09-26T16:41:41Z #6 FROM
2022-09-26T16:41:41Z #6 resolve 0.0s done
2022-09-26T16:41:41Z #6 ERROR: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
2022-09-26T16:41:41Z #7 [internal] load build context
2022-09-26T16:41:41Z #7 transferring context: 482.35kB 0.0s done
2022-09-26T16:41:41Z #7 CANCELED
2022-09-26T16:41:41Z ------
2022-09-26T16:41:41Z > FROM
2022-09-26T16:41:41Z ------
2022-09-26T16:41:41Z ERROR: failed to solve: failed to load cache key: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
2022-09-26T16:41:41Z Build failed using Buildkit (1)

The image referenced in the FROM line is a private repo within the same organization. I wonder why authorization fails all of a sudden, considering that it worked for quite some time until now (as you can see in the screenshot).

Do I need to do any extra settings to be able to copy from private repos within the same organization?

Here are logs from when authentication worked:

#6 [stage-0 1/33] FROM
#6 DONE 0.0s
#7 [auth] private/app-common:pull token for
#7 DONE 0.0s
#8 [internal] load build context
#8 transferring context: 1.23MB 0.1s done
#8 DONE 0.1s
#9 resolve
#9 resolve 0.4s done
#9 DONE 0.4s

More info: failed builds have the following in their logs:

WARNING: Support for the legacy ~/.dockercfg configuration file and file-format has been removed and the configuration file will be ignored

I have rebuilt one of my automated builds to see if it works. It worked. I also checked that message about dockercfg, because I remembered seeing a similar message before, but in my case it was “will be removed”, not “has been removed”.

When I first recognized something was wrong with the virtual machine versions, I reported it on GitHub

It looks like docker in your VM has changed somehow, but the configuration file is still there.

Please report it as a bug in the hub-feedback repository.

I tried to search for similar issues, but before you open the issue, do the same; maybe you will find something that I could not. There was an issue a long time ago, in 2016,

but the message “has been removed” is suspicious.

I also checked the status page of the Docker services, and I don’t see any known issue there

I am subscribed to that page to get notification when something happens, but I didn’t get anything recently.

Thank you for your reply! I managed to get the autobuilds working again by explicitly creating an access token and doing a docker login with that token inside hooks/pre_build. The private Docker image I’m pulling from, hosted in the same organization, is found now and the pull error is gone.

I think there was an update to Docker Hub today, because the build logs look different now than they did 12+ hours ago; e.g., each line is now timestamped.

Here is my solution:

  1. Create an access token:
    Go to Docker Hub (Account Settings → Security) and create a new access token. Put the access token in a file, e.g. hooks/token.
  2. Make a pre_build hook where you authenticate first:
    Here are the contents of my hooks/pre_build file:

cat hooks/token | docker login --username <username> --password-stdin

where you need to replace <username> with the username of the account that owns the token.

Also make sure the pre_build file is executable: run chmod +x hooks/pre_build
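Putting it together, the whole hook is only a few lines. A rough sketch of my hooks/pre_build (the username below is a placeholder; substitute the account that owns the token):

```shell
#!/bin/bash
# hooks/pre_build -- Docker Hub runs this hook before the autobuild starts.
set -e  # abort the build early if the login fails

# Log in with the access token stored in hooks/token.
# Replace "myusername" with the Docker Hub account that owns the token.
cat hooks/token | docker login --username myusername --password-stdin
```

With the login done in pre_build, the FROM line pointing at the private repo in the same organization resolves during the build.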


@remusmp - We too are facing the same issue. It would be really appreciated if you could provide the docker login syntax for the pre_build hook.


This started to happen on our builds as well. We have multiple repositories within the same organization, and we use FROM private_build as build001.

It fails with the same error as the original author of this post mentioned.

Hello. I have edited my first reply and marked it as a solution to this ticket. Please let me know if it helps.

Thank you for the info… I was just working this out and saw your update…


This is the same thing that we started experiencing recently.

Thank you for sharing. I made one change: instead of storing the token in your code, you can create a Build Environment Variable and put it there. I used TOKEN. Then modify the command like this:

echo $TOKEN | docker login --username <username> --password-stdin

It worked for me.
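For completeness, the full hook with the environment variable would look something like this (TOKEN is the Build Environment Variable I created in the Docker Hub build settings; the username is a placeholder):

```shell
#!/bin/bash
# hooks/pre_build -- reads the access token from the TOKEN build
# environment variable instead of a file committed to the repo.
set -e  # abort the build if login fails

# Replace "myusername" with the Docker Hub account that owns the token.
echo "$TOKEN" | docker login --username myusername --password-stdin
```

This keeps the token out of the git history entirely.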


Thanks! I think the build environment variable approach you’re suggesting is actually better than storing the token in a file.

Yes, I’ve recently been seeing the same issues. Thanks for your solution. A bit of clarification… where is the hooks/pre_build file? Is it in the GitHub repo that you link to the Docker Hub image?

I got it working… many thanks for the help!!! It would have been nice for Docker Hub to mention this ahead of time.

The hooks/pre_build file goes into the git repo. I would also recommend storing the token in a build environment variable rather than in a file in the git repo. Unfortunately I cannot edit my answer above.

We’ve changed our setup to build images from GitHub using GitHub Actions and local GitHub runners. That way, you can use GitHub secrets to hold the personal access token, and you don’t need to run docker login manually, as there are GitHub actions for that purpose. We then push the resulting image to Docker Hub, using it merely as a registry. I hope this bug gets resolved quickly; for now, we haven’t received any response to our support request yet.
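For reference, the steps our runner effectively executes boil down to something like this (the image name, username, and the DOCKERHUB_TOKEN variable name are all placeholders; in the actual workflow the token comes from a GitHub secret and the login/push steps are handled by the official actions):

```shell
#!/bin/bash
set -e  # stop on the first failing step

# Log in with the personal access token held in a GitHub secret,
# exposed to the job here as DOCKERHUB_TOKEN (placeholder name).
echo "$DOCKERHUB_TOKEN" | docker login --username myusername --password-stdin

# Build the image on the runner and push it to Docker Hub,
# which now serves purely as a registry.
docker build -t myorg/myimage:latest .
docker push myorg/myimage:latest
```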