Docker Hub autobuilds started to fail a few hours ago (copying from private repos within the same organization)

Hello,

Everything has been working fine for several months until a few hours ago. Here are the logs:

2022-09-26T16:41:41Z #6 FROM docker.io/private/app-common:latest
2022-09-26T16:41:41Z #6 resolve docker.io/private/app-common:latest 0.0s done
2022-09-26T16:41:41Z #6 ERROR: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
2022-09-26T16:41:41Z
2022-09-26T16:41:41Z #7 [internal] load build context
2022-09-26T16:41:41Z #7 transferring context: 482.35kB 0.0s done
2022-09-26T16:41:41Z #7 CANCELED
2022-09-26T16:41:41Z ------
2022-09-26T16:41:41Z > FROM docker.io/private/app-common:latest:
2022-09-26T16:41:41Z ------
2022-09-26T16:41:41Z ERROR: failed to solve: failed to load cache key: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
2022-09-26T16:41:41Z Build failed using Buildkit (1)

docker.io/private/app-common:latest is a private repo within the same organization. I wonder why authorization fails all of a sudden considering that it worked for quite some time until now (as you can see in the screenshot).

Do I need to do any extra settings to be able to copy from private repos within the same organization?

Here are logs from when authentication worked:

#6 [stage-0 1/33] FROM docker.io/library/alpine:3.16@sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad
#6 DONE 0.0s
#7 [auth] private/app-common:pull token for registry-1.docker.io
#7 DONE 0.0s
#8 [internal] load build context
#8 transferring context: 1.23MB 0.1s done
#8 DONE 0.1s
#9 FROM docker.io/private/app-common:latest
#9 resolve docker.io/private/app-common:latest
#9 resolve docker.io/private/app-common:latest 0.4s done
#9 DONE 0.4s

More info: failed builds have the following in their logs:

WARNING: Support for the legacy ~/.dockercfg configuration file and file-format has been removed and the configuration file will be ignored

I rebuilt one of my automated builds to see if it still works. It worked. I also looked into that dockercfg message, because I remembered seeing a similar message before, but in my case it said "will be removed", not "has been removed".

When I first noticed something was wrong with the virtual machine versions, I reported it on GitHub.

It looks like docker in your VM has changed somehow, but the configuration file is still there.

Please report it as a bug in the hub-feedback repository.

I tried to search for similar issues, but before you open the issue, do the same; maybe you will find something that I could not. There was an issue a long time ago, in 2016,

but the message "has been removed" is suspicious.

I also checked the status page of the Docker services, and I don't see any known issue there.

I am subscribed to that page to get notifications when something happens, but I didn't get anything recently.

Thank you for your reply! I managed to get the autobuilds working again by explicitly creating an access token and running a docker login that uses it inside hooks/pre_build. The private Docker image I'm pulling from, hosted in the same organization, is found now and the pull error is gone.

I think there was an update to Docker Hub today, because the build logs look different now than they did 12+ hours ago, e.g. each line is timestamped.

Here is my solution:

  1. Create an access token:
    Go to Docker Hub (Account Settings → Security) and create a new access token. Put the access token in a file, e.g. hooks/token.
  2. Make a pre_build hook where you authenticate first:
    Here are the contents of my hooks/pre_build file:
#!/bin/bash

cat hooks/token | docker login --username <username> --password-stdin

where you need to replace <username> with the username of the account that owns the token.

Also make sure the pre_build file is executable: run chmod +x hooks/pre_build
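
If it helps, this is roughly how things are laid out from the root of the source repository that is linked to the autobuild (a sketch; it assumes the Dockerfile sits at the repository root and the hooks directory goes next to it):

mkdir -p hooks
# hooks/pre_build holds the docker login shown above
printf '%s\n' '<access-token>' > hooks/token   # avoid committing a real token to a public repo
chmod +x hooks/pre_build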


@remusmp - We too are facing the same issue. It would be really appreciated if you could provide the docker login syntax for the pre_build hooks.

@docker_support

This started to happen on our builds as well. We have multiple repositories within the same organization, and we use FROM private_build as build001.

It fails with the same error as the original author of this post mentioned.
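
In case it helps with debugging, the same symptom can be reproduced outside the autobuild with a plain pull (a sketch; <organization>/<private-image>, <username>, and TOKEN are placeholders for your own private repository, account, and access token):

docker logout
docker pull docker.io/<organization>/<private-image>:latest    # fails with "pull access denied ... insufficient_scope"
echo "$TOKEN" | docker login --username <username> --password-stdin
docker pull docker.io/<organization>/<private-image>:latest    # succeeds once the pull is authenticated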

Hello. I have edited my first reply and marked it as a solution to this ticket. Please let me know if it helps.

Thank you for the info… I was just working this out and saw your update…

Thanks!

This is the same thing that we started experiencing recently.

Thank you for sharing. I made one change: instead of storing the token in your code, you can create a Build Environment Variable and put it there. I used TOKEN. Then modify the code like this:

echo "$TOKEN" | docker login --username <username> --password-stdin

It worked for me.
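
For completeness, the whole hooks/pre_build with this change would look roughly like this (a sketch; TOKEN is the name of the build environment variable and <username> is a placeholder for the account that owns the token):

#!/bin/bash
set -e
# TOKEN is injected by Docker Hub as a build environment variable, so it never lives in the repo
echo "$TOKEN" | docker login --username <username> --password-stdin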


Thanks! I think the build environment variable approach you're suggesting is actually better than storing the token in a file.

Yes, I'm recently seeing the same issues. Thanks for your solution. A bit of clarification: where does the hooks/pre_build file go? Is it in the GitHub repo that you link to the Docker Hub image?

I got it working… many thanks for the help! It would have been nice for Docker Hub to mention this ahead of time.

The hooks/pre_build file goes into the Git repo. I would also recommend storing the token in a build environment variable rather than in a file in the Git repo. Unfortunately I cannot edit my answer above.

We've changed our setup to build the images from GitHub using GitHub Actions and local GitHub runners. That way, you can use GitHub secrets to hold the personal access token, and you don't need to run "docker login" manually, as there are GitHub Actions for that purpose. We then push the resulting image to Docker Hub, using it merely as a registry. I hope this bug gets resolved quickly; for now, we haven't received any response to our support request yet.
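
Roughly, the steps on the runner boil down to the following (a sketch in plain shell; in a typical workflow the login and the build/push would be handled by actions such as docker/login-action and docker/build-push-action, with the token stored as a GitHub secret; DOCKERHUB_USERNAME, DOCKERHUB_TOKEN, and myorg/app are placeholders):

echo "$DOCKERHUB_TOKEN" | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
docker build -t docker.io/myorg/app:latest .
docker push docker.io/myorg/app:latest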