If I understand it correctly, Docker Content Trust (Notary) is supposed to enable me to pull a Docker image from a public registry while having confidence that the image has not been compromised by a malicious actor.
However, when I experiment with Docker Content Trust, I see that on the first pull from a new repository, Docker does not seem to verify which keypair was used to sign the digest stored in Notary. On subsequent pulls from the same repository, Docker does verify that the same keypair was used as on previous pulls.
How does Docker Content Trust protect against a bad actor pushing a malicious image to a new repository? It seems as though we need some way for the Docker CLI to import a known good public key to prevent this exploit. I feel like I must be missing something here.
Notary (and TUF in general) uses a trust on first use model (TOFU).
Another example of TOFU is the first time you SSH to a host and are prompted to accept the host key fingerprint.
In Notary’s TOFU approach, rather than forcing users to accept a key fingerprint, the public key is fetched over an HTTPS connection, which does establish some level of security.
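A quick way to see the TOFU behavior for yourself (a sketch — assumes a Docker client with Content Trust support and network access to Docker Hub; the image name is just an example):

```shell
# Enable Docker Content Trust for this shell session.
export DOCKER_CONTENT_TRUST=1

# First pull from a repository: the client fetches the repo's TUF
# metadata over HTTPS and pins it locally under ~/.docker/trust/ --
# this is the trust-on-first-use step.
docker pull alpine:latest

# Subsequent pulls are verified against the pinned metadata; signed
# content from a different key would cause the pull to fail.
docker pull alpine:latest
```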
Following that issue (and any of the other issues it references) will be the best source of info for any new features that may be developed as an alternative to the current TOFU model.
Would you mind talking a bit more about how “the public key is sent over an https connection which does establish some level of security”. Does this refer to the client running “docker pull” retrieving the key from Notary? I’m not sure how this guards against a malicious actor having pushed this key to Notary in the first place.
The analogy with SSH is a good one. SSH allows you to configure your system so that it will reject a previously unknown public key. Does Docker have any way to do this?
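For comparison, this is what that looks like on the SSH side — a standard OpenSSH option, not a Docker feature (as far as I can tell, the Docker client does not expose an equivalent switch):

```shell
# Refuse to connect to any host whose key is not already present in
# known_hosts -- no trust-on-first-use prompt at all.
ssh -o StrictHostKeyChecking=yes user@host
```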
So, the only way a key can get into Notary in the first place is if the user uploading it has credentials that grant them access to push content to that repository.
For example, you wouldn’t be able to push up a key to notary for a repository in my programmerq namespace since you don’t have access. Only I can do that since I have the password for that account.
Once that public key is in Notary, it is made available only over HTTPS. This means that when you go to do a pull, your client will expect a valid SSL certificate on the Notary service.
My understanding was that the main purpose of Docker Content Trust was to allow clients to pull images in confidence even when the registry (and Notary server) was untrusted.
My particular use-case is the use of a Docker registry hosted in the cloud by a third party (not Docker).
In that case, you will want to make sure that Notary instance does the same things the public Notary service does: it uses an authentication system, serves a trusted SSL certificate, and has at least basic security measures in place.
One of the big value adds of TUF (and therefore Notary) is the TOFU model. If you’ve ever seen other signing systems that require users to manually import keys as trusted, people end up… not doing that.
There are definitely some use cases in which the TOFU model is advantageous. Perhaps future versions of Docker might include the ability to disable it, coupled with the ability to import public keys via the command line; that would certainly be useful for our use case, where we have full control over the machines that are pulling the images and want the whole process to be as watertight as possible.
Thanks for the conversation – it was very useful to get a Docker insider’s perspective on this.
Do you know if there is any way to force the pushing Docker client to use the same “tagging” (targets) key to sign digests across all Docker repositories? Perhaps by pre-populating the relevant files in the $HOME/.docker/trust directory? I’ve experimented with this a little and haven’t been able to get it to work so far.
I don’t know the procedure off the top of my head, but I believe you can use the notary command and export/import keys from one TUF repo to another. That may be one way to accomplish it, but I don’t think that you get too much value out of using the same key in two places. What problem are you trying to solve?
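A rough sketch of what that might look like with the notary CLI (I haven’t verified this end to end; the export/import subcommands landed around notary 0.6, and flag names may vary by version — treat this as a starting point, not a recipe):

```shell
# List the signing keys notary knows about locally (role and key ID).
notary key list

# Export one key by its ID, then import it on another machine so the
# same targets ("tagging") key can be used there.  <keyID> is a
# placeholder for an ID shown by "notary key list".
notary key export --key <keyID> -o targets-key.pem
notary key import targets-key.pem
```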
– if we want multiple machines to push to the same repos, we would have to ensure that they share the same tagging keys.
– if we want to audit pulling machines to make sure they have the correct certs for every repo they pull, this would be easier if the same key is used for all repos.
Another approach would be to utilize key delegation. I’m still learning about that, but the idea is that each pushing workstation has its own set of keys, and all of those keys are signed by the same root key. If one workstation is compromised, you only need to rotate out that key and not every key on all systems involved.
At the end of the day, if all your keys have been signed by your root key, you have a built-in audit path already.
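On newer Docker clients (18.03+), the delegation workflow described above is exposed through `docker trust`. A sketch — the signer name and repository names here are just examples:

```shell
# Each workstation generates its own delegation key pair.  This writes
# alice.pub to the current directory and stores the private key under
# ~/.docker/trust/private.
docker trust key generate alice

# Add alice as a signer on each repository she should push to.
docker trust signer add --key alice.pub alice example.com/myorg/repo1
docker trust signer add --key alice.pub alice example.com/myorg/repo2

# If alice's workstation is compromised, rotate out just her key
# rather than every key on every system.
docker trust signer remove alice example.com/myorg/repo1
```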
I’m trying to follow up on how to audit the root key. When multiple repos have been created by the same docker client (therefore using the same encrypted root key on its filesystem), I see slightly different public keys in the “root” role in root.json for each repo. The public keys are often identical for the first 100 characters or so but then deviate. Do you know why this is? How could I audit that the public keys were all derived from the same root key?
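One thing worth knowing here: in root.json, Notary stores each “root” role public key as a base64-encoded x509 certificate (keytype `ecdsa-x509`), and the certificate embeds per-repository details such as the repo name, so the base64 strings differ between repos even when they wrap the same underlying key. To audit, decode each cert and compare the actual public keys inside. A sketch, assuming `openssl` is available and the certs have been base64-decoded to the hypothetical file names below:

```shell
# Print the subject public key of an x509 cert in PEM form.
extract_pubkey() {
  openssl x509 -in "$1" -pubkey -noout
}

# Hypothetical file names: first base64-decode each repo's root cert
# out of its root.json, e.g.  ... | base64 -d > repo1-root.crt
if [ -f repo1-root.crt ] && [ -f repo2-root.crt ]; then
  extract_pubkey repo1-root.crt > pub1.pem
  extract_pubkey repo2-root.crt > pub2.pem
  if cmp -s pub1.pem pub2.pem; then
    echo "same underlying root key"
  else
    echo "different root keys"
  fi
fi
```

If the two extracted public keys match byte for byte, the certificates were derived from the same root key even though the certs themselves (and their base64 encodings in root.json) differ.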