Config.json stored in other directory (not ~/.docker/)

Is there any way to point Docker to look at a directory other than ~/.docker/ for the config.json auth credentials? I have write access to only one directory that is mounted in my container; the rest of the container is a read-only filesystem. Thoughts?

Yes, there is:


Name       Type     Default         Description
--config   string   /root/.docker   Location of client config files

Environment variables

The following environment variables are supported by the docker command line:

Variable Description
DOCKER_CONFIG The location of your client configuration files.
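A minimal sketch of both forms; the directory path and registry name below are placeholders, not values from the docs:

```shell
# Point the Docker CLI at an alternative config directory via the
# environment variable; every later docker command reads and writes
# config.json under this directory:
mkdir -p /tmp/docker-config
export DOCKER_CONFIG=/tmp/docker-config

# Equivalent one-off form using the global flag (registry is a placeholder):
# docker --config /tmp/docker-config login registry.example.com
```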

I also have a gist where I used it:

Custom shell to use Docker Desktop and Rancher Desktop on the same machine using their own clients


This is exactly what I needed so thank you!

EDIT: when I point docker to my config dir, it still tries to rename the config file which can’t happen since I’m on a read-only file system. The exact error is:

Error saving credentials: rename /mydir/config.json1701699760 /mydir/config.json: operation not permitted

It’s as if under the hood docker still needs write access despite the fact that I’ve set the config file myself? Any thoughts?

Yes, Docker needs write access: even if you change the default location, the config is still used for the same purpose, and Docker saves some information in the config JSON. Depending on the credential helper, that can be the credentials themselves or at least the registries you have logged in to. Why it tries to rename the file I have no idea, but I am also not sure why the filename has numbers appended to the extension. How did you set the variable exactly? The value of the variable has to be the path of the folder, not the file itself.

Why did you expect Docker not to write the file just because of the changed location? It looks like both files are in the same folder, or did you just use /mydir for both in your post as an example?

I ran the following command:

docker --config /mydir/ login <registry_name>

where /mydir/ contains a preloaded, correctly formatted config.json specifying my registry and b64-encoded auth (which I created as root in an initContainer and which all users and groups have read access to)
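For illustration, a sketch of how such a file could be pre-generated in an init container, assuming plain base64 basic auth; the path, registry name, and credentials here are placeholders:

```shell
# Base64-encode a "user:password" pair and write a minimal config.json,
# roughly what an init container could do (registry.example.com and the
# credentials are placeholders):
AUTH=$(printf '%s' 'user:password' | base64)
mkdir -p /tmp/mydir
cat > /tmp/mydir/config.json <<EOF
{
  "auths": {
    "registry.example.com": {
      "auth": "${AUTH}"
    }
  }
}
EOF
```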

I was hoping that if I had a pre-written config.json, docker wouldn’t need write access to login to my registry.

It seems docker still tried to write my config.json to some sort of temp file, which it called /mydir/config.json1701699760 if I had to guess. Is there any way I could log in without needing write access? I’m working with a read-only filesystem across all users and groups at run-time.

So the config with the number at the end was the copy, not the one that was copied. That makes sense. I am not sure what you want is possible; I have never thought of that. But if I understand it correctly, that config file is created in a Kubernetes init container, so you could create a tmpfs volume and copy an existing config.json to that tmpfs volume when the container starts. That way you could keep the read-only filesystem and still have the config file in memory. Since the file is small, it wouldn’t take much memory.

So the main container already has access to config.json; the issue is that when I try to docker login in the main container, it throws the “read-only file system” error.

Separate question: what role does the docker socket play in all of this? Does docker login need to happen once on the docker socket, and then I’m good to go? I was looking into whether I could mount my cluster’s docker socket into my container…

I know, but if you move the file to the memory, that will be writable.

Nothing. The login basically makes sure you always send the proper credentials when a request requires authentication. Those credentials have to be stored somewhere. How long the login stays active depends on the credential store, but it could be valid until you log out.

Are you trying to use a single config file to allow different users to log in to the same registry at the same time? I don’t think that can work. Client configs are meant to be used by one user. If you allow multiple users to log in to the same registry, each login would override the previous one. If you have one service account and multiple users can use the same service account, once you have logged in, everyone could use the credentials. You can also read about the credential stores here:

I wasn’t clear enough. The read-only filesystem is a k8s policy constraint; regardless of the unix permissions on the config.json file, I won’t be able to write to it in my pod.

That’s what I would assume. It seems strange to me that docker login would need to write to config.json even if it is pre-loaded with b64 auth. I’m aware of the different cred store techniques, and I think I’d need write access for all of them (pass is the applicable cred store for my use-case, but I was hoping I could get away with b64). If I can use docker-credential-pass non-interactively, I could generate auth at build-time. I’m not sure if docker would still write to config.json though; I’d assume not.
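As an illustration, a config.json that delegates credential storage to docker-credential-pass contains just a credsStore key instead of embedded auths; the /tmp path below is a placeholder, and this sketch assumes the helper binary and an initialized pass store are available at runtime:

```shell
# Hypothetical config.json delegating storage to docker-credential-pass;
# docker would then call the docker-credential-pass binary instead of
# keeping base64 auth in this file:
mkdir -p /tmp/config-pass
cat > /tmp/config-pass/config.json <<'EOF'
{
  "credsStore": "pass"
}
EOF
```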

That shouldn’t affect the memory, and I’m talking about writing to memory, not to the container’s root filesystem. tmpfs is pretty useless if you can’t write to it. Maybe I just don’t know about the policy you are referring to. Do you have a link to it?

As far as I know, an emptyDir volume with medium: Memory should work.
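A minimal sketch of that approach, with placeholder names throughout; the init container writes config.json into an in-memory volume that the main container mounts at the location DOCKER_CONFIG points to:

```yaml
# Sketch: in-memory emptyDir mounted at the config location (pod, volume,
# and image names are placeholders). The init container seeds config.json;
# the main container keeps its read-only root filesystem, and docker login
# rewrites the file only inside the in-memory volume.
apiVersion: v1
kind: Pod
metadata:
  name: docker-client
spec:
  volumes:
    - name: docker-config
      emptyDir:
        medium: Memory
  initContainers:
    - name: write-config
      image: busybox
      command: ["sh", "-c", "echo '{}' > /config/config.json"]
      volumeMounts:
        - name: docker-config
          mountPath: /config
  containers:
    - name: main
      image: docker:cli
      env:
        - name: DOCKER_CONFIG
          value: /config
      volumeMounts:
        - name: docker-config
          mountPath: /config
```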