Passing environment variables to Docker Daemon

Generally speaking, how can I have the Docker daemon use environment variables?

In my specific scenario I need to pass the AWS credentials to the docker daemon in order to configure the awslogs log-driver.

What I get with a basic test such as:
docker -D run --rm -it --log-driver=awslogs --log-opt awslogs-region=eu-west-1 --log-opt awslogs-group=test-group --log-opt awslogs-stream=hello-world hello-world

is

C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Failed to initialize logging driver: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors.

While the AWS credentials are available as environment variables (the AWS CLI works like a charm), the Docker daemon is not using them. Any ideas?


My current configuration from docker version:
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:15:28 2016
OS/Arch: windows/amd64

Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:15:28 2016
OS/Arch: linux/amd64

You can configure daemon.json in Docker for Windows: https://docs.docker.com/docker-for-windows/#/docker-daemon

Details here: https://docs.docker.com/engine/reference/commandline/dockerd/#/daemon-configuration-file
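For reference, a daemon.json that sets awslogs as the default log driver might look like this. This is only a sketch using the option values from the command above; log-driver and log-opts are documented daemon.json keys, but note that all log-opts values must be strings:

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "eu-west-1",
    "awslogs-group": "test-group"
  }
}
```

On Docker Desktop this file is edited through the Settings UI linked above; the daemon must be restarted for changes to take effect.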

I donā€™t know for certain whether itā€™ll work for your use case, though.

Out of curiosity, why do you want to log to AWS from Docker for Windows?

Hi Michael,
I already tried that; itā€™s not working. If I add the AWS credentials to the JSON, Docker gives an error when restarting.
As far as I understand, only a predefined list of options can be configured through the JSON:

My goal is to have my container logs published to AWS CloudWatch (Iā€™m using Windows as my development environment).

Iā€™m actually attempting to do the same thing on Docker Desktop (Windows/WSL) so that I can emulate in development what production Docker images running on EC2 instances and logging to awslogs will be doing.

The docker-desktop Linux distro is a special-purpose distro and does not behave like a typical Linux system: adding environment variables to /etc/environment doesnā€™t get them to the daemon, and adding /root/.aws/credentials doesnā€™t get them picked up by the daemon either. A ps -ef | grep docker shows /usr/local/bin/dockerd, which doesnā€™t exist in /usr/local/bin.

So thereā€™s no ā€œtypicalā€ way to get the AWS credentials to the daemon on Docker Desktop running Linux containers on Windows.

I have a Reddit post as well, where I commented that Iā€™m looking into the awslogs source code in moby/moby to see whether adding the credentials as log-opts would work. It doesnā€™t look like it would be that difficult. Iā€™m setting up a Go dev environment and learning enough Go to make the changes and test them.

Yes, it is. Docker Desktop runs a virtual machine. The OS inside is LinuxKit (Alpine-based). Inside the virtual machine, different components run in containerd containers, and the Docker daemon runs in the container called ā€œdockerā€. Even if you can log in to the virtual machine, the Docker daemon will not see those variables.

I would read this documentation:

and try the highlighted method. I canā€™t try it myself, because I donā€™t use AWS, but Docker Desktop mounts your home directory into the virtual machine, and it is accessible by the Docker daemon.

You can provide these credentials with the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables, the default AWS shared credentials file (~/.aws/credentials of the root user), or, if you are running the Docker daemon on an Amazon EC2 instance, the Amazon EC2 instance profile.
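For completeness: on a regular Linux host managed by systemd (this wonā€™t help inside the Docker Desktop VM, but it applies to an EC2 instance or a self-managed Docker install), the usual way to hand environment variables to the daemon process is a systemd drop-in. A sketch with placeholder values:

```
# /etc/systemd/system/docker.service.d/aws-credentials.conf
# Placeholder credentials -- substitute your own.
[Service]
Environment="AWS_ACCESS_KEY_ID=AKIA..."
Environment="AWS_SECRET_ACCESS_KEY=..."
```

After creating the file, run sudo systemctl daemon-reload and sudo systemctl restart docker so dockerd is restarted with those variables in its environment.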

I also found a similar issue on StackOverflow with an accepted answer:

It is about Kubernetes, but the post refers to that folder directly. For you that is probably not necessary.

Update:

Sorry, I missed that part of your post.

I have a default AWS profile with the proper permissions set up in my Windows home under %USERPROFILE%/.aws, and if I launch:

docker run --name test -d --log-driver=awslogs --log-opt awslogs-group=/docker/busybox2 --log-opt awslogs-create-group=true --log-opt awslogs-region=us-east-1 busybox sh -c "while true; do $(echo date); sleep 1; done"

I get:

docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors.

So I donā€™t think the Docker daemon is seeing my Windows home folder.

After your last reply I realized that the folder should be in the home of the root user, but Docker Desktop does not mount it that way. It is accessible by the Docker daemon, but the daemon will not use it. You canā€™t even create the folder manually, because the root filesystem is read-only. Do you need any Docker Desktop-specific feature for this task? If you donā€™t, you can try installing Docker in a WSL distribution and testing the logging driver there.
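If you go the WSL route, the rough shape would be something like the sketch below. This assumes docker-ce is installed inside a plain WSL2 distro such as Ubuntu (a separate dockerd, not Docker Desktopā€™s), and the credentials are placeholders:

```
# Inside the WSL distro. Placeholder credentials; substitute real values
# or rely on /root/.aws/credentials instead.
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
# sudo -E preserves the exported variables for the root-owned daemon process.
sudo -E dockerd > /tmp/dockerd.log 2>&1 &
```

With the daemon started that way, the awslogs driver should find the credentials through the standard environment-variable provider.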

Itā€™s mainly so that I can use awslogs in development the same way as in a live deployment.

I am currently making updates to moby/moby for the awslogs log driver to take log-opts for AWS credentials (and also to make the log group dynamic, like you can with the log stream via tag; I might just add an opt that lets you pick whether the tag applies to the group or the stream). Itā€™s my first contribution, so itā€™ll be a while before I have the whole process down, and Iā€™m not sure how likely it is to be accepted, but it would help for:

  1. Docker desktop: Allow config AWS credentials
  2. Any docker installation where you want the docker awslogs driver to use specific AWS credentials. I actually have a use case for this now for a vendor. An EC2 instance runs in our account but weā€™d like the docker logs to go directly to their CloudWatch logs, and not have to mess with a subscription. CloudWatch logs donā€™t support cross-account logging so we actually need the logger to have different credentials (from their account) than the EC2 instance profile.
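For what itā€™s worth, the usage I have in mind would look roughly like this. The awslogs-access-key-id / awslogs-secret-access-key option names are hypothetical, i.e. a sketch of the proposed change, not a feature in any released Docker:

```
# HYPOTHETICAL log-opts -- illustrating the proposed moby/moby change only.
docker run -d --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=/docker/busybox2 \
  --log-opt awslogs-access-key-id=AKIA... \
  --log-opt awslogs-secret-access-key=... \
  busybox sh -c "while true; do date; sleep 1; done"
```

That would let a single daemon log different containers to different AWS accounts, which the daemon-wide credential chain canā€™t do.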

First of all, thank you for helping the community with your contribution! :slight_smile:

If you could add an option to read the credentials from a custom file, instead of adding the credentials to the daemon config, I think it would be more likely to be accepted.

If you want to make it easier to use on Docker Desktop, you could also open a feature request in the roadmap repository: