The correct way to use gpg inside a Dockerfile

Hi all.
I’m running into trouble building an image containing GnuPG commands, which need the user to enter a password.
I’m using GnuPG inside the Dockerfile but cannot make it run correctly. I started without gpg-agent: when running gpg --sign, it is supposed to prompt for a password, and it failed. So I moved to gpg-agent, tried to change the pinentry method to the CLI one, and used echo to pipe the password into gpg. That part of the Dockerfile looks like this (the base image is debian:jessie):

USER tails
RUN  # some gpg commands here, just to initialize the .gnupg directory
RUN touch /home/tails/.gnupg/gpg-agent.conf \
        && echo 'pinentry-program /usr/bin/pinentry-curses' > /home/tails/.gnupg/gpg-agent.conf \
        && gpg-agent --daemon --options /home/tails/.gnupg/gpg-agent.conf
RUN echo password | gpg --sign-key --no-tty --passphrase-fd 0 key

But it still failed, saying it cannot get input from a TTY. So I searched around, and it seems gpg-agent should be running before any gpg commands. I tried to force gpg to connect to the agent with:

RUN gpg-connect-agent reloadagent /bye

It didn’t work. gpg-connect-agent printed an error: gpg-connect-agent: can't connect to the agent: IPC connect call failed.

So I tried to remove the gpg line and let gpg-agent run before any gpg commands. This time it complains that the signing key file doesn’t exist, even though it should have been created by the first gpg run.

What is the correct way of running gpg during the image build stage? Thanks for any help.

Don’t.

If it’s possible to run gpg --sign, then you’ve embedded your private key into a Docker image in a way that can’t be removed later, and it will be susceptible to offline attacks.

If you’ve already built and distributed an image this way, you should consider your private key compromised and revoke it.

I’ve found that for many builds you need some sort of preprocessing step that does things like compilation before docker build is ever invoked. I’m not aware of any standard for this, but we have a local convention that each Dockerfile has a setup.sh script next to it that does whatever work is necessary. I’d do the signing there, where the script can use the keys available on the host system.
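A minimal sketch of that convention, assuming a setup.sh sitting next to the Dockerfile (the artifact name, key ID, and image tag are all hypothetical placeholders):

```shell
#!/bin/sh
# setup.sh -- pre-build script run on the HOST, next to the Dockerfile.
# The private key never enters the build context; only the detached
# signature does.
set -e

# Sign the artifact with the host's keyring (ABCD1234 is a placeholder key ID).
gpg --local-user ABCD1234 --detach-sign --output app.tar.gz.sig app.tar.gz

# The Dockerfile can now simply COPY app.tar.gz and app.tar.gz.sig.
docker build -t myapp:latest .
```

The point of the split is that gpg runs with the host's keyring and pinentry, so the build itself never needs interactive input or secret material.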

Two comments on this sequence:

(1) After the “some gpg commands to initialize .gnupg” step, there is a Docker image layer that contains key material. You can find the image ID with docker history, docker run that specific image, and docker cp the key out.
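To make (1) concrete, the extraction goes roughly like this with the classic builder (the image name, layer ID, and keyring path are assumptions; on jessie-era GnuPG the secret keyring is typically ~/.gnupg/secring.gpg):

```shell
# List the image's layers; intermediate layer IDs appear in the output.
docker history myimage

# Start a container from the layer created right after the gpg init step
# (abc123def456 is a placeholder for an ID taken from the output above).
docker run --name leak abc123def456 true

# Copy the secret keyring out of that (now stopped) container.
docker cp leak:/home/tails/.gnupg/secring.gpg ./extracted-secring.gpg
```

Nothing here requires any special access beyond having the image, which is why distributing it is equivalent to distributing the key.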

(2) gpg-agent --daemon launches a background process, but once it returns and the RUN command finishes, whatever background processes were launched as part of that RUN are killed. (So by the next RUN command, the agent isn’t running any more: each RUN executes in its own container.)
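A minimal illustration of (2), independent of gpg:

```dockerfile
# Each RUN executes in a fresh container, so nothing started in the
# background survives into the next instruction.
RUN sleep 1000 &
# By the time this RUN starts, the sleep process above is already gone.
RUN ps aux
```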

Thank you so much for such an informative reply.
I see; for now I’ll simply give up on using gpg inside the Docker build, for security reasons. Luckily, because of this problem, I haven’t published the image yet. Basically it was only meant as a temporary image for running an application from another distro.
BTW, if I’d like a Docker image to run some daemon that persists, what should I do? The way I can think of is
RUN some daemon && other applications
But this way the daemon only persists until the end of that RUN, and I seriously can’t put all the subsequent commands into a single RUN; that would be messy. Are there any recommended tricks for doing this?
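For context, the single-RUN pattern I mean would look something like this (mydaemon and the client commands are hypothetical placeholders):

```dockerfile
# Everything that needs the daemon has to share one RUN, because each
# RUN instruction gets its own container.
RUN mydaemon --daemonize \
    && first-command-that-needs-the-daemon \
    && second-command-that-needs-the-daemon
```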