How to transfer files between Docker containers?

I have a docker-compose file that defines two services: Caddy and EMQX. Caddy is a web server that generates SSL certificates using Let’s Encrypt, and EMQX is an MQTT broker that needs to use these certificates for the MQTTS protocol.

I want to share the certificates between the two containers, so I have created a volume named certs and mounted it to both the /data/caddy folder in the Caddy container and the /opt/emqx/etc/certs folder in the EMQX container.

However, I have encountered a problem with the file ownership. Caddy creates the certificates with the root owner, but EMQX runs as the emqx user. This causes the certificates in the EMQX container to be owned by root instead of emqx, which prevents EMQX from using them.

I know I can use the chown command to change the ownership manually, but I don’t want to do that every time. Is there a way to automate this process, or a better way to share the certificates between the containers? I am looking for a solution that works with docker-compose. Thank you.

A container is “just” an isolated process, and a volume just represents a private part of a file system.

Either both processes run with the same user id, or you need to make sure the writing process writes the files with read permissions for a group id it shares with the reading process.

This is simple unix file permissions, regardless of whether those processes run on the host or inside a container.
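As a plain-shell sketch of the group-permission idea (the path, the `certshare` group name, and the mode are illustrative assumptions, not anything from the images involved):

```shell
# Writing side (the process producing the certificates):
umask 027                               # new files: owner rw, group r, others nothing
touch /data/certs/cert.pem              # hypothetical shared path on the volume
chgrp certshare /data/certs/cert.pem    # group shared with the reading process
chmod 640 /data/certs/cert.pem          # owner read/write, group read-only
```

The reading process then only needs to run with `certshare` as one of its supplementary groups to read the file.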

I am not sure how docker compose would be able to help in this scenario.

The least-effort solution would be to actually terminate tls for the mqtt traffic in caddy, and forward it unencrypted to the target. Of course this is only applicable if you don’t rely on mtls.
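For illustration only: terminating TLS for raw TCP traffic such as MQTT is not part of stock Caddy, it requires the third-party caddy-l4 plugin to be compiled in. Assuming that plugin, a rough JSON config sketch (the listener port and the `emqx:1883` upstream address are assumptions for this setup) could look like:

```json
{
  "apps": {
    "layer4": {
      "servers": {
        "mqtts": {
          "listen": [":8883"],
          "routes": [
            {
              "handle": [
                { "handler": "tls" },
                {
                  "handler": "proxy",
                  "upstreams": [{ "dial": ["emqx:1883"] }]
                }
              ]
            }
          ]
        }
      }
    }
  }
}
```

Clients would connect to Caddy on 8883 over TLS, while EMQX only ever sees plain MQTT on 1883 and never needs the certificate files at all.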

The next least-effort solution could be to run caddy as an unprivileged user with the same uid as the emqx container. Of course this will only work if the caddy image supports it.
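If the image tolerates it, this is just a compose-level override; a sketch (the uid/gid 1000:1000 is an assumption matching the emqx user, and the mount paths are illustrative):

```yaml
services:
  caddy:
    image: caddy:alpine
    user: "1000:1000"            # run caddy with the same uid/gid as emqx
    volumes:
      - './Caddyfile:/etc/caddy/Caddyfile'
      - 'caddy_certs:/data'      # caddy must be able to write its data dir
```

Note that a non-root caddy can no longer bind ports below 1024 inside the container without extra capabilities, so listeners may need to move to high ports.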


I am using the official Caddy image for my Caddy container. This image does not have a user with the ID 1000, which is the ID of my user on the host machine. Therefore, when I run the container, it will map my user ID to the ID 0, which is the root user in the Caddy container. However, the EMQX image has a user with the ID 1000, which is the emqx user in the container. Therefore, it will map my user ID to the emqx user in the EMQX container.

This causes a problem when I share the certificates between the two containers using a volume. The certificates are created by Caddy with the root owner, but EMQX needs to access them as the emqx user. This prevents EMQX from using the certificates.

I think the solution would be to customize the Caddy image and add a user with the ID 1000 to it. I can use the addgroup and adduser commands in the Dockerfile to create the user and group. For example:

ARG HOSTUSER=1000
ARG HOSTGROUP=1000
RUN addgroup -g $HOSTGROUP caddy && \
    adduser -s /bin/sh -D -G caddy -u $HOSTUSER caddy

Then, I think the file ownership issue will be fixed. What is your opinion?

Maybe I just misunderstand what you wanted to say here, since your original question contained how the cert generation could actually work, but I have to note that the quoted part of your message above is not how containers work. Normally the ID of a user inside and outside of a container is the same. In case of rootless Docker your user ID could be mapped to root inside the container, but that is not conditional. Other non-root users in the container would have an ID that is much larger outside, but it doesn’t depend on whether the user exists on the host or not. All that matters is the user ID.

Again, it is not conditional. It doesn’t matter whether the user exists in the container or not. A user is just a name mapped to an ID. How did you install Docker? Please share a link.


I have installed Docker from the official site and added my user to the docker group so I can run it without sudo. My main question is how I can generate certificates for my EMQX broker using Docker. I have been looking for a solution and came up with an idea, but I think I am wrong about how containers work. Do you have any idea how I can solve my issue? I want to generate certificates for my EMQX broker manually. The other parts of my text describe my setup for this solution, and you can ignore them if you have a better solution.

I asked for a link because “official site” might be obvious to you, but we have seen lots of misunderstandings because people thought of something else. Not to mention there is a difference between Docker Engine and Docker Desktop, and I also mentioned “rootless Docker”.

The topic title is about transferring files and this is what we focused on. @meyay wrote the same in his last post as I would have. There is no Docker solution here. It is just caddy, letsencrypt and another service. Even without Docker, if your services run as different users, you need to make sure one can read the files the other generates. So at least make the files readable for the other user.

If caddy can run as another user, and that makes Let’s Encrypt generate certificates that are readable by the other service, that would indeed solve your problem. Just creating a user in a container would not solve anything. If it does, then we still don’t understand how you installed and configured Docker to work that way, or how the images you use work.


I apologize. You are right. I followed the instructions from these web pages:

Install Docker Engine on Ubuntu | Docker Documentation

Post-installation steps for Linux | Docker Documentation

I just ssh to my server with a non-root user (the same user I installed Docker with) and run the docker compose up -d command. The easiest solution for me would be to change the owner of the generated certificates to a user with the ID 1000, but I think there is no obvious way to do that.
And this is my docker compose:

    services:
      caddy:
        image: caddy:alpine
        volumes:
          - './Caddyfile:/etc/caddy/Caddyfile'
          - 'caddy_certs:/data/caddy/certificates/'

      emqx:
        image: emqx:5.3
        volumes:
          - 'caddy_certs:/opt/emqx/etc/certs/caddy'

    volumes:
      caddy_certs:

To automate the process of sharing certificates between Caddy and EMQX containers in a docker-compose setup, you can utilize a custom entrypoint script that runs within the Caddy container and handles the certificate ownership before starting the Caddy server.

This script can be configured to change the ownership of the certificates to the emqx user, ensuring that EMQX can access and utilize them for the MQTTS protocol. This approach eliminates the need for manual intervention and provides a consistent solution for sharing certificates between the containers.

Isn’t caddy creating the ssl certificate using Let’s Encrypt? How would an entrypoint script help when caddy creates a new certificate after startup?

This is already the cleanest solution (that is, if caddy allows being executed as an unprivileged user).

You might find this issue interesting: Run as non-root inside container · Issue #104 · caddyserver/caddy-docker · GitHub
