I am relatively new to Docker and I have a unique idea that I’m not sure is possible to implement securely. I would greatly appreciate your insights and guidance on this matter.
I want to create a Docker template that includes an application with a persistent database. During the initialization of the Docker image, I would like to store an API key in the database. The goal is to ensure that the API key remains solely within the Docker environment, even when I am the one hosting the Docker image on my own systems.
However, I’m concerned about data privacy and the potential for unauthorized access. Here’s my understanding of the issue:
The Docker image and the files used to build it will be publicly accessible in public repositories, so users can verify that their API key is not used anywhere except during image creation.
The user-provided API key and the user's admin credentials will be stored in the database within the Docker environment.
The Docker image will be hosted on my VPS, and I will have access to the underlying host system.
While the Docker volume feature allows for persistence, it also raises concerns about potential access to the database files on the host system. With root access on the host, I could potentially change the database's root password and gain access to the data.
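For example (just a sketch, with a hypothetical volume name `myapp_dbdata`; the real names would depend on my setup), as root on the host I could simply do:

```
# find where Docker stores the named volume on the host
docker volume inspect myapp_dbdata --format '{{ .Mountpoint }}'
# usually something like /var/lib/docker/volumes/myapp_dbdata/_data

# ...and then read the raw database files directly as root
sudo ls -la /var/lib/docker/volumes/myapp_dbdata/_data
```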
My main concern is ensuring that the user's API key remains secure and inaccessible to me or any other party. I want to give users the ability to host the application on my systems, set their own API keys, and keep full control over their data. (They can also copy the Docker template and host it themselves or on a secured hosting service of their choice.)
I have extensively searched for a solution but haven’t found a definitive answer. My hope is that someone in this community has encountered a similar challenge and found a way to address it securely.
Thank you in advance for your time and assistance. I look forward to hearing your insights and suggestions.
I feel you need to rephrase your post in order to make more sense.
It can’t be right that everything in your post is an image.
A Docker image typically encapsulates a minimal OS, a main application/service, its dependencies, and a more or less clever self-made entrypoint script. It is just a delivery artifact, which is either at rest on disk or in transport when pulled from or pushed to a registry.
When you use docker run or docker compose, you create a container based on the image and start it; the entrypoint script prepares the configuration and then starts the main application.
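A minimal illustration of that lifecycle (using nginx purely as a stand-in image):

```
# the image is pulled from a registry: a delivery artifact at rest
docker pull nginx:alpine

# docker run creates a container from the image and starts it;
# the image's entrypoint prepares the configuration, then the main process runs
docker run -d --name web nginx:alpine
```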
My image may seem complex, but it’s actually quite simple. I have a basic Nuxt app connected to a database.
My goal is to deploy multiple instances of this application, one each time a new client wants to use it.
I want to generate a new instance from my template as a new organization.
My initial thought was to build an image from my template first so I can set some environment variables.
The ultimate goal is to simplify the deployment process for a new “client” and to provide the image for free if they wish to host it themselves.
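Roughly what I had in mind, as a sketch (ORG_NAME and API_KEY are placeholder build args, and I'm assuming Postgres here just for illustration):

```
# build a per-client image from my template, baking in org-specific values
docker build --build-arg ORG_NAME=client1 --build-arg API_KEY=... -t myapp:client1 .

# run the new instance with a persistent volume for its database
docker run -d --name myapp-client1 \
  -v client1_dbdata:/var/lib/postgresql/data \
  myapp:client1
```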
My main concern is for users who choose to host on my server rather than on their own. I want to assure them that I can't access the API keys they set upon initialization (as the API incurs usage costs). However, from what I've read so far, it seems there's always a way to read them, either from the SQL database or from the environment variables stored on my host. Because as root, if the database is stored physically on my system, I could change its root password and connect to it.
The problem actually is not the complexity of the image - I have no idea what it does. The posts are written too ambiguously to make sense of.
With access to the Docker engine (the docker cli, Portainer, or another Docker management tool), you can exec into every container and access every environment variable set in it. Even swarm secrets cannot be protected from a user with access to the Docker engine.
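For example, anyone with engine access can do:

```
# dump every environment variable of a running container
docker exec <container> env

# swarm secrets are mounted as plain files and are just as readable
docker exec <container> cat /run/secrets/<secret_name>
```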
You might want to check whether a secrets vault like HashiCorp Vault, plus modifications to your app code to consume secrets directly from the vault, could help mitigate the problem.
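As a rough sketch of that pattern (assuming Vault's KV v2 engine and a hypothetical secret path `secret/myapp`):

```
# the user writes their key into Vault, not into the image or the environment
vault kv put secret/myapp api_key=THE_USERS_KEY

# at runtime the app authenticates to Vault and reads the secret over the API
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     https://vault.example.com:8200/v1/secret/data/myapp
```

Keep in mind this only shifts the trust boundary: the app then holds a Vault token at runtime, so someone who controls the Docker engine can still get at the secret through the running container.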