Best practices for uid/gid and permissions?

Hi,

I’ve found a lot of pages that try to describe the best way to deal with permissions, but it seems they all have a different opinion. :slight_smile:

I want to use a separate user in my container, just for security reasons.
For this user I set a fixed UID and GID (9081 for both).

If I add USER myuser to my Dockerfile, everything works as expected.
But if I use a bind mount, I of course end up with wrong permissions on my host or in my container.

Because of the privileged user I cannot change the owner of the directories.

Based on multiple pages and projects, I’m currently using this setup:

In Dockerfile (very simplified version)

create myuser (with same group) and uid/gid 9081
chown -R myuser:myuser /app
chmod -R a+rwX /app
COPY docker-entrypoint.sh /
VOLUME /app/data
ENTRYPOINT ["/docker-entrypoint.sh"]
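
Spelled out, the simplified Dockerfile above might look roughly like this. This is only a sketch: the base image, the `app/` source path, and the exact `useradd`/`groupadd` calls are assumptions; only the UID/GID 9081, the `chown`/`chmod`, the volume, and the entrypoint are taken from the post:

```dockerfile
FROM debian:bookworm-slim

# Fixed UID/GID as described above (9081 for both)
RUN groupadd -g 9081 myuser \
 && useradd -u 9081 -g 9081 -m myuser

COPY app/ /app/
RUN chown -R myuser:myuser /app \
 && chmod -R a+rwX /app

COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh

VOLUME /app/data
# Note: no USER directive on purpose; the entrypoint drops privileges itself
ENTRYPOINT ["/docker-entrypoint.sh"]
```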

As you can see, there is no USER directive.
In docker-entrypoint.sh I run usermod and groupmod if MAP_UID and MAP_GID are set,
so myuser gets the requested UID and GID.
Due to the chmod in my Dockerfile, my app is world-readable and can also be used with the new UID/GID.

To ensure correct permissions I also run
chown myuser:myuser /app/data
so the volume is owned by myuser
(I omit -R for faster startup).

As a last step in my entrypoint I start my app with gosu to switch to my app user.
According to some posts, gosu is needed for signal handling, because the script has to handle signals from Docker to shut down gracefully.
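
The entrypoint described above might look roughly like this (a sketch: the user name myuser, the MAP_UID/MAP_GID variable names, the non-recursive chown, and gosu are from the post; everything else, like `/bin/sh` and the exact checks, is an assumption):

```shell
#!/bin/sh
set -e

# Remap myuser to the requested UID/GID if provided (requires root)
if [ -n "${MAP_UID:-}" ]; then
    usermod -u "$MAP_UID" myuser
fi
if [ -n "${MAP_GID:-}" ]; then
    groupmod -g "$MAP_GID" myuser
fi

# Non-recursive chown of the volume root, for faster startup
chown myuser:myuser /app/data

# exec replaces this shell, so the app becomes PID 1 and receives
# signals (SIGTERM etc.) directly; gosu drops privileges to myuser
exec gosu myuser:myuser "$@"
```

The `exec` is what makes the signal handling work: without it, the shell would stay as PID 1 and the app would not see Docker’s SIGTERM on shutdown.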

So my question: is this good practice to deal with file permissions and a different UID/GID for a privileged user?

I’m looking for an approach that works “everywhere”:

  • locally, for multiple developers
  • with Docker, Docker Compose, and Kubernetes
  • with Docker running as root and Docker running as a non-root user

I look forward to your suggestions :slight_smile:

Feel free to ask if you need more information.

Thanks

Al


No hints/tips/tricks/advice/links/…? :thinking:

Explain what you mean by “privileged user”.

In general you can create a user and folders on the host with matching permissions and pass the user ID and group ID to a Docker container, so a bind mount is possible.
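
A minimal sketch of that pattern: determine the host user’s IDs and pass them with `-u`. Here the `docker run` command is only assembled and printed, not executed; `myimage` and the mount paths are placeholders:

```shell
# Build a docker run command that starts the container with the host
# user's UID/GID, so files written into the bind mount keep usable
# ownership on the host. "myimage" is a placeholder image name.
uid=$(id -u)
gid=$(id -g)
cmd="docker run --rm -u ${uid}:${gid} -v \"$PWD/data:/app/data\" myimage"
echo "$cmd"
```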

Everywhere, or just convenient for everyone?

When it comes to everywhere:
If your company happens to use OpenShift as its Kubernetes distribution, you won’t be able to create a pod using that image:

  • read-only filesystem (can’t write into the container fs, just into volumes)
  • non-root user (can’t start the container as UID 0)
  • user namespace remapping (UID on host != UID in container)
  • random UID (your container must be able to work with an arbitrary UID)

On the other hand, if a container can be created on OpenShift, it will run everywhere else… Will it be convenient to use? Nope.

So it really depends on what your target audience is, and what compliance and security policies you have to fulfil.


Ohh - I wrote privileged user (aka root) but meant unprivileged user (aka non-root user).

In short: I want to run the app inside the container as a non-root user, but maybe(?) have to deal with permissions due to volumes or bind mounts.

So I’m looking for best practices on how to deal with unprivileged users inside the container.
Do I even have to care about the permissions in the first place?

I know that other images support setting the UID and GID, but I don’t know why they do this.

Good question: everywhere and convenient for everyone would be nice. :smiley:
Or at least a good compromise for most use cases.

Currently there are no policies or target environments defined; I’m just checking what is possible and trying to find a solution that works in as many environments as possible.
But I also want to use common patterns / best practices and don’t want to reinvent the wheel. :slight_smile:

Do you know projects that are working with OpenShift?
Then I could check what they are doing and how they handle UIDs/GIDs,
e.g. for config, log and temp dirs.

All of them need to be writable, of course; temp and log are filled by the app, and the config dir by the entrypoint (which writes environment-variable values into config files).

Edit: I think a read-only container fs is currently not possible. I’ll give it a try later, but I know that we’re writing some files on startup.

The majority of images do not work with OpenShift :slight_smile: Most struggle when the container filesystem is read-only. I wrote above what the factors are for an image to run on OpenShift (or, to be more precise, any Kubernetes distro that applies the restricted pod security standard).

The typical problems are:

  • container needs to be started as root → not possible on OpenShift
    • no chown or chmod → thus, no convenience when dealing with file permissions on volumes
  • entrypoint script or application requires a specific UID → usually not possible
    • this should be no problem if the next point is properly handled
  • entrypoint script or application wants to modify or write files in the container filesystem → not possible with a read-only filesystem
    • make sure your entrypoint script or application only modifies/writes files located in volumes:
      • config files → render them from a template and write them to a volume, as the existing config in the container filesystem can not be modified (on Kubernetes/OpenShift, configs are often rendered using a Helm chart, so the entrypoint script doesn’t need to take care of it). If possible, modify the application to use environment variables directly, instead of having to render them into a config.
      • log files → should not be written into files unless required for post-processing, or if multiple log files are needed. Usually writing logs to STDOUT is the cleaner approach, as it allows viewing them with docker logs.
      • obviously, the persistent data of the application.
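
The config-file point can be sketched in shell: render a minimal config from environment variables into a directory that would be a writable volume in the container. `CONFIG_DIR` and `APP_PORT` are hypothetical names chosen for illustration; `/tmp` is used as the default here only so the sketch runs anywhere:

```shell
# Render a minimal config file from environment variables into a
# writable directory instead of the read-only container filesystem.
# CONFIG_DIR and APP_PORT are hypothetical names; in a real container
# CONFIG_DIR would point at a volume mount.
CONFIG_DIR="${CONFIG_DIR:-/tmp/app-config}"
APP_PORT="${APP_PORT:-8080}"

mkdir -p "$CONFIG_DIR"
printf 'port=%s\n' "$APP_PORT" > "$CONFIG_DIR/app.conf"
cat "$CONFIG_DIR/app.conf"
```

This keeps the container filesystem untouched at runtime, which is exactly what a read-only root fs requires.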

There is a reason why I asked about the audience: you need to address the requirements of the user base that has the strictest security and compliance regulations.


@meyay thank you very much for your hints and detailed explanations; your help is much appreciated.

Due to your OpenShift hint, I set my containers to read-only to identify the needed volumes.
I found my expected config, log and temp dirs.
Are there any best practices for how to deal with temp dirs?

After some modifications to my entrypoint script, and with enough volumes :slight_smile:, I’m able to start my container with a read-only root fs. :man_dancing:
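
For reference, Docker can start a container with a read-only root filesystem directly, which is handy for this kind of testing. A sketch (the image name, volume names, and mount paths are placeholders; `--read-only` and `--tmpfs` are real `docker run` flags):

```shell
docker run --rm \
  --read-only \
  --tmpfs /tmp \
  -v app-config:/app/config \
  -v app-logs:/app/logs \
  myimage
```

Every path the app still fails to write to then shows up as an error, pointing you at the volumes (or tmpfs mounts) you are missing.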

I also removed the user mapping at startup to address the point that it may not be possible to run the container as root.
So no chgrp, chmod, or usermod on startup.
I used 9081 as the UID (USER 9081) and simply start our app in the entrypoint.
With this approach I can also use 0, my own UID, and it seems any random UID.
Are there any best practices for the UID used in a Dockerfile?
Some say use a UID < 1000, while others say those are reserved for the OS.
Others say use a UID > 9000, some > 10000, and some use something like 100003456. :thinking:
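
For the arbitrary-UID case, a commonly recommended pattern (e.g. in OpenShift’s image guidelines) is to make the writable directories group-owned by GID 0 and group-writable, since an arbitrary UID there still runs with root’s group. A sketch; the base image and paths are assumptions, only the UID 9081 is from the post:

```dockerfile
FROM debian:bookworm-slim

COPY app/ /app/
# Directories the app writes to: group 0 and group permissions equal to
# user permissions, so any arbitrary UID (running with GID 0) can use them
RUN chgrp -R 0 /app \
 && chmod -R g=u /app

# Numeric UID, so runAsNonRoot checks can verify the user is not root
USER 9081

ENTRYPOINT ["/docker-entrypoint.sh"]
```

Using a numeric UID in the USER directive (rather than a name) matters on Kubernetes: `runAsNonRoot` can only be validated when the UID is numeric.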

I know that writing log files is bad practice, but because our app is a monolithic app that is usually installed on premises, it is what it is (at least for now).
We also spawn multiple processes (each with its own log file) → I know this is bad practice too.

Regarding the config files: I already use ConfigMaps for flexibility, but I also defined environment variables with default values for convenience.
The entrypoint script reads these variables and creates some minimal config files.

Regarding the audience: I would assume that the majority is using AWS.

You can test your image with different user IDs; just use -u ${different user id} (of course, with an actual UID and not the placeholder).

Can’t tell you what the best practice is. Though I guess I would use a UID >= 1000. A bigger UID looks like the arbitrary UIDs you would see in OpenShift :slight_smile:

I should have written target group instead of audience: home users have other requirements than enterprises :slight_smile:

Enterprise - not sure if home users are willing to pay for AWS :slight_smile: