How to manage a Docker registry with multiple on-premise customers and logins?

We currently get requests from SaaS customers who want to move their services on premise.

What’s the best solution for a self-hosted Docker registry (registry:2?) that is used by multiple clients?

Use an HTTP proxy with HTTP auth in front of it, giving every customer a login, so that when they terminate their contract, they can be excluded from updates?
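Something along these lines is what I have in mind for the exclusion part (a rough sketch; the htpasswd path and the customer name are just examples):

```python
# Minimal sketch: drop a customer's login from an htpasswd file so the
# reverse proxy stops serving them updates. Path and username are examples.
from pathlib import Path

HTPASSWD = Path("/etc/nginx/registry.htpasswd")  # assumed location

def revoke_customer(username: str) -> None:
    """Remove the given user's line from the htpasswd file."""
    lines = HTPASSWD.read_text().splitlines()
    kept = [line for line in lines if not line.startswith(f"{username}:")]
    HTPASSWD.write_text("\n".join(kept) + "\n")
    # remember to reload the proxy afterwards so the change takes effect

revoke_customer("customer-acme")  # e.g. after contract termination
```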


You might want to take a look at Harbor or Artifactory Container Registry.

Both have built-in user management.

Hello,
Managing a self-hosted Docker registry for multiple on-premise customers while ensuring security and ease of management can be a complex task. To achieve this, you can consider the following best practices:

Use an HTTP Proxy with Authentication:

Set up an HTTP proxy in front of your Docker registry (e.g., Nginx, Apache) that enforces HTTP authentication. This proxy acts as a security barrier, ensuring that only authorized users can access the Docker images.
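For example, a quick smoke test (a sketch; the hostname and credentials are placeholders) can confirm that the proxy really rejects anonymous access to the registry API:

```python
# Verify the proxy in front of registry:2 rejects anonymous requests and
# accepts a customer's credentials. Hostname and credentials are placeholders.
import requests

BASE = "https://registry.example.com"

anon = requests.get(f"{BASE}/v2/", timeout=10)
assert anon.status_code == 401, "proxy should demand authentication"

authed = requests.get(f"{BASE}/v2/", auth=("customer-acme", "s3cret"), timeout=10)
assert authed.status_code == 200, "valid credentials should be accepted"
print("basic auth is enforced by the proxy")
```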
Individual Customer Logins:

Provide each customer with unique login credentials (username and password) to access the HTTP proxy. This ensures that customers can only access the Docker registry if they have valid credentials.
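A minimal sketch for issuing such credentials as htpasswd entries (names are examples; note that registry:2’s built-in htpasswd authentication only accepts bcrypt entries, and if your proxy handles auth instead, check which hash formats it supports):

```python
# Issue per-customer credentials as bcrypt htpasswd lines.
# Requires the third-party 'bcrypt' package.
import secrets
import bcrypt

def new_customer_entry(username: str) -> tuple[str, str]:
    """Return (htpasswd_line, plaintext_password) for a new customer."""
    password = secrets.token_urlsafe(16)
    hashed = bcrypt.hashpw(password.encode(), bcrypt.gensalt()).decode()
    return f"{username}:{hashed}", password

line, password = new_customer_entry("customer-acme")
print(line)      # append this to the htpasswd file
print(password)  # hand this to the customer over a secure channel
```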
Role-Based Access Control (RBAC):

Implement role-based access control within the proxy. Assign different access levels or permissions to customers based on their requirements. For example, some customers may have read-only access, while others may have read and write privileges.
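A small sketch of such a permission table, as you might enforce it in a proxy or token service (the customers and namespaces are made up):

```python
# Per-customer permission table: namespace prefix -> allowed actions.
PERMISSIONS = {
    "customer-acme": {"acme/": {"pull"}},
    "customer-beta": {"beta/": {"pull", "push"}},
}

def is_allowed(user: str, repository: str, action: str) -> bool:
    """True if the user may perform the action on the repository."""
    for prefix, actions in PERMISSIONS.get(user, {}).items():
        if repository.startswith(prefix) and action in actions:
            return True
    return False

assert is_allowed("customer-acme", "acme/backend", "pull")
assert not is_allowed("customer-acme", "acme/backend", "push")
assert not is_allowed("customer-acme", "beta/backend", "pull")
```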
Isolate Customer Repositories:

Organize your Docker registry to isolate customer repositories. This can be achieved by creating separate namespaces or repositories for each customer. This isolation ensures that customers cannot accidentally access or modify each other’s images.
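For example, a sketch that groups the registry catalog by namespace prefix (hostname and admin credentials are placeholders; /v2/_catalog is a standard registry:2 endpoint, though it may be paginated on large registries):

```python
# List repositories via the catalog API and group them per customer,
# assuming one namespace (the part before the first '/') per customer.
from collections import defaultdict
import requests

BASE = "https://registry.example.com"
ADMIN = ("admin", "adminpass")

repos = requests.get(f"{BASE}/v2/_catalog", auth=ADMIN, timeout=10).json()["repositories"]

by_customer = defaultdict(list)
for repo in repos:
    namespace = repo.split("/", 1)[0]
    by_customer[namespace].append(repo)

for customer, customer_repos in sorted(by_customer.items()):
    print(customer, "->", customer_repos)
```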
Monitoring and Auditing:

Implement monitoring and auditing tools to track registry usage and user activity. This allows you to detect any unauthorized access or suspicious activities promptly.
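A sketch that summarizes manifest pulls per customer from the proxy’s access log, assuming nginx’s default “combined” log format (which records the authenticated user) and an example log path:

```python
# Count manifest pulls per authenticated user from an nginx access log.
import re
from collections import Counter

LOG_LINE = re.compile(r'^\S+ \S+ (?P<user>\S+) \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)')

pulls = Counter()
with open("/var/log/nginx/registry-access.log") as log:
    for line in log:
        m = LOG_LINE.match(line)
        if m and m["method"] == "GET" and "/manifests/" in m["path"]:
            pulls[m["user"]] += 1

for user, count in pulls.most_common():
    print(f"{user}: {count} manifest pulls")
```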
Customer Offboarding:

When a customer terminates their contract, promptly revoke their access credentials in the HTTP proxy. Additionally, consider archiving or deleting their Docker images to free up storage space, depending on your data retention policies.
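A sketch of such an offboarding step using the standard registry API (repository names are examples; deletion must be enabled on the registry, e.g. REGISTRY_STORAGE_DELETE_ENABLED=true, and disk space is only reclaimed after running the registry’s garbage collector):

```python
# Delete all tagged manifests of a repository via the registry API.
import requests

BASE = "https://registry.example.com"
ADMIN = ("admin", "adminpass")
MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

def delete_repository_tags(repo: str) -> None:
    tags = requests.get(f"{BASE}/v2/{repo}/tags/list", auth=ADMIN, timeout=10).json().get("tags") or []
    for tag in tags:
        head = requests.head(
            f"{BASE}/v2/{repo}/manifests/{tag}",
            headers={"Accept": MANIFEST_V2}, auth=ADMIN, timeout=10,
        )
        digest = head.headers["Docker-Content-Digest"]
        requests.delete(f"{BASE}/v2/{repo}/manifests/{digest}", auth=ADMIN, timeout=10)
        print(f"deleted {repo}:{tag} ({digest})")

delete_repository_tags("acme/backend")  # repeat for each repository of the customer
```

Combine this with removing the customer’s credentials at the proxy so access stops immediately.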
Backup and Disaster Recovery:

Regularly back up your Docker registry and maintain a disaster recovery plan. This ensures that customer data is protected and can be restored in case of unforeseen events.
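A minimal sketch of a filesystem-level backup, assuming the default filesystem storage driver and the example paths below (stop the registry or snapshot the volume first for a consistent copy):

```python
# Archive the registry's storage directory into a timestamped tarball.
import tarfile
import time
from pathlib import Path

STORAGE_DIR = Path("/var/lib/registry")  # assumed volume mount
BACKUP_DIR = Path("/backups")

BACKUP_DIR.mkdir(parents=True, exist_ok=True)
stamp = time.strftime("%Y%m%d-%H%M%S")
target = BACKUP_DIR / f"registry-{stamp}.tar.gz"

with tarfile.open(target, "w:gz") as archive:
    archive.add(STORAGE_DIR, arcname="registry")

print(f"wrote {target}")
```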
Encryption:

Implement encryption for data in transit and at rest to enhance the security of your Docker registry. TLS/SSL certificates should be used for securing communication between clients and the registry.
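A small sketch that checks the endpoint actually serves TLS and reports how long the certificate is still valid (the hostname is a placeholder):

```python
# Connect with TLS, validate the certificate, and report days until expiry.
import socket
import ssl
import time

HOST, PORT = "registry.example.com", 443

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

expires = ssl.cert_time_to_seconds(cert["notAfter"])
days_left = int((expires - time.time()) // 86400)
print(f"{HOST}: certificate valid for another {days_left} days")
```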
Authentication Mechanisms:

Consider using more robust authentication mechanisms, such as token-based authentication, if your Docker registry software supports it. This can provide an additional layer of security and flexibility.
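A sketch of the access decision a token service would make for the scope a Docker client requests (e.g. "repository:acme/backend:pull,push"); a real implementation would additionally have to return a signed JWT that the registry is configured to trust. The policy table here is made up:

```python
# Decide which of the requested actions a user is granted for a scope string.
POLICY = {
    "customer-acme": {"namespace": "acme", "actions": {"pull"}},
    "customer-beta": {"namespace": "beta", "actions": {"pull", "push"}},
}

def granted_actions(user: str, scope: str) -> set[str]:
    """Return the subset of requested actions this user may receive."""
    resource_type, name, actions = scope.split(":", 2)
    if resource_type != "repository":
        return set()
    rule = POLICY.get(user)
    if not rule or not name.startswith(rule["namespace"] + "/"):
        return set()
    return set(actions.split(",")) & rule["actions"]

print(granted_actions("customer-acme", "repository:acme/backend:pull,push"))  # {'pull'}
print(granted_actions("customer-acme", "repository:beta/backend:pull"))       # set()
```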
Documentation and Training:

Provide clear documentation and training for your customers on how to interact with the Docker registry and use the provided login credentials. Educated users are less likely to make security mistakes.

That sounds very much like an AI answer :wink:

I can use ChatGPT and Bard myself. I already suggested an HTTP reverse proxy up front, so there’s nothing new here.

I’d rather hear what the community thinks about it and whether other solutions are available.


Yup, and the response was posted one minute after the account joined the forum.

I once used GitHub - SUSE/Portus: Authorization service and frontend for Docker registry (v2) and even configured a reverse proxy in front of it so I could use the same port for the registry and the web interface (that wasn’t supported by default). It was tricky because of the similar URLs, but then someone showed me Harbor, which is really good. You can push Docker images, Helm charts, or in fact any binary.

Portus was discontinued by SUSE, and in April this year they archived the project on GitHub, so I don’t know a better open-source registry than Harbor.

Harbor also supports proxy caches, so you can configure it as a pull-through proxy for Docker Hub. When you pull an image, Harbor caches it, and the next time it doesn’t have to access Docker Hub if the layers are already in the cache.
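For example, pulling through a proxy-cache project instead of Docker Hub directly (a sketch; the Harbor hostname and the "dockerhub-proxy" project name are whatever you configure in Harbor, and it assumes the docker Python SDK with a local daemon that is logged in to Harbor):

```python
# Pull an upstream Docker Hub image through a Harbor proxy-cache project.
import docker

client = docker.from_env()

# Harbor fetches library/alpine from Docker Hub on the first pull, caches the
# layers, and serves them locally on subsequent pulls.
image = client.images.pull("harbor.example.com/dockerhub-proxy/library/alpine", tag="3.19")
print(image.id, image.tags)
```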

Well, the correct answers are still correct