Case Study: total isolation of customers/projects on the same server

Hello everyone

I’m scratching my head trying to find a solution to this problem.

The need is this:

  • We put a solution in place for a customer, then deliver it to the customer, who is the one who should be managing it.
    However, since the customer cannot manage the server themselves, we also offer to manage it (SysAdmin / Dev).

The plan is this:

  • By default, we cannot connect to the server.
    When we need to operate on the server, the customer grants us access somewhere, and only then can we connect.
    This is a legal requirement in our country (health organisation).

So far we have used one VM (actually a set of two) per customer, so customers were also isolated from each other.

But we have so many customers coming that we need to move to containers.

So the need is now this:

  • I can manage and access the host VM as much as I want,
    but I cannot access the containers.

Plan
1. The data is unreachable from outside the container

  • As the sysadmin, I cannot see the data.
    Other customers who “escape” their container cannot see the data either.

2. I cannot “physically” connect to the container without the permission of the customer

  • i.e.,
    docker exec -it <container> /bin/bash
    would not be possible unless the customer lets me in.

For the first one, I’m thinking about simply encrypting the data. The data would be seen unencrypted from inside the container, but encrypted from outside (what about performance?).
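Something like this is what I have in mind, sketched here with gocryptfs (a FUSE-based encrypted filesystem) purely as an example; the image name and paths are made up:

    # Only ciphertext ever touches the host volume; the plaintext view
    # exists inside the container's mount namespace. FUSE needs the
    # device and capability below, which does weaken the isolation.
    docker run -d --name customer-a-app \
      --device /dev/fuse --cap-add SYS_ADMIN \
      -v /srv/volumes/customer-a:/cipher \
      myapp-with-gocryptfs:latest

    # Inside the container's entrypoint, roughly:
    #   gocryptfs -init /cipher   # once, with a passphrase only the customer holds
    #   gocryptfs /cipher /data   # plaintext only visible inside the container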

For the second one, I don’t see how I can do it.
For example, allowing a user to access the Docker socket would allow that user to access ALL the containers of ALL the customers, but I want to be able to access just one.
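To make it concrete (container name is hypothetical): whoever can reach the socket controls everything, up to and including the host itself:

    # Exec into any customer's container...
    docker -H unix:///var/run/docker.sock exec -it customer-b-app /bin/bash

    # ...or simply become root on the host:
    docker -H unix:///var/run/docker.sock run --rm -it -v /:/host alpine chroot /host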

So please, if you have ideas on this technical challenge, let me know!

Thanks in advance :slight_smile:

Uhm, a container is nothing else than a fenced process on the host kernel, limited by namespaces (isolated process areas), capabilities (permissions) and cgroups (resources).
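You can see that for yourself from the host (container name is made up):

    # The container's main process is an ordinary PID on the host:
    pid=$(docker inspect --format '{{.State.Pid}}' customer-a-app)

    # These symlinks are the namespaces that fence the process in:
    sudo ls -l /proc/"$pid"/ns

    # And the host's root user can always join them, docker exec or not:
    sudo nsenter --target "$pid" --mount --uts --ipc --net --pid /bin/sh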

That said, #1 will require a bunch of extra capabilities if you don’t implement the encryption directly in your application. This will weaken the isolation and have an impact if a user manages to escape the container. A simpler approach to protect data between customers is to rely on the UID:GID of your customer applications and the volumes they use for persisted data. Pick distinct IDs per customer and you should be good… unless they manage to exploit the unprivileged user and become root…
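A minimal sketch of that scheme (IDs, paths, and image name are made up):

    # One unprivileged UID:GID per customer; each volume is owned by,
    # and only readable by, its customer's ID.
    install -d -o 10001 -g 10001 -m 0700 /srv/volumes/customer-a
    docker run -d --name customer-a-app --user 10001:10001 \
      -v /srv/volumes/customer-a:/data myapp:latest

    install -d -o 10002 -g 10002 -m 0700 /srv/volumes/customer-b
    docker run -d --name customer-b-app --user 10002:10002 \
      -v /srv/volumes/customer-b:/data myapp:latest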

When it comes to #2: this is not going to be that easy. Docker CE is not capable of doing this on its own; you would have to implement your own wrapper that verifies permissions before it delegates the actual calls to the Docker socket/port.
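As a low-tech alternative to wrapping the Docker API, the permission gate could also be a plain OpenSSH forced command that the customer controls. Just a sketch, all names are hypothetical:

    # ~/.ssh/authorized_keys of a per-customer access account on the host,
    # writable only by the customer. Adding the line "lets the admin in";
    # removing it revokes access. The forced command pins this key to a
    # single container, no matter what the connecting admin tries to run.
    command="docker exec -it customer-a-app /bin/bash",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... sysadmin@provider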

Just an idea:
In Kubernetes, user management is intended to be handled outside the cluster (people usually cheat by using service accounts instead), for instance with OIDC providers. You could create a namespace per customer and define the required role mappings or audience claims in your OIDC tokens to grant permissions to perform actions in that namespace. Though, I have never had the situation of “sub-admins” that add/delete role mappings for OIDC users inside a single realm (which your Kubernetes cluster would be a client of). You should give moving to Kubernetes a thought.
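The namespace part could look roughly like this (names are made up; the group must match what your OIDC provider puts into the token):

    # One namespace per customer...
    kubectl create namespace customer-a

    # ...and a RoleBinding that makes the customer's OIDC group admin
    # of that namespace only (built-in "admin" ClusterRole, bound namespaced):
    kubectl create rolebinding customer-a-admins \
      --namespace customer-a \
      --clusterrole=admin \
      --group=oidc:customer-a-admins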

Belated response, but it seems the new Sysbox runtime could help you here. It enables Docker to launch “VM-like” containers, inside of which you can run systemd + Docker itself, in total isolation from the underlying host (and much more efficiently than with actual VMs). It does this using rootless containers (i.e., without the --privileged flag).

I am thinking you could create one (or more) of these containers per customer and give each customer access to theirs. Each customer can then use the container much like a VM (including running Docker and deploying inner containers).
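A minimal sketch of how that could look, assuming Sysbox is installed on the host (it registers itself as the sysbox-runc runtime with dockerd) and using one of the systemd + Docker images Nestybox publishes:

    # One "VM-like" system container per customer:
    docker run -d --name customer-a-vm --runtime=sysbox-runc \
      --hostname customer-a \
      nestybox/ubuntu-focal-systemd-docker

    # The customer treats it like their own VM: a shell inside it, with
    # its own systemd and an inner dockerd for their containers.
    docker exec -it customer-a-vm /bin/bash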