Kubernetes: Managing tenants with Capsule

Kubernetes Capsule multi-tenant

7 min read | by Jordi Prats

Once we have Capsule set up, we'll need to start managing the tenants and their permissions. In this post we'll see how to assign permissions to a user, cordon a tenant, and enforce resource quotas at the tenant level.

Assign tenant-admin permissions to a user

By default, when we add a user to the owners list, they will be able to act as the admin of all the tenant namespaces:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: demo
spec:
  owners:
    - name: jordi
      kind: User

Writing the above manifest is equivalent to setting clusterRoles to admin and capsule-namespace-deleter:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: demo
spec:
  owners:
    - name: jordi
      kind: User
      clusterRoles:
      - admin
      - capsule-namespace-deleter

The Capsule controller translates this definition into the following RoleBindings:

$ kubectl get rolebinding
NAME                                       ROLE                                    AGE
capsule-demo-0-admin                       ClusterRole/admin                       2d20h
capsule-demo-1-capsule-namespace-deleter   ClusterRole/capsule-namespace-deleter   2d20h
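
We can double-check the resulting permissions with kubectl auth can-i, assuming a kubeconfig that authenticates as jordi (here named jordi-demo.kubeconfig, the same one used later in this post) and that the tenant already has a namespace called jordi-1st-ns:

$ KUBECONFIG=jordi-demo.kubeconfig kubectl auth can-i create deployments -n jordi-1st-ns
yes
$ KUBECONFIG=jordi-demo.kubeconfig kubectl auth can-i delete namespaces -n jordi-1st-ns
yes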

Assign view-only permissions to a user

We can also add a user to the owners list scoped to view-only permissions:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: demo
spec:
  owners:
    - name: jordi
      kind: User
    - name: view-only
      kind: User
      clusterRoles:
        - view

In this case, the Capsule controller creates a RoleBinding with the view ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2025-02-23T10:27:02Z"
  labels:
    capsule.clastix.io/role-binding: eaaa0637389ad533
    capsule.clastix.io/tenant: demo
  name: capsule-demo-2-view
  namespace: jordi-1st-ns
  ownerReferences:
  - apiVersion: capsule.clastix.io/v1beta2
    blockOwnerDeletion: true
    controller: true
    kind: Tenant
    name: demo
    uid: 90a1f6a1-2284-44dd-8f58-2b9f75b8616f
  resourceVersion: "37840"
  uid: 6eaec6f6-7718-44ba-bca5-310479d368ce
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: view-only

Since this RoleBinding is automatically created in every namespace owned by the tenant, the user will be able to list resources in all the tenant namespaces but not in any other namespace:

$ KUBECONFIG=view-only-demo.kubeconfig kubectl auth can-i get pod -n jordi-1st-ns
yes
$ KUBECONFIG=view-only-demo.kubeconfig kubectl auth can-i get pod -n kube-system
no

We can also define RoleBindings using the tenant.spec.additionalRoleBindings field, which is intended for more generic use cases:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: demo
spec:
  owners:
    - name: jordi
      kind: User
  additionalRoleBindings:
    - clusterRoleName: 'view'
      subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: additional-viewer

In this case we achieve the same result as before. Using the additionalRoleBindings field is a way to make sure we are not assigning unintended permissions to the user, since the default list of clusterRoles for an owner includes admin permissions on the tenant namespaces.

$ kubectl get rolebinding
NAME                                       ROLE                                    AGE
capsule-demo-0-admin                       ClusterRole/admin                       2d20h
capsule-demo-1-capsule-namespace-deleter   ClusterRole/capsule-namespace-deleter   2d20h
capsule-demo-2-view                        ClusterRole/view                        16m
capsule-demo-3-view                        ClusterRole/view                        4s
$ kubectl get rolebinding capsule-demo-3-view -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2025-02-23T10:43:47Z"
  labels:
    capsule.clastix.io/role-binding: 3bfe3bdd56040ecf
    capsule.clastix.io/tenant: demo
  name: capsule-demo-3-view
  namespace: jordi-1st-ns
  ownerReferences:
  - apiVersion: capsule.clastix.io/v1beta2
    blockOwnerDeletion: true
    controller: true
    kind: Tenant
    name: demo
    uid: 90a1f6a1-2284-44dd-8f58-2b9f75b8616f
  resourceVersion: "39661"
  uid: 6982812b-7cd1-4ea8-808b-7baf5a7e5403
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: additional-viewer
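
Assuming a kubeconfig that authenticates as the additional-viewer user (the file name additional-viewer-demo.kubeconfig is just an example), we can confirm the binding grants read-only access:

$ KUBECONFIG=additional-viewer-demo.kubeconfig kubectl auth can-i get pod -n jordi-1st-ns
yes
$ KUBECONFIG=additional-viewer-demo.kubeconfig kubectl auth can-i delete pod -n jordi-1st-ns
no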

Cordoning a tenant

In the same way we can cordon a node, we can cordon a tenant to prevent any of its workloads from being updated. We might want to do this, for example, during cluster upgrades or production freezes:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: demo
spec:
  cordoned: true
  owners:
    - name: jordi
      kind: User

As soon as the Tenant is cordoned, Capsule's admission webhook prevents any update to the tenant's workloads:

$ kubectl get tenant
NAME   STATE      NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
demo   Cordoned                     1                                 2d20h
$ KUBECONFIG=jordi-demo.kubeconfig kubectl auth can-i update pod -n jordi-1st-ns
yes
$ KUBECONFIG=jordi-demo.kubeconfig kubectl edit pod demo
Error from server (Forbidden): pods "demo" is forbidden: User "jordi" cannot get resource "pods" in API group "" in the namespace "default"
$ KUBECONFIG=jordi-demo.kubeconfig kubectl create ns jordi-2nd-ns
Error from server (Forbidden): admission webhook "namespaces.projectcapsule.dev" denied the request: the selected Tenant is freezed
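
To lift the freeze we just need to set cordoned back to false; a quick way of toggling it (assuming credentials that can edit the Tenant object) is a merge patch:

$ kubectl patch tenant demo --type merge -p '{"spec":{"cordoned":false}}'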

Adding tenant-admin permissions to a service account

We can also have a service account acting as a tenant owner as follows:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: demo
spec:
  owners:
    - name: jordi
      kind: User
    - name: system:serviceaccount:argocd:default
      kind: ServiceAccount

The service account needs to belong to one of the Capsule user groups, so we'll have to update the CapsuleConfiguration object to add the group that contains the service account to the list of user groups:

apiVersion: capsule.clastix.io/v1beta2
kind: CapsuleConfiguration
metadata:
  annotations:
    meta.helm.sh/release-name: capsule
    meta.helm.sh/release-namespace: capsule-system
  creationTimestamp: "2025-02-22T13:37:53Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: capsule
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: capsule
    app.kubernetes.io/version: 0.7.0
    helm.sh/chart: capsule-0.7.0
  name: default
  resourceVersion: "554"
  uid: e16fdd92-0aa1-4149-b95d-51ba3f08272d
spec:
  enableTLSReconciler: true
  forceTenantPrefix: false
  nodeMetadata:
    forbiddenAnnotations:
      denied: []
      deniedRegex: ""
    forbiddenLabels:
      denied: []
      deniedRegex: ""
  overrides:
    TLSSecretName: capsule-tls
    mutatingWebhookConfigurationName: capsule-mutating-webhook-configuration
    validatingWebhookConfigurationName: capsule-validating-webhook-configuration
  protectedNamespaceRegex: ""
  userGroups:
  - projectcapsule.dev
  - system:serviceaccounts:argocd

Bear in mind that the system:serviceaccounts:argocd group includes every service account in the argocd namespace, not just the one we added as an owner.
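
Since kubectl supports impersonation, we can run a quick sanity check from a cluster-admin session to confirm that the service account is now treated as a tenant owner (a sketch; it should return yes because Capsule grants tenant owners the RBAC to create namespaces):

$ kubectl auth can-i create namespaces \
    --as system:serviceaccount:argocd:default \
    --as-group system:serviceaccounts:argocd
yes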

Enforce resource quotas at the tenant level

One of the features of Capsule is the ability to enforce resource quotas at the tenant level instead of doing so just at the namespace level. We can achieve this by using the resourceQuotas field in the Tenant object:

apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: demo
spec:
  owners:
    - name: jordi
      kind: User
  resourceQuotas:
    scope: Tenant
    items:
      - hard:
          pods: "2"

We can test this by maxing out the number of pods for the tenant in one namespace and then trying to create a new pod in a different namespace belonging to the same tenant:

$ kubectl get ns -l capsule.clastix.io/tenant=demo
NAME           STATUS   AGE
jordi-1st-ns   Active   2d21h
jordi-2nd-ns   Active   27s
$ KUBECONFIG=jordi-demo.kubeconfig kubectl run demo-2 --image nginx -n jordi-1st-ns
pod/demo-2 created
$ KUBECONFIG=jordi-demo.kubeconfig kubectl run demo-3 --image nginx -n jordi-1st-ns
Error from server (Forbidden): pods "demo-3" is forbidden: exceeded quota: capsule-demo-0, requested: pods=1, used: pods=2, limited: pods=2
$ KUBECONFIG=jordi-demo.kubeconfig kubectl run demo-3 --image nginx -n jordi-2nd-ns
Error from server (Forbidden): pods "demo-3" is forbidden: exceeded quota: capsule-demo-0, requested: pods=1, used: pods=0, limited: pods=0

Capsule achieves this by dynamically updating the ResourceQuota objects in all the tenant namespaces, while keeping a global count of the resources used by the tenant in annotations:

$ kubectl get resourcequota capsule-demo-0 -n jordi-1st-ns -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    quota.capsule.clastix.io/hard-pods: "2"
    quota.capsule.clastix.io/used-pods: "2"
  creationTimestamp: "2025-02-23T11:25:41Z"
  labels:
    capsule.clastix.io/resource-quota: "0"
    capsule.clastix.io/tenant: demo
  name: capsule-demo-0
  namespace: jordi-1st-ns
  ownerReferences:
  - apiVersion: capsule.clastix.io/v1beta2
    blockOwnerDeletion: true
    controller: true
    kind: Tenant
    name: demo
    uid: 90a1f6a1-2284-44dd-8f58-2b9f75b8616f
  resourceVersion: "45001"
  uid: 4b062e21-0643-4c72-8979-500bd03d374d
spec:
  hard:
    pods: "2"
status:
  hard:
    pods: "2"
  used:
    pods: "2"
$ kubectl get resourcequota capsule-demo-0 -n jordi-2nd-ns -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    quota.capsule.clastix.io/hard-pods: "2"
    quota.capsule.clastix.io/used-pods: "2"
  creationTimestamp: "2025-02-23T11:28:23Z"
  labels:
    capsule.clastix.io/resource-quota: "0"
    capsule.clastix.io/tenant: demo
  name: capsule-demo-0
  namespace: jordi-2nd-ns
  ownerReferences:
  - apiVersion: capsule.clastix.io/v1beta2
    blockOwnerDeletion: true
    controller: true
    kind: Tenant
    name: demo
    uid: 90a1f6a1-2284-44dd-8f58-2b9f75b8616f
  resourceVersion: "45002"
  uid: d865a8d4-8be0-44f6-b4a6-3757a562c83f
spec:
  hard:
    pods: "0"
status:
  hard:
    pods: "0"
  used:
    pods: "0"

Posted on 27/02/2025
