List a ROSA cluster in the EKS interface using the EKS connector

3 min read | by Jordi Prats

With the EKS connector you can connect any Kubernetes cluster to the AWS EKS console to visualize its status, configuration, nodes and workloads, but not much else. Let's take a look at what's needed:

First you'll have to make sure the AWSServiceRoleForAmazonEKSConnector service-linked role exists in your account; to check, you can use awstools:

$ awstools iam role AWSServiceRoleForAmazonEKSConnector
/aws-service-role/eks-connector.amazonaws.com/AWSServiceRoleForAmazonEKSConnector A6YZRRTY42KEXYJJPOA5Q     arn:aws:iam::567894321463:role/aws-service-role/eks-connector.amazonaws.com/AWSServiceRoleForAmazonEKSConnector

If you don't have it, you can request its creation using the AWS CLI as follows:

$ aws iam create-service-linked-role --aws-service-name eks-connector.amazonaws.com
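If you don't have awstools around, a plain AWS CLI check works too; this is just a sketch assuming default credentials for the same account:

```shell
# Check whether the EKS connector service-linked role already exists;
# this fails with a NoSuchEntity error if it hasn't been created yet
aws iam get-role \
  --role-name AWSServiceRoleForAmazonEKSConnector \
  --query 'Role.Arn' --output text
```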

The next thing you'll need is an IAM role that SSM can assume, with the following trust policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SSMAccess",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ssm.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
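As a sketch, assuming we save the trust policy above as eks-connector-trust.json and pick eks-connector-role as the role name (both names are just examples), we can create the role like this:

```shell
# Create the connector role using the trust policy above
# (the file and role names are arbitrary examples)
aws iam create-role \
  --role-name eks-connector-role \
  --assume-role-policy-document file://eks-connector-trust.json
```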

And with the following permission policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SsmControlChannel",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel"
            ],
            "Resource": "arn:aws:eks:*:*:cluster/*"
        },
        {
            "Sid": "ssmDataplaneOperations",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenDataChannel",
                "ssmmessages:OpenControlChannel"
            ],
            "Resource": "*"
        }
    ]
}
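Assuming the permission policy above is saved as eks-connector-policy.json, we can attach it to the same example role as an inline policy (the policy name is again just an example):

```shell
# Attach the permission policy above as an inline policy on the role
aws iam put-role-policy \
  --role-name eks-connector-role \
  --policy-name eks-connector-ssm-access \
  --policy-document file://eks-connector-policy.json
```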

Let's assume the role has the following ARN: arn:aws:iam::567894321463:role/eks-connector-role

We can use awstools eks register, setting the role with the --role option. For this example we are going to register it as an OpenShift cluster:

$ awstools eks register test --role arn:aws:iam::567894321463:role/eks-connector-role --provider OPENSHIFT
activationId                                                 activationCode
f722f2ca-5ec3-4659-88ac-ba9384557c64                         oAAEcYTTbTeHdtnyOmRo
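If we don't want to use awstools, the equivalent call with the plain AWS CLI is aws eks register-cluster; the activationId and activationCode come back under connectorConfig in the response:

```shell
# Register the cluster with the EKS connector using the plain AWS CLI;
# the cluster name and role ARN match the example above
aws eks register-cluster \
  --name test \
  --connector-config roleArn=arn:aws:iam::567894321463:role/eks-connector-role,provider=OPENSHIFT
```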

We'll need the activationId and the activationCode to configure the eks-connector StatefulSet. To generate the manifest we need to apply to the Kubernetes cluster, we can use awstools eks get-connector-manifest. To be able to see the cluster's objects on the AWS console we'll also need to configure the role we are using to log in to the console. If we are unsure, we can just skip the --role option for now: once the service is running on the cluster, the console will complain about the role that is unauthorized. Once we know it, we can regenerate and reapply the manifest with the --role option set.

$ awstools eks get-connector-manifest f722f2ca-5ec3-4659-88ac-ba9384557c64 oAAEcYTTbTeHdtnyOmRo --role "arn:aws:iam::567894321463:role/UserSSOMapping"| kubectl apply -f -
namespace/eks-connector created
secret/eks-connector-activation-config created
role.rbac.authorization.k8s.io/eks-connector-secret-access created
serviceaccount/eks-connector created
secret/eks-connector-token created
rolebinding.rbac.authorization.k8s.io/eks-connector-secret-access created
configmap/eks-connector-agent created
statefulset.apps/eks-connector created
clusterrolebinding.rbac.authorization.k8s.io/eks-connector-service created
clusterrole.rbac.authorization.k8s.io/eks-connector-service created
clusterrole.rbac.authorization.k8s.io/eks-connector-console-dashboard-full-access-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/eks-connector-console-dashboard-full-access-clusterrole-binding created
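Once applied, we can check that the connector agent is actually running; on OpenShift the pods may not start until we grant the SCC in the next step:

```shell
# The connector runs as a StatefulSet in its own namespace;
# the pods should eventually reach the Running state
kubectl -n eks-connector get statefulset,pods
```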

On an OpenShift cluster we'll also have to grant the privileged SCC to the eks-connector ServiceAccount:

$ oc adm policy add-scc-to-user privileged system:serviceaccount:eks-connector:eks-connector
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "eks-connector"

With all this configured we'll be able to see the cluster listed like any other EKS cluster, but we won't be able to manage it or perform any action on it. For this example we are using a ROSA cluster, which has an OIDC provider, but that's not exposed through the EKS API:

$ awstools eks describe test
{
  "arn": "arn:aws:eks:eu-central-1:567894321463:cluster/test",
  "connectorConfig": {
    "activationExpiry": "2022-09-09 22:35:20.810000+02:00",
    "activationId": "f722f2ca-5ec3-4659-88ac-ba9384557c64",
    "provider": "OPENSHIFT",
    "roleArn": "arn:aws:iam::567894321463:role/eks-connector-role"
  },
  "createdAt": "2022-09-06 22:35:21.336000+02:00",
  "name": "test",
  "status": "ACTIVE",
  "tags": {}
}
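The same information is available through the plain AWS CLI; for a connected cluster, describe-cluster returns the connectorConfig block shown above:

```shell
# Inspect the connector configuration of the registered cluster
aws eks describe-cluster --name test \
  --query 'cluster.connectorConfig'
```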

Nevertheless, although we'll be able to see the cluster listed, we won't be able to do much with it: basically, we'll only be able to see its nodes and workloads (Kubernetes objects).


Posted on 12/09/2022