How to install SecurityGroups for Pods on an AWS EKS cluster

SecurityGroups for Pods AWS EKS

4 min read | by Jordi Prats

To be able to configure SecurityGroups at the Pod level there are two distinct tasks: configuring the cluster itself and configuring the SecurityGroup and Pod objects.

Let's take a look at what needs to be done on the cluster side.

First we will have to review the AWS documentation on SecurityGroups for Pods. The most critical part to check is the list of supported instance types: if we are using an instance type for the worker nodes that is not on that list it won't work, so we need to make sure we use one of them. Furthermore, each instance type has a limit on how many branch network interfaces (ENIs) it can create, which also limits the number of Pods with SecurityGroups we can run on each node, so we will have to plan capacity accordingly.
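
As a quick sanity check, we can list the instance type of each worker node straight from the Kubernetes API using the standard node.kubernetes.io/instance-type label (a minimal sketch, assuming a recent enough Kubernetes version where this label is present on the nodes):

$ kubectl get nodes -L node.kubernetes.io/instance-type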

Once we are sure the worker nodes have enough capacity and are of the right type, we will also have to check the vpc-cni version:

vpc-cni version

$ kubectl get daemonset aws-node --namespace kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'
602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.7.5-eksbuild.1

According to the AWS documentation it needs to be at least 1.7.7. Since in the previous example we are running 1.7.5 we will have to update it. To do so we can make use of the EKS addons: if we are managing our cluster using Terraform we can add an aws_eks_addon resource like this:

resource "aws_eks_addon" "vpc_cni" {
  cluster_name  = "demoeks"
  addon_name    = "vpc-cni"
  addon_version = "v1.7.10-eksbuild.1"

  resolve_conflicts = "OVERWRITE"
}
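
If we want to double check which vpc-cni versions are available before pinning addon_version, the AWS CLI can list them (a sketch; the Kubernetes version used here is just an example, adjust it to match the cluster):

$ aws eks describe-addon-versions \
    --addon-name vpc-cni \
    --kubernetes-version 1.21 \
    --query 'addons[].addonVersions[].addonVersion' \
    --output text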

As soon as it has been applied we can check the image version again to make sure we are running a supported version:

$ kubectl get daemonset aws-node --namespace kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'
602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.7.10-eksbuild.1

Cluster IAM role

We will also have to attach the following AWS managed policy to the cluster IAM role: AmazonEKSVPCResourceController

Again, using Terraform it's quite straightforward:

resource "aws_iam_role_policy_attachment" "AmazonEKSVPCResourceController" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  role       = aws_iam_role.cluster.id
}
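
If the cluster IAM role is not managed with Terraform, the same attachment can be done with the AWS CLI (a sketch; demoeks-cluster-role is just a placeholder for the actual cluster role name):

$ aws iam attach-role-policy \
    --role-name demoeks-cluster-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController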

Patch aws-node DaemonSet

We will also have to patch a couple of environment variables on the aws-node DaemonSet: ENABLE_POD_ENI on the main container and DISABLE_TCP_EARLY_DEMUX on the aws-vpc-cni-init initContainer.

We can check whether ENABLE_POD_ENI is already set to true using this command:

$ kubectl get daemonset aws-node -n kube-system -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name == "ENABLE_POD_ENI")].value}'
false

If it is not, we can set it to true using:

kubectl set env daemonset aws-node -n kube-system ENABLE_POD_ENI=true

For the DISABLE_TCP_EARLY_DEMUX variable on the initContainer we can also check its current value:

$ kubectl get daemonset aws-node -n kube-system -o jsonpath='{.spec.template.spec.initContainers[?(@.name == "aws-vpc-cni-init")].env[?(@.name == "DISABLE_TCP_EARLY_DEMUX")].value}'
false

And set it to true using kubectl patch:

kubectl patch daemonset aws-node -n kube-system -p '{"spec": {"template": {"spec": {"initContainers": [{"env":[{"name":"DISABLE_TCP_EARLY_DEMUX","value":"true"}],"name":"aws-vpc-cni-init"}]}}}}'
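
Both changes trigger a rollout of the aws-node Pods. Once it has finished, nodes that support SecurityGroups for Pods should advertise the vpc.amazonaws.com/pod-eni allocatable resource (a sketch, assuming ENABLE_POD_ENI took effect and the worker nodes use a supported instance type):

$ kubectl rollout status daemonset aws-node -n kube-system
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.vpc\.amazonaws\.com/pod-eni}{"\n"}{end}'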

Worker node SecurityGroup

As per the AWS documentation:

The cluster security group must also allow inbound TCP and UDP port 53 communication from at least one security group associated to pods.

There are several approaches to do this: we can either allow the Pods' SecurityGroups explicitly by creating a rule for each of them:

resource "aws_security_group_rule" "worker_ingress_dns_tcp_from_pods_security_group" {
  security_group_id        = aws_security_group.worker.id
  description              = "tcp CoreDNS sg4pods"
  type                     = "ingress"

  from_port                = 53
  to_port                  = 53
  protocol                 = "tcp"

  source_security_group_id = var.pod_sg
}

resource "aws_security_group_rule" "worker_ingress_dns_udp_from_pods_security_group" {
  security_group_id        = aws_security_group.worker.id
  description              = "udp CoreDNS sg4pods"
  type                     = "ingress"

  from_port                = 53
  to_port                  = 53
  protocol                 = "udp"

  source_security_group_id = var.pod_sg
}

Or simply allow all the IPs on the cluster's network using its CIDR:

resource "aws_security_group_rule" "worker_ingress_dns_tcp_from_pods_security_group" {
  security_group_id        = aws_security_group.worker.id
  description              = "tcp CoreDNS sg4pods"
  type                     = "ingress"

  from_port                = 53
  to_port                  = 53
  protocol                 = "tcp"

  cidr_blocks              = [var.cluster_cidr]
}

resource "aws_security_group_rule" "worker_ingress_dns_udp_from_pods_security_group" {
  security_group_id        = aws_security_group.worker.id
  description              = "udp CoreDNS sg4pods"
  type                     = "ingress"

  from_port                = 53
  to_port                  = 53
  protocol                 = "udp"

  cidr_blocks              = [var.cluster_cidr]
}

Either way, the Pods need to be able to communicate with the CoreDNS Pods to resolve internal DNS names.
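
To double check that the rules ended up on the worker node SecurityGroup, we can query the EC2 API for its port 53 ingress permissions (a sketch; WORKER_SG_ID is a placeholder for the actual security group ID):

$ aws ec2 describe-security-groups \
    --group-ids "$WORKER_SG_ID" \
    --query 'SecurityGroups[0].IpPermissions[?ToPort==`53`]'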

Once the cluster configuration is completed, you can continue with the SecurityGroup and Pod configuration.
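
As a preview of that second part, the association on the Pod side is done through a SecurityGroupPolicy object that selects Pods by label (a minimal sketch; the namespace, labels and security group ID are placeholders):

$ kubectl apply -f - <<EOF
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: demo-sgp
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: demo
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0
EOF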


Posted on 25/08/2021