2 min read | by Jordi Prats
When using the AWS Load Balancer Controller (ALB controller), we might face the following error while creating Ingress objects:
$ kubectl describe ingress pet2cattle -n pet2cattle
Name: pet2cattle
Namespace: pet2cattle
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
admin-site.pet2cattle.com
/ ssl-redirect:use-annotation (<error: endpoints "ssl-redirect" not found>)
/ pet2cattle:http (10.103.202.36:9000)
Annotations: alb.ingress.kubernetes.io/actions.ssl-redirect:
{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}
alb.ingress.kubernetes.io/group.name: pet2cattle
alb.ingress.kubernetes.io/listen-ports: [{"HTTP":80},{"HTTPS":443}]
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
kubernetes.io/ingress.class: alb
meta.helm.sh/release-name: pet2cattle
meta.helm.sh/release-namespace: pet2cattle
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedBuildModel 16m (x19 over 38m) ingress Failed build model due to couldn't auto-discover subnets: unable to discover at least one subnet
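To get more detail on why subnet discovery is failing, we can also check the controller's own logs. A quick way to do it, assuming the controller is deployed with the default name and namespace used by the official Helm chart (aws-load-balancer-controller in kube-system; adjust to match your install):

```shell
# Filter the controller logs for subnet discovery messages
kubectl logs -n kube-system deployment/aws-load-balancer-controller | grep -i subnet
```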
This message tells us that the ALB controller is not able to find subnets of the requested type. We'll have to check the following:

First, we'll need to make sure the Ingress object has the following annotations if we are using private subnets:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
In case we are using public-facing subnets, we'll need to update alb.ingress.kubernetes.io/scheme to internet-facing:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
Then, we'll have to check the subnets themselves: the ALB controller requires them to be tagged with the name of the cluster that is allowed to use them. So, just as we did with the Ingress annotations, we'll need to make sure the AWS subnet resources have the following tags if they are private subnets:
kubernetes.io/cluster/$CLUSTER_NAME shared
kubernetes.io/role/internal-elb 1
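We can add these tags from the AWS CLI. A sketch, using a hypothetical cluster name and placeholder subnet IDs that we would need to replace with our own:

```shell
# Placeholder values: replace with your cluster name and private subnet IDs
CLUSTER_NAME=pet2cattle
SUBNET_IDS="subnet-0aaa1111bbb22222c subnet-0ddd3333eee44444f"

# Tag the private subnets so the ALB controller can discover them
aws ec2 create-tags \
  --resources $SUBNET_IDS \
  --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared" \
         "Key=kubernetes.io/role/internal-elb,Value=1"
```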
In case they are public subnets, the tags need to look like this:
kubernetes.io/cluster/$CLUSTER_NAME shared
kubernetes.io/role/elb 1
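To verify which subnets are already tagged for public load balancers, we can query them with the AWS CLI, for example:

```shell
# List the subnets that carry the public elb role tag
aws ec2 describe-subnets \
  --filters "Name=tag:kubernetes.io/role/elb,Values=1" \
  --query 'Subnets[].{ID:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}' \
  --output table
```

If the subnets we expect the ALB controller to use don't show up here (or in the equivalent query for kubernetes.io/role/internal-elb), that's the tagging we need to fix.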
For further information, we can check the AWS Knowledge Center and the AWS EKS documentation regarding VPC and subnet tagging.
Posted on 12/07/2021