While trying to deploy Pods, we might notice in the Events section that a Pod cannot be scheduled due to a volume node affinity conflict:
$ kubectl describe pod website-365-flask-ampa2-ha-member-1 -n website-365
Name:           website-365-flask-ampa2-ha-member-1
Namespace:      website-365
Priority:       0
Node:           <none>
Labels:         (...)
Annotations:    (...)
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/website-365-flask-ampa2-ha-member
Init Containers:
  (...)
Containers:
  (...)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  volume:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   volume-website-365-flask-ampa2-ha-member-1
    ReadOnly:    false
  (...)
Events:
  Type     Reason             Age                      From                Message
  ----     ------             ----                     ----                -------
  Normal   NotTriggerScaleUp  31m (x20835 over 7d19h)  cluster-autoscaler  pod didn't trigger scale-up: 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict
  Normal   NotTriggerScaleUp  95s (x46144 over 7d19h)  cluster-autoscaler  pod didn't trigger scale-up: 1 node(s) had volume node affinity conflict, 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate
  Warning  FailedScheduling   64s (x2401 over 43h)     default-scheduler   0/4 nodes are available: 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
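The conflict usually means the PersistentVolume backing the claim carries a required nodeAffinity: zonal disks such as EBS volumes can only attach to nodes in the zone they were created in, so the PV is pinned to that zone. As a minimal sketch, assuming the PVC from the output above (the <pv-name> placeholder and the zone value are hypothetical), we can trace the claim to its volume and inspect the affinity:

$ kubectl get pvc volume-website-365-flask-ampa2-ha-member-1 -n website-365 \
    -o jsonpath='{.spec.volumeName}'
$ kubectl get pv <pv-name> -o yaml
(...)
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        # older clusters may label zones with failure-domain.beta.kubernetes.io/zone
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - eu-west-1a    # hypothetical zone the volume is pinned to

If no schedulable node sits in that zone and the autoscaler cannot add one there, the Pod stays Pending, which is what the Events above show.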
27/04/2022
If you run Kubernetes workloads on AWS, you want to make sure Pods are spread across all available Availability Zones. To do so, we can use podAntiAffinity to tell Kubernetes to avoid deploying all the Pods of the same Deployment in the same AZ, as sketched below.
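A minimal sketch of such a rule, assuming a hypothetical Deployment whose Pods carry the label app: flask; using topology.kubernetes.io/zone as the topologyKey makes the anti-affinity apply per AZ rather than per node:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flask
  template:
    metadata:
      labels:
        app: flask
    spec:
      affinity:
        podAntiAffinity:
          # soft rule: prefer spreading Pods across zones, but still
          # schedule them when only one zone has capacity left
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: flask
              topologyKey: topology.kubernetes.io/zone
      containers:
      - name: flask
        image: registry.example.com/flask:latest    # hypothetical image

Swapping in requiredDuringSchedulingIgnoredDuringExecution would turn the spread into a hard constraint, leaving extra replicas Pending once every zone already runs one.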
28/03/2022