2 min read
Every application will eventually fail. To detect that a container is failing and recover from that situation by restarting it, we can use a livenessProbe.
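As a minimal sketch of what this looks like in a Pod spec (the image name, the /healthz path and the port are placeholders for illustration): if the probe fails failureThreshold times in a row, the kubelet restarts the container.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:1.0   # placeholder image
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz         # assumes the app exposes a health endpoint here
        port: 8080
      initialDelaySeconds: 5   # wait before the first check
      periodSeconds: 10        # check every 10 seconds
      failureThreshold: 3      # restart after 3 consecutive failures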
23/08/2021
Read more...
4 min read
If we want only a subset of Pods to be schedulable on a given node, we can achieve it using taints and tolerations.
With a taint we tell the cluster not to schedule Pods on that node; with a toleration on a Pod we allow it to tolerate that taint.
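A rough sketch of both pieces (the node name, key and value are placeholders): first taint the node from the CLI, then add a matching toleration to the Pods that should still be allowed there. Note that a toleration only permits scheduling on the tainted node; pinning the Pods to it additionally requires node affinity or a nodeSelector.

$ kubectl taint nodes node-1 dedicated=restricted-apps:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: tolerating-pod
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:1.0   # placeholder image
  tolerations:
  - key: "dedicated"            # must match the taint key
    operator: "Equal"
    value: "restricted-apps"    # must match the taint value
    effect: "NoSchedule"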
20/08/2021
Read more...
2 min read
For some applications we might want to avoid having two or more Pods belonging to the same Deployment scheduled on the same node, without turning them into a DaemonSet. Let's use the cluster autoscaler as an example: we would like to have two replicas, but not on the same node. If both land on a node that is being drained and there is not enough capacity on the remaining nodes, both Pods go offline at once and a manual intervention would be required to spawn a new node.
$ kubectl get pods -n autoscaler -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
autoscaler-aws-cluster-autoscaler-585cc546dd-jc46d 1/1 Running 0 16h 10.103.195.47 ip-10-12-16-10.eu-west-1.compute.internal <none> <none>
autoscaler-aws-cluster-autoscaler-585cc546dd-s4j2r 1/1 Running 0 16h 10.103.195.147 ip-10-12-16-10.eu-west-1.compute.internal <none> <none>
To do so, we will have to configure Pod anti-affinity.
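A sketch of the relevant part of the Deployment's Pod template (the label selector is an assumption about how the autoscaler Pods are labelled): a required anti-affinity rule with topologyKey kubernetes.io/hostname forbids two matching Pods on the same node.

spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/name: aws-cluster-autoscaler   # assumed Pod label
            topologyKey: kubernetes.io/hostname                  # "same node" = same hostname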
11/08/2021
Read more...
2 min read
Starting with Kubernetes v1.20 we can configure a startupProbe: it checks whether the container has come into service, disabling the liveness and readiness checks until it succeeds.
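A minimal sketch (the path, port and thresholds are illustrative): with failureThreshold: 30 and periodSeconds: 10 the container gets up to 300 seconds to start before the liveness and readiness probes take over.

startupProbe:
  httpGet:
    path: /healthz       # assumes the app exposes a health endpoint here
    port: 8080
  failureThreshold: 30   # 30 attempts...
  periodSeconds: 10      # ...10 seconds apart = up to 300s to come up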
05/08/2021
Read more...
2 min read
In Kubernetes we can configure a PodDisruptionBudget (PDB) to tell the cluster, for a given set of Pods, how many of them can be disrupted at a time (for example during node drains for upgrades) while maintaining their overall availability.
This Kubernetes object graduated to GA in Kubernetes v1.21.
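As an illustrative sketch reusing the cluster autoscaler from the earlier example (the namespace and label are assumptions about how those Pods are deployed), this PDB keeps at least one replica running during voluntary disruptions such as node drains; note the policy/v1 API, the GA version.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: autoscaler-pdb
  namespace: autoscaler
spec:
  minAvailable: 1                 # never evict below one running Pod
  selector:
    matchLabels:
      app.kubernetes.io/name: aws-cluster-autoscaler   # assumed Pod label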
03/08/2021
Read more...