3 min read
When a Pod's resources are not correctly configured we might need to move it to a different node. Although we could change the nodeSelector or adjust the resources so that it gets scheduled on a different node, we might need to fix the issue urgently. To do so we can use kubectl drain
At the end of the day, what we really want is to "drain the node of that kind of Pods". As a by-product the node ends up being cordoned, so we can be sure the Pod won't be scheduled on the same node again.
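As a minimal sketch (the node name is just an example), we would drain the node and, once the issue is fixed, uncordon it so it can accept Pods again:
$ kubectl drain tycho.pet2cattle.com --ignore-daemonsets
$ kubectl uncordon tycho.pet2cattle.com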
25/10/2021
Read more...
3 min read
If you are using a mixed instances policy on your EKS workers' ASG, you will want to install the AWS Node Termination Handler to drain a node once AWS notifies you that a particular spot instance is going to be reclaimed
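A minimal sketch of installing it, assuming the eks-charts Helm repository (release name and namespace are just examples):
$ helm repo add eks https://aws.github.io/eks-charts
$ helm repo update
$ helm install aws-node-termination-handler eks/aws-node-termination-handler --namespace kube-system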
29/09/2021
Read more...
2 min read
Draining a node might fail with the message cannot delete Pods with local storage, as follows:
$ kubectl drain tycho.pet2cattle.com --ignore-daemonsets
node/tycho.pet2cattle.com already cordoned
error: unable to drain node "tycho.pet2cattle.com", aborting command...
There are pending nodes to be drained:
tycho.pet2cattle.com
error: cannot delete Pods with local storage (use --delete-emptydir-data to override): spinnaker-ampa/spin-rosco-658fdb4694-v99jt
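As the error message itself suggests, we can override this behaviour with the --delete-emptydir-data flag; keep in mind it deletes the emptyDir data of the affected Pods, so make sure that data can be safely discarded:
$ kubectl drain tycho.pet2cattle.com --ignore-daemonsets --delete-emptydir-data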
02/08/2021
Read more...
2 min read
You can use kubectl drain to evict Pods from a node and mark it as unschedulable, so that no new Pods get scheduled on it. It will allow the Pods' containers to gracefully terminate, respecting the PodDisruptionBudgets with a few exceptions. Let's test it using the following nodes:
$ kubectl get nodes
NAME                    STATUS   ROLES                  AGE   VERSION
nauvoo.pet2cattle.com   Ready    control-plane,master   19d   v1.20.4+k3s1
tycho.pet2cattle.com    Ready    <none>                 26s   v1.20.4+k3s1
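As a sketch of the test, we would drain tycho and then check that its STATUS shows SchedulingDisabled (the node name comes from the listing above):
$ kubectl drain tycho.pet2cattle.com --ignore-daemonsets
$ kubectl get node tycho.pet2cattle.com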
14/04/2021
Read more...