Kubernetes: How to evict a Pod from a node

3 min read | by Jordi Prats

When we don't have a Pod's resources correctly configured we might need to move it to a different node. Although we could change the nodeSelector or adjust the resources so that it gets scheduled on a different node, sometimes the issue is urgent and we need to get the Pod off the node right away. To do so we can use kubectl drain
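For context, moving the Pod by changing the nodeSelector would look something like this (assuming the Deployment is called pet2adm-green-adminkube, after its ReplicaSet; the workload label is made up for this example):

$ kubectl -n pet2adm-green patch deployment pet2adm-green-adminkube \
    --type merge \
    -p '{"spec":{"template":{"spec":{"nodeSelector":{"workload":"adminkube"}}}}}'

This triggers a rolling restart, but it also permanently ties the Pods to nodes carrying that label, which is probably not what we want for a one-off fix.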

At the end of the day what we really want is to "drain the node of that kind of Pod". As a by-product the node ends up cordoned, so we can be sure the Pod won't be scheduled again on the same node.
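For reference, draining a whole node usually looks like this (the extra flags are typically needed because DaemonSet-managed Pods can't be evicted and emptyDir data is lost on eviction):

$ kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

In our case, though, we only want to get rid of one specific kind of Pod, so we'll combine kubectl drain with a Pod selector.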

Let's assume we want to evict this Pod:

$ kubectl get pods -n pet2adm-green -o wide
NAME                                       READY   STATUS    RESTARTS   AGE    IP            NODE                                       NOMINATED NODE   READINESS GATES
pet2adm-green-adminkube-576fd4df98-hlb2d   2/2     Running   0          147m   10.12.16.57   ip-10-12-16-7.us-west-2.compute.internal   <none>           <none>

If we grep the kubectl get pods -A -o wide output for the node we want to evict the Pod from, we'll see that there is another Pod scheduled on the same node:

$ kubectl get pods -A -o wide  | grep ip-10-12-16-7.us-west-2.compute.internal
pet2adm-green         pet2adm-green-adminkube-576fd4df98-hlb2d            2/2     Running             0          147m    10.12.16.57    ip-10-12-16-7.us-west-2.compute.internal     <none>           <none>
spinnaker-green       spinnaker-halyard-0                                 1/1     Running             0          39m     10.12.16.140   ip-10-12-16-7.us-west-2.compute.internal     <none>           <none>
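Instead of grepping, we can also ask the API server to filter by node using a field selector on spec.nodeName, which returns the same Pods without the client-side grep:

$ kubectl get pods -A -o wide --field-selector spec.nodeName=ip-10-12-16-7.us-west-2.compute.internal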

We'll need some label to identify the Pod we want to evict; we can check its labels using kubectl describe:

$ kubectl describe pod pet2adm-green-adminkube-576fd4df98-hlb2d -n pet2adm-green
Name:         pet2adm-green-adminkube-576fd4df98-hlb2d
Namespace:    pet2adm-green
Priority:     0
Node:         ip-10-12-16-7.us-west-2.compute.internal/10.12.16.7
Start Time:   Fri, 22 Oct 2021 14:03:24 +0200
Labels:       app.kubernetes.io/instance=pet2adm-green
              app.kubernetes.io/name=adminkube
              pod-template-hash=576fd4df98
(...)
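If we are only interested in the labels, kubectl get with --show-labels prints them in a single LABELS column, which is handy for quickly spotting a usable selector:

$ kubectl get pod pet2adm-green-adminkube-576fd4df98-hlb2d -n pet2adm-green --show-labels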

Now that we have a label that identifies the Pod, we can drain the node where it currently sits, using that label as a Pod selector:

$ kubectl drain ip-10-12-16-7.us-west-2.compute.internal --pod-selector=app.kubernetes.io/name=adminkube
node/ip-10-12-16-7.us-west-2.compute.internal cordoned
evicting pod pet2adm-green/pet2adm-green-adminkube-576fd4df98-hlb2d
pod/pet2adm-green-adminkube-576fd4df98-hlb2d evicted
node/ip-10-12-16-7.us-west-2.compute.internal drained
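If we're not sure the selector matches only the Pods we intend to evict, kubectl drain also accepts --dry-run, so we can preview the eviction without actually performing it:

$ kubectl drain ip-10-12-16-7.us-west-2.compute.internal --pod-selector=app.kubernetes.io/name=adminkube --dry-run=client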

Checking again the Pods on that node, we can see that the one we wanted to evict is no longer there, while the other one is still running:

$ kubectl get pods -A -o wide  | grep ip-10-12-16-7.us-west-2.compute.internal
spinnaker-green       spinnaker-halyard-0                                 1/1     Running             0          41m     10.12.16.140   ip-10-12-16-7.us-west-2.compute.internal     <none>           <none>

Since the node where the Pod was running is now cordoned, the replacement Pod (assuming the evicted Pod belongs to a Deployment or some other controller that recreates it) will be scheduled on a different node:

$ kubectl get pods -n pet2adm-green -o wide
NAME                                       READY   STATUS     RESTARTS   AGE    IP              NODE                                           NOMINATED NODE   READINESS GATES
pet2adm-green-adminkube-576fd4df98-22h9f   0/2     Init:2/5   0          2m6s   10.12.16.21   ip-10-12-16-191.us-west-2.compute.internal   <none>           <none>
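We can also confirm the original node is still cordoned by checking its status:

$ kubectl get nodes ip-10-12-16-7.us-west-2.compute.internal

Its STATUS column will show Ready,SchedulingDisabled until we uncordon it.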

Once the new Pod is running on its new node, we can uncordon the original one so it becomes schedulable again:

$ kubectl uncordon ip-10-12-16-7.us-west-2.compute.internal
node/ip-10-12-16-7.us-west-2.compute.internal uncordoned

Posted on 25/10/2021