2 min read | by Jordi Prats
If you try to run too many Pods on a handful of nodes you might eventually exhaust the per-node Pod capacity. Using kubectl get pods you'll see the affected Pods marked with the status OutOfpods:
```
$ kubectl get pods
(...)
test   deploy-test-84b4fdcbbd-59hvf   0/1   ContainerCreating   0   44s
test   deploy-test-84b4fdcbbd-7dvs9   0/1   OutOfpods           0   62s
test   deploy-test-84b4fdcbbd-btrwz   0/1   OutOfpods           0   4m16s
test   deploy-test-84b4fdcbbd-gpkkg   0/1   OutOfpods           0   91s
test   deploy-test-84b4fdcbbd-hbbdv   0/1   OutOfpods           0   67s
test   deploy-test-84b4fdcbbd-j75x4   0/1   OutOfpods           0   68s
test   deploy-test-84b4fdcbbd-s4qzz   0/1   OutOfpods           0   64s
(...)
```
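To confirm that the nodes are the bottleneck, we can compare each node's allocatable Pod count against what is already scheduled on it. A quick way to do this (the column names are just labels chosen for this example) is:

```shell
# List each node's allocatable Pod capacity
kubectl get nodes -o custom-columns=NAME:.metadata.name,MAXPODS:.status.allocatable.pods

# Show the Pods currently scheduled on a specific node
# (replace my-node with one of the node names from the previous command)
kubectl describe node my-node | grep -A 3 'Non-terminated Pods'
```

If the number of non-terminated Pods on every node has reached its allocatable capacity, any additional Pods scheduled there will be rejected with OutOfpods.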
Each Kubernetes flavor might have its own way of changing the maximum number of Pods per node, but there are some hard limits. For example, due to the IP addresses available to each node, on GKE there's a hard limit of just 110 Pods per node.
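On clusters where you manage the kubelet yourself, the per-node limit is controlled by the maxPods setting of the kubelet configuration (or the equivalent --max-pods flag). A minimal sketch of a KubeletConfiguration raising it, assuming the node's CIDR range and resources can actually accommodate the extra Pods:

```yaml
# /var/lib/kubelet/config.yaml (path may vary by distribution)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Maximum number of Pods this node will accept; the default is 110
maxPods: 200
```

The kubelet needs to be restarted for the change to take effect, and managed offerings (GKE, EKS, AKS) typically expose this through their own node-pool settings rather than letting you edit the kubelet config directly.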
If we take a look at the Kubernetes documentation for large clusters, it specifies that Kubernetes is designed to handle configurations that meet all of the following criteria:

- No more than 110 Pods per node
- No more than 5,000 nodes
- No more than 150,000 total Pods
- No more than 300,000 total containers
If we try to push past these limits we might run into other issues.
Posted on 11/03/2022