kubernetes: Get events across multiple objects

4 min read | by Jordi Prats

The most commonly used way to get events is by using kubectl describe on each object like this:

$ kubectl describe pod pet2cattle-6597f8464d-hgxpp
Name:         pet2cattle-6597f8464d-hgxpp
(...)
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  3m47s  default-scheduler  Successfully assigned kube-system/pet2cattle-6597f8464d-hgxpp to scopuli.lolcathost.systemadmin.es
  Normal   Pulled     3m46s  kubelet            Container image "172.18.1.46:5000/p2c:3.44" already present on machine
  Normal   Created    3m46s  kubelet            Created container pet2cattle-sitemap
  Normal   Started    3m46s  kubelet            Started container pet2cattle-sitemap
  Normal   Pulled     3m41s  kubelet            Container image "172.18.1.46:5000/p2c:3.44" already present on machine
  Normal   Created    3m41s  kubelet            Created container pet2cattle-indexer
  Normal   Started    3m40s  kubelet            Started container pet2cattle-indexer
  Normal   Pulled     3m32s  kubelet            Container image "172.18.1.46:5000/p2c:3.44" already present on machine
  Normal   Created    3m32s  kubelet            Created container pet2cattle
  Normal   Started    3m31s  kubelet            Started container pet2cattle
  Warning  Unhealthy  3m26s  kubelet            Liveness probe failed: Get http://10.42.0.8:8000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

It's quite convenient when we are looking for events related to a given object, but it becomes a pain if we need to see how events are triggered across multiple objects.

Another way is to get the events directly from the resources API. This can be done using kubectl get events, which displays recent events for all resources in the current namespace:

$ kubectl get events
LAST SEEN   TYPE      REASON              OBJECT                             MESSAGE
19m         Normal    Issuing             certificate/le-crt                 Issuing certificate as Secret does not exist
19m         Normal    Generated           certificate/le-crt                 Stored new private key in temporary Secret resource "le-crt-fzrzm"
19m         Warning   Failed              certificate/le-crt                 The certificate request has failed to complete and will be retried: The CSR PEM requests a commonName that is not present in the list of dnsNames or ipAddresses. If a commonName is set, ACME requires that the value is also present in the list of dnsNames or ipAddresses: "ampa.systemadmin.es" does not exist in [] or []
19m         Normal    Requested           certificate/le-crt                 Created new CertificateRequest resource "le-crt-qlqgt"
114s        Normal    ScalingReplicaSet   deployment/pet2cattle              Scaled up replica set pet2cattle-6597f8464d to 2
114s        Normal    SuccessfulCreate    replicaset/pet2cattle-6597f8464d   Created pod: pet2cattle-6597f8464d-hgxpp
114s        Normal    Scheduled           pod/pet2cattle-6597f8464d-hgxpp    Successfully assigned kube-system/pet2cattle-6597f8464d-hgxpp to scopuli.lolcathost.systemadmin.es
113s        Normal    Pulled              pod/pet2cattle-6597f8464d-hgxpp    Container image "172.18.1.46:5000/p2c:3.44" already present on machine
113s        Normal    Created             pod/pet2cattle-6597f8464d-hgxpp    Created container pet2cattle-sitemap
113s        Normal    Started             pod/pet2cattle-6597f8464d-hgxpp    Started container pet2cattle-sitemap
108s        Normal    Pulled              pod/pet2cattle-6597f8464d-hgxpp    Container image "172.18.1.46:5000/p2c:3.44" already present on machine
108s        Normal    Created             pod/pet2cattle-6597f8464d-hgxpp    Created container pet2cattle-indexer
107s        Normal    Started             pod/pet2cattle-6597f8464d-hgxpp    Started container pet2cattle-indexer
99s         Normal    Pulled              pod/pet2cattle-6597f8464d-hgxpp    Container image "172.18.1.46:5000/p2c:3.44" already present on machine
99s         Normal    Created             pod/pet2cattle-6597f8464d-hgxpp    Created container pet2cattle
98s         Normal    Started             pod/pet2cattle-6597f8464d-hgxpp    Started container pet2cattle
93s         Warning   Unhealthy           pod/pet2cattle-6597f8464d-hgxpp    Liveness probe failed: Get http://10.42.0.8:8000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

Here we can see events from a certificate, a deployment, a replicaset and a pod. The events related to the certificate are unrelated, as the significant time difference shows. But we can see how modifying the deployment triggers a change on the replicaset, which in turn creates a new pod.
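If we are only interested in one of these objects, we can narrow the listing down with a field selector instead of eyeballing the whole table. A couple of examples, using the pod name from the listing above:

```shell
# Show only the events whose involved object is this specific pod
kubectl get events --field-selector involvedObject.name=pet2cattle-6597f8464d-hgxpp

# Show only warnings, for every object in the namespace
kubectl get events --field-selector type=Warning
```

Both selectors can be combined by separating them with a comma.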

We can see it in action better by triggering some events ourselves, rescaling a deployment:

$ kubectl scale deploy sonarqube-sonarqube --replicas=2
deployment.apps/sonarqube-sonarqube scaled

Checking the events again, we can see what this command has triggered:

$ kubectl get events
LAST SEEN   TYPE     REASON                   OBJECT                                                MESSAGE
46s         Normal   SuccessfullyReconciled   targetgroupbinding/k8s-sonarqub-sonarqub-310a03dce0   Successfully reconciled
51s         Normal   Scheduled                pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Successfully assigned sonarqube/sonarqube-sonarqube-86d6fb8b6d-zsfdt to tachi.pet2cattle.com
49s         Normal   Pulling                  pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Pulling image "busybox:1.32"
48s         Normal   Pulled                   pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Successfully pulled image "busybox:1.32"
47s         Normal   Created                  pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Created container init-sysctl
47s         Normal   Started                  pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Started container init-sysctl
46s         Normal   Pulling                  pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Pulling image "rjkernick/alpine-wget:latest"
45s         Normal   Pulled                   pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Successfully pulled image "rjkernick/alpine-wget:latest"
44s         Normal   Created                  pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Created container install-plugins
43s         Normal   Started                  pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Started container install-plugins
37s         Normal   Pulling                  pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Pulling image "sonarqube:8.4.2-community"
25s         Normal   Pulled                   pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Successfully pulled image "sonarqube:8.4.2-community"
13s         Normal   Created                  pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Created container sonarqube
12s         Normal   Started                  pod/sonarqube-sonarqube-86d6fb8b6d-zsfdt              Started container sonarqube
51s         Normal   SuccessfulCreate         replicaset/sonarqube-sonarqube-86d6fb8b6d             Created pod: sonarqube-sonarqube-86d6fb8b6d-zsfdt
51s         Normal   ScalingReplicaSet        deployment/sonarqube-sonarqube                        Scaled up replica set sonarqube-sonarqube-86d6fb8b6d to 2

Even though it can give you an overview of what's going on, to get the details we will still have to use kubectl describe on the relevant object.
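One caveat worth knowing: the order in which kubectl get events prints its rows is not guaranteed to follow the event timestamps, so for a busy namespace it can be handy to sort them explicitly, or to follow them live:

```shell
# Sort events by creation time so the most recent ones come last
kubectl get events --sort-by=.metadata.creationTimestamp

# Stream new events as they happen, across all namespaces
kubectl get events -w -A
</imports>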


Posted on 15/03/2021