What Are Kubernetes Events and How to Collect Them
Kubernetes events record cluster state changes such as pod scheduling, image pulls, and failures. They can be inspected with kubectl, but are retained for only one hour, so tools like kube-eventer or kubernetes-event-exporter are used to collect them for long-term analysis, enabling monitoring of Warning-type events and failure reasons, and visualization through Grafana dashboards.
A Kubernetes event is a record of a state change in the cluster. Components report runtime events to the API server, which stores them in etcd on the master. To avoid excessive storage usage, only the most recent hour of events is retained by default.
When a pod is created, you can see a series of events such as Scheduled, Pulling, Pulled, Created, and Started. If any step fails, the corresponding event makes the problem visible.
Viewing events for a specific resource:
kubectl describe pod pod-name
kubectl describe hpa hpa-name
To view a single event in JSON format:
kubectl get event -n php-app "gxxx-xxxs-hpa.17834ddfc52766a6" -o json
To list all events in a namespace:
kubectl get events -n namespace
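The JSON returned by the commands above can be post-processed with a short script. A minimal sketch in Python, using a made-up sample event (the field names follow the Event schema, but the values are purely illustrative):

```python
import json

# A trimmed sample of what `kubectl get event ... -o json` returns;
# the values here are invented for illustration.
raw = """
{
  "type": "Warning",
  "reason": "FailedScheduling",
  "message": "0/3 nodes are available: insufficient cpu.",
  "count": 5,
  "involvedObject": {"namespace": "php-app", "name": "web-7d9c"},
  "source": {"component": "default-scheduler"}
}
"""

event = json.loads(raw)
obj = event["involvedObject"]

# One-line summary: type, resource, reason, occurrence count, message.
print(f'{event["type"]:8} {obj["namespace"]}/{obj["name"]} '
      f'{event["reason"]} x{event["count"]}: {event["message"]}')
```

The same parsing works on the list form (`kubectl get events -o json`), where each element of `.items` has this shape.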
Because command‑line inspection is inconvenient and events expire after one hour, it is common to collect them into a log system for long‑term analysis.
Tools and methods for event collection:
kube-eventer, open-sourced by Alibaba Cloud (Aliyun) – collects Kubernetes events and forwards them to Alibaba Cloud SLS.
kubernetes-event-exporter, open-sourced by Opsgenie – can export events to Loki and other back-ends.
Example deployment of kube-eventer on a self‑built cluster:
Create a Logstore named k8s-event.
Obtain an Alibaba Cloud AccessKey ID/Secret with SLS permissions.
Configure the deployment with parameters such as endpoint, project, logStore, regionId, accessKeyId, and accessKeySecret (replace the -sink= argument accordingly).
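Assembled into the deployment, the parameters above might look like the following sketch. The image reference, flag names, and especially the sink URL syntax are assumptions to verify against the kube-eventer documentation; all angle-bracketed values are placeholders:

```yaml
# Hypothetical container spec fragment for kube-eventer (all values are placeholders).
containers:
  - name: kube-eventer
    image: registry.aliyuncs.com/acs/kube-eventer:latest
    command:
      - /kube-eventer
      - --source=kubernetes:https://kubernetes.default
      - --sink=sls:https://<endpoint>?project=<project>&logStore=k8s-event&regionId=<regionId>&accessKeyId=<accessKeyId>&accessKeySecret=<accessKeySecret>
```

In production, the AccessKey pair is better injected via a Secret than embedded in the pod spec.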
Verify the collector is running:
kubectl get pod -n kube-system | grep kube-eventer
Each event is stored as a JSON object with fields like:
involvedObject.namespace – the namespace of the event.
involvedObject.name – the name of the affected resource.
reason – why the event occurred (focus on reasons containing “fail”).
message – detailed description.
source.component – component that generated the event.
firstTimestamp / lastTimestamp – start and most recent times.
count – how many times the event has occurred.
type – Normal or Warning (monitor Warning events).
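The fields above are enough to prototype simple alerting filters. A minimal sketch, assuming the events have already been fetched into a list of dicts (the sample data below is made up):

```python
from collections import Counter

# Made-up sample events carrying the fields described above.
events = [
    {"type": "Normal", "reason": "Scheduled", "count": 1,
     "involvedObject": {"namespace": "php-app", "name": "web-1"}},
    {"type": "Warning", "reason": "FailedMount", "count": 3,
     "involvedObject": {"namespace": "php-app", "name": "web-2"}},
    {"type": "Warning", "reason": "BackOff", "count": 7,
     "involvedObject": {"namespace": "php-app", "name": "web-2"}},
    {"type": "Warning", "reason": "FailedScheduling", "count": 2,
     "involvedObject": {"namespace": "php-app", "name": "web-3"}},
]

# Keep only Warning events, as suggested above.
warnings = [e for e in events if e["type"] == "Warning"]

# Flag reasons containing "fail" (case-insensitive).
failures = [e for e in warnings if "fail" in e["reason"].lower()]

# Weight by the count field to rank the noisiest reasons.
by_reason = Counter()
for e in warnings:
    by_reason[e["reason"]] += e["count"]

print([e["reason"] for e in failures])   # ['FailedMount', 'FailedScheduling']
print(by_reason.most_common(1))          # [('BackOff', 7)]
```

The same filter logic can be applied to the output of `kubectl get events -A -o json` by iterating over its `.items` list.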
Key events to watch:
Type – only Warning events need attention.
Reason – look for reasons containing “fail”.
Component – the originating component.
Count – frequency of the event.
Building a Grafana dashboard to visualize events:
Overview – trend of total events and spikes of Warning types, top reasons.
Component statistics – node‑related, pod‑related, container‑related events.
Raw event details – drill‑down to individual event records.
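Before wiring these panels up in Grafana, the breakdowns can be prototyped with a short script. A sketch that buckets events by source component, mirroring the component statistics panel, using an invented event list:

```python
from collections import defaultdict

# Made-up events carrying the source.component field described earlier.
events = [
    {"type": "Warning", "reason": "NodeNotReady",
     "source": {"component": "node-controller"}},
    {"type": "Warning", "reason": "BackOff",
     "source": {"component": "kubelet"}},
    {"type": "Normal", "reason": "Pulled",
     "source": {"component": "kubelet"}},
    {"type": "Warning", "reason": "FailedScheduling",
     "source": {"component": "default-scheduler"}},
]

# Per-component totals split by event type.
stats = defaultdict(lambda: {"Normal": 0, "Warning": 0})
for e in events:
    stats[e["source"]["component"]][e["type"]] += 1

for component, counts in sorted(stats.items()):
    print(f'{component}: {counts["Warning"]} warning / {counts["Normal"]} normal')
```

In the real dashboard, the equivalent aggregation is expressed as a query against the log back-end (SLS, Loki, etc.) rather than computed in Python.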
Kubernetes defines only two event types, Normal and Warning. The source code for these definitions can be found at k8s.io/api/core/v1/types.go. Additional event reasons are listed in the kubelet source event.go.
37 Interactive Technology Team