Stream Kubernetes Events to Elasticsearch with Kafka & Logstash
This guide walks you through deploying the Kubernetes Event Exporter, packaging its Helm chart and Docker image, configuring Kafka TLS secrets, setting up Logstash to ingest events and forward them to Elasticsearch, creating an index template, and verifying the end‑to‑end pipeline.
Deploy Event Collector
The Kubernetes Event Exporter watches the events API and captures transient cluster events (pod rescheduling, node image GC failures, HPA triggers, and so on). The API server prunes events after a short TTL (one hour by default), so exporting them is the only way to persist and analyze them.
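For a sense of what will be shipped, you can preview the short-lived stream the exporter will persist:
<code># Recent events across all namespaces, oldest first; entries vanish once the event TTL expires
kubectl get events -A --sort-by=.lastTimestamp</code>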
Download Helm Chart and Image
<code>helm repo add bitnami "https://helm-charts.itboon.top/bitnami" --force-update
helm pull bitnami/kubernetes-event-exporter --version 3.4.1
helm push kubernetes-event-exporter-3.4.1.tgz oci://core.jiaxzeng.com/plugins</code>
<code>sudo docker pull bitnami/kubernetes-event-exporter:1.7.0-debian-12-r27
sudo docker tag bitnami/kubernetes-event-exporter:1.7.0-debian-12-r27 core.jiaxzeng.com/library/kubernetes-event-exporter:1.7.0-debian-12-r27
sudo docker push core.jiaxzeng.com/library/kubernetes-event-exporter:1.7.0-debian-12-r27</code>
Configure Event Collector
Extract CA, certificate, and private key from a PKCS#12 keystore.
Create a Kubernetes secret containing the extracted files.
Adjust the Helm values to point at the private image registry and to set the replica count, resource limits, logging, and Kafka sink configuration. The three snippets below cover these steps in order.
<code># Get private key (the file name must match the key referenced in values.yaml below)
openssl pkcs12 -in /app/kafka/pki/kafka.server.keystore.p12 -nocerts -nodes -out /tmp/kafka-client.key
# Get client certificate
openssl pkcs12 -in /app/kafka/pki/kafka.server.keystore.p12 -clcerts -nokeys -out /tmp/kafka-client.crt
# Get CA certificate
openssl pkcs12 -in /app/kafka/pki/kafka.server.keystore.p12 -cacerts -nokeys -chain -out /tmp/kafka-ca.crt</code>
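Optionally, sanity-check the extracted pair before wiring it into a secret (this assumes an RSA key; for an EC key compare `openssl ec` output instead):
<code># The two digests must match if the certificate and key belong together
openssl x509 -in /tmp/kafka-client.crt -noout -modulus | openssl md5
openssl rsa -in /tmp/kafka-client.key -noout -modulus | openssl md5</code>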
<code>kubectl -n obs-system create secret generic kafka-ssl-secret \
  --from-file=/tmp/kafka-ca.crt \
  --from-file=/tmp/kafka-client.crt \
  --from-file=/tmp/kafka-client.key</code>
<code># values.yaml excerpt
global:
  security:
    allowInsecureImages: true
image:
  registry: core.jiaxzeng.com
  repository: library/kubernetes-event-exporter
  tag: 1.7.0-debian-12-r27
fullnameOverride: kubernetes-event-exporter
replicaCount: 2
resources:
  requests:
    cpu: 1
    memory: 512Mi
  limits:
    cpu: 2
    memory: 1024Mi
config:
  logLevel: info
  logFormat: json
  route:
    routes:
      - match: []
        receiver: kafka
  receivers:
    - name: "kafka"
      kafka:
        version: "3.7.2"
        clientId: "kubernetes-event"
        topic: "kube-event"
        brokers:
          - "172.139.20.17:9093"
          - "172.139.20.81:9093"
          - "172.139.20.177:9093"
        compressionCodec: "snappy"
        tls:
          enable: true
          certFile: "/data/pki/kafka-client.crt"
          keyFile: "/data/pki/kafka-client.key"
          caFile: "/data/pki/kafka-ca.crt"
        # sasl: not used for this listener
extraVolumes:
  - name: kafka-ssl
    secret:
      secretName: kafka-ssl-secret
extraVolumeMounts:
  - mountPath: /data/pki/
    name: kafka-ssl</code>
Install Event Exporter
<code>helm -n obs-system install kubernetes-event-exporter -f kubernetes-event-exporter-values.yaml kubernetes-event-exporter</code>
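Before moving on, it is worth confirming that both replicas reach Ready and the exporter connected to Kafka (the deployment name follows the fullnameOverride set above):
<code>kubectl -n obs-system get deploy kubernetes-event-exporter
# Look for a successful broker connection rather than TLS handshake errors
kubectl -n obs-system logs deploy/kubernetes-event-exporter --tail=20</code>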
Deploy Logstash to Consume Events
Logstash reads the events from Kafka and forwards them to Elasticsearch.
<code># k8s-events.conf
input {
  kafka {
    bootstrap_servers => "172.139.20.17:9095,172.139.20.81:9095,172.139.20.177:9095"
    topics => ["kube-event"]
    group_id => "kube-event"
    codec => "json"  # the exporter publishes events as JSON documents
    security_protocol => "SASL_SSL"
    sasl_mechanism => "SCRAM-SHA-512"
    sasl_jaas_config => "org.apache.kafka.common.security.scram.ScramLoginModule required username='admin' password='admin-password';"
    ssl_truststore_location => "/usr/share/logstash/certs/kafka/kafka.server.truststore.p12"
    ssl_truststore_password => "truststore_password"
    ssl_truststore_type => "PKCS12"
  }
}
output {
  elasticsearch {
    hosts => ["https://elasticsearch.obs-system.svc:9200"]
    ilm_enabled => true
    ilm_rollover_alias => "kube-event"
    ilm_pattern => "{now/d}-000001"
    ilm_policy => "jiaxzeng"
    manage_template => false
    template_name => "kube-event"
    user => "elastic"
    password => "admin@123"
    ssl => true
    ssl_certificate_verification => true
    truststore => "/usr/share/logstash/certs/es/http.p12"
    truststore_password => "http.p12"
  }
}</code>
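The pipeline file and both truststores must be visible inside the Logstash pod. A minimal logstash-values.yaml sketch, assuming the Elastic Logstash chart (its `logstashPipeline` and `secretMounts` keys; other charts use different value names, and the secret names here are placeholders):
<code># logstash-values.yaml (sketch; adapt key and secret names to your chart)
logstashPipeline:
  k8s-events.conf: |
    # paste the pipeline shown above
secretMounts:
  - name: kafka-truststore
    secretName: kafka-truststore   # hypothetical secret holding kafka.server.truststore.p12
    path: /usr/share/logstash/certs/kafka
  - name: es-http-cert
    secretName: es-http-cert       # hypothetical secret holding http.p12
    path: /usr/share/logstash/certs/es</code>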
Create Elasticsearch Index Template
<code>PUT _index_template/kube-event
{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "jiaxzeng",
          "rollover_alias": "kube-event"
        },
        "number_of_shards": "3",
        "number_of_replicas": "1"
      }
    }
  },
  "index_patterns": ["kube-event*"]
}</code>
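Confirm the template registered (Kibana Dev Tools syntax, same as the PUT above):
<code># Should echo back the lifecycle settings and index_patterns defined above
GET _index_template/kube-event</code>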
Upgrade Logstash
<code>helm -n obs-system upgrade logstash -f logstash-values.yaml logstash</code>
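If a truststore path or password is wrong, Logstash reports it at startup, so tail the logs right after the upgrade (the pod label depends on your chart; `app.kubernetes.io/name=logstash` is a common default):
<code>kubectl -n obs-system get pods -l app.kubernetes.io/name=logstash
# Substitute the actual pod name; look for "Pipeline started" rather than SSL errors
kubectl -n obs-system logs <logstash-pod> --tail=50</code>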
Verification
Check the Kafka consumer offsets and query Elasticsearch to confirm events are being stored; if data shows up in both places, the pipeline works end to end. First, the per-partition offsets of the kube-event topic:
<code>bin/kafka-get-offsets.sh --bootstrap-server 172.139.20.17:9092 --topic kube-event
kube-event:0:5
kube-event:1:5
kube-event:2:4</code>
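Then query Elasticsearch; the document count should track the Kafka offsets above (Dev Tools syntax again):
<code>GET kube-event*/_count

# Inspect the newest event document
GET kube-event*/_search
{
  "size": 1,
  "sort": [{ "@timestamp": "desc" }]
}</code>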
Conclusion
By routing Kubernetes events through Kafka and Logstash into Elasticsearch, operators gain a durable record of cluster activity, enabling fault traceback, trend analysis, and automated remediation driven by event frequency.