How to Deploy Logstash on Kubernetes for Real‑Time Container Log Collection
This guide walks you through installing Logstash on Kubernetes using Helm: configuring secure Kafka and Elasticsearch connections, setting resource limits, and verifying the deployment. The result is efficient real‑time collection, filtering, and indexing of container logs in a cloud‑native environment.
In the cloud‑native era, Kubernetes and Docker make container deployment routine, but managing the massive, dynamic logs they generate is a major challenge for operations teams.
The ELK stack (Elasticsearch + Logstash + Kibana) provides a powerful solution, with Logstash acting as the pipeline that collects, cleans, transforms, and forwards container logs.
Step 1 – Download the Logstash Helm chart
<code>$ helm repo add elastic https://helm.elastic.co --force-update
$ helm pull elastic/logstash --version 7.13.3</code>
Step 2 – Push the chart to a private repository
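Pushing to a private OCI registry usually requires authenticating Helm first. A minimal sketch, assuming the `core.jiaxzeng.com` registry used in this guide (the username is a placeholder; supply your own credentials):

```shell
# Log in to the private OCI registry before pushing (credentials are placeholders)
echo "$REGISTRY_PASSWORD" | helm registry login core.jiaxzeng.com \
  --username admin --password-stdin
```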
<code>$ helm push logstash-7.13.3.tgz oci://core.jiaxzeng.com/plugins</code>
Step 3 – Configure the Logstash deployment (values.yaml)
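Rather than writing the values file from scratch, you can start from the chart's defaults and override only what you need; a sketch assuming the `elastic` repo added in Step 1:

```shell
# Dump the chart's default values as a starting point for logstash-values.yaml
helm show values elastic/logstash --version 7.13.3 > logstash-values.yaml
```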
<code>fullnameOverride: logstash
replicas: 3
image: "core.jiaxzeng.com/library/logstash"
logstashJavaOpts: "-Xmx1g -Xms1g"
resources:
requests:
cpu: "100m"
memory: "1536Mi"
limits:
cpu: "1000m"
memory: "1536Mi"
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
httpPort: 9600
logstashPipeline:
logstash.conf: |
input {
kafka {
bootstrap_servers => "xxx:9095,xxx:9095,xxx:9095"
topics => ["k8s_logs"]
group_id => "k8s_logs"
security_protocol => "SASL_SSL"
sasl_mechanism => "SCRAM-SHA-512"
sasl_jaas_config => "org.apache.kafka.common.security.scram.ScramLoginModule required username='admin' password='admin-password';"
ssl_truststore_location => "/usr/share/logstash/certs/kafka/kafka.server.truststore.p12"
ssl_truststore_password => "truststore_password"
ssl_truststore_type => "PKCS12"
}
}
filter {
json { source => "message" }
mutate { remove_field => ["container","agent","log","input","ecs","host","@version","fields","@metadata"] }
}
output {
elasticsearch {
hosts => ["https://elasticsearch.obs-system.svc:9200"]
user => "elastic"
password => "admin@123"
ssl => true
ssl_certificate_verification => true
truststore => "/usr/share/logstash/certs/es/http.p12"
truststore_password => "http.p12"
ilm_enabled => true
ilm_rollover_alias => "k8s-logs"
ilm_pattern => "{now/d}-000001"
ilm_policy => "jiaxzeng"
manage_template => false
template_name => "k8s-logs"
}
}
extraEnvs:
- name: XPACK_MONITORING.ENABLED
value: "false"
secretMounts:
- name: kafka-ssl
secretName: kafka-ssl-secret
path: /usr/share/logstash/certs/kafka
- name: es-ssl
secretName: elastic-certificates
path: /usr/share/logstash/certs/es</code>Step 4 – Deploy Logstash
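The `secretMounts` entries in the values file reference two Secrets that must exist before the release is installed. A hedged sketch, assuming the truststore files from the pipeline config are available locally (the local file paths are assumptions):

```shell
# Create the Secrets referenced by secretMounts (local file names are assumptions)
kubectl -n obs-system create secret generic kafka-ssl-secret \
  --from-file=kafka.server.truststore.p12
kubectl -n obs-system create secret generic elastic-certificates \
  --from-file=http.p12
```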
<code>$ helm -n obs-system install logstash -f logstash-values.yaml \
    oci://core.jiaxzeng.com/plugins/logstash --version 7.13.3</code>
Step 5 – Verify the deployment
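Before inspecting pods, Helm itself can confirm that the release went through; a quick sketch using the release name and namespace from Step 4:

```shell
# STATUS should read "deployed" for the logstash release
helm -n obs-system status logstash
helm -n obs-system list
```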
<code>$ kubectl -n obs-system get pod -l app=logstash
# Expected output: three running Logstash pods
$ kubectl -n obs-system get pod -l app=logstash -o wide
# Check that indices are created in Elasticsearch (see Kibana)</code>
Tip: Ensure the ILM policy and index template are created in Elasticsearch before deploying Logstash.
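The ILM policy can be created through the Elasticsearch REST API. A minimal sketch, assuming the `jiaxzeng` policy name from the pipeline config; the rollover thresholds are illustrative, not values from this guide:

```shell
# Create the ILM policy referenced by ilm_policy (thresholds are illustrative)
curl -k -u elastic -X PUT \
  "https://elasticsearch.obs-system.svc:9200/_ilm/policy/jiaxzeng" \
  -H 'Content-Type: application/json' -d '
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "7d" }
        }
      }
    }
  }
}'
```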
By centralizing container logs with Logstash, you gain real‑time observability, cost‑effective cold‑data archiving, and intelligent alerting based on log levels.
Linux Ops Smart Journey
The operations journey never stops—pursuing excellence endlessly.