How to Deploy Loki for Cloud‑Native Log Management with Promtail and Grafana
This guide explains Loki's lightweight cloud‑native logging architecture, shows step‑by‑step configuration of Promtail, Loki service, and Grafana integration, and provides concrete YAML and systemd examples for collecting and visualizing secure logs.
Overview
Loki is a cloud‑native, lightweight log aggregation system that integrates tightly with Prometheus and Grafana. It offers a cost‑effective alternative to traditional stacks such as ELK/EFK, Graylog, or Splunk.
Architecture
Collection layer: Promtail runs on each node, discovers log sources, rewrites labels, filters logs, and forwards them to Loki via the push API.
Core processing layer: Loki receives logs, builds indexes, and stores data. Storage is pluggable; for testing, use local directories such as /var/lib/loki/chunks and /var/lib/loki/index, while production deployments typically use object stores such as S3, GCS, or MinIO.
Visualization layer: Grafana connects to Loki as a data source and provides dashboards, time-series charts, raw log tables, and alert panels.
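To make the push API concrete: it accepts a JSON body containing one or more streams, each pairing a label set with log entries given as [nanosecond-timestamp, line] string pairs. A minimal Python sketch of that shape (build_push_payload is an illustrative helper, not part of any Loki client library):

```python
import json
import time

def build_push_payload(labels, lines):
    """Build a request body for Loki's /loki/api/v1/push endpoint.

    Each stream pairs a label set with log entries; every entry is a
    [nanosecond-timestamp, log-line] pair of strings.
    """
    now_ns = str(time.time_ns())
    return {
        "streams": [
            {
                "stream": labels,
                "values": [[now_ns, line] for line in lines],
            }
        ]
    }

payload = build_push_payload({"LogType": "secure"},
                             ["Accepted password for root from 192.168.1.10"])
print(json.dumps(payload, indent=2))
```

This is the same structure Promtail produces on your behalf; you would only build it by hand when pushing logs from a custom application.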
Promtail configuration
server:
  http_listen_port: 9085
  grpc_listen_port: 0
positions:
  filename: /opt/promtail/positions.yaml
clients:
  - url: http://192.168.202.41:3100/loki/api/v1/push
scrape_configs:
  - job_name: system_secure
    static_configs:
      - targets: []
        labels:
          LogType: secure
          __path__: /var/log/secure
Systemd unit (save as /etc/systemd/system/promtail.service)
[Unit]
Description=promtail
After=network-online.target
[Service]
Environment="OPTIONS=--config.file=/opt/promtail/config.yml"
ExecStart=/opt/promtail/promtail $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
[Install]
WantedBy=multi-user.target
Enable and start:
systemctl enable promtail --now
systemctl start promtail
Loki core configuration
auth_enabled: false
server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: info
  grpc_server_max_concurrent_streams: 1000
common:
  instance_addr: 0.0.0.0
  path_prefix: /app/tool/loki/loki_data
  storage:
    filesystem:
      chunks_directory: /app/tool/loki/loki_data/chunks
      rules_directory: /app/tool/loki/loki_data/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory
limits_config:
  retention_period: 720h
  max_query_lookback: 720h
  ingestion_rate_mb: 50
  ingestion_burst_size_mb: 100
compactor:
  working_directory: /app/tool/loki/loki_data/retention
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
  delete_request_store: filesystem
schema_config:
  configs:
    - from: 2025-10-25
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h
ruler:
  alertmanager_url: https://<alertmanager-host>:9093
analytics:
  reporting_enabled: false
Systemd unit for Loki (save as /etc/systemd/system/loki.service)
[Unit]
Description=loki
After=network-online.target
[Service]
Environment="OPTIONS=--config.file=/opt/loki/config.yml --expand-env=true"
ExecStart=/opt/loki/loki $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
[Install]
WantedBy=multi-user.target
Enable and start:
systemctl enable loki --now
systemctl start loki
Grafana integration
Add Loki as a data source (URL http://<loki-host>:3100).
Use Explore with LogQL queries, e.g. {LogType="secure"}, to retrieve and filter logs.
Create dashboards that visualize login time distribution, source IP geography, user statistics and success/failure ratios.
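Beyond the bare label selector, LogQL supports line filters and range aggregations, which are what dashboards like these are built on. A few illustrative queries against the LogType="secure" label defined earlier (the match strings assume standard sshd log lines; adjust to your log format):

```logql
{LogType="secure"} |= "Failed password"
{LogType="secure"} |= "Accepted"
count_over_time({LogType="secure"} |= "Failed password" [5m])
```

The first two feed raw-log panels for failed and successful logins; the count_over_time aggregation yields a time series suitable for a success/failure-ratio chart.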
Key considerations
Promtail stores its read offsets in positions.yaml to avoid duplicate ingestion after restarts.
Loki does not ship its own storage engine; choose a backend that matches scale and durability requirements.
Retention and ingestion limits should be tuned to the expected log volume (e.g., 50 MiB/s ingestion rate, 30‑day retention).
Compactor settings control how often chunks are compressed and old data deleted.
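For reference on the first point: the positions file is a small YAML map from each watched file path to the byte offset Promtail last read, which it resumes from after a restart. A sketch of what /opt/promtail/positions.yaml might contain (the offset value is illustrative):

```yaml
# Written and updated by Promtail itself; do not edit while Promtail runs.
positions:
  /var/log/secure: "184335"
```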
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations practice and aim to accompany you throughout your operations career.
