Why Choose Loki Over ELK? A Hands‑On Guide to Deploying and Using Grafana Loki
This article explains the motivations for selecting Grafana Loki instead of ELK/EFK, introduces its core concepts and features, provides step‑by‑step deployment instructions for Promtail and Loki, and demonstrates how to configure Grafana, query logs, and handle label indexing, dynamic tags, and high‑cardinality challenges.
1. Introduction
When designing a container‑cloud log solution, the author found ELK/EFK too heavy and chose Grafana Loki, an open‑source, horizontally scalable, multi‑tenant log aggregation system optimized for Prometheus and Kubernetes.
Project address: https://github.com/grafana/loki/
2. Features
Loki does not index full log content; it stores compressed logs and only indexes metadata.
It uses the same label‑based indexing as Prometheus, enabling efficient grouping and alertmanager integration.
Optimized for Kubernetes pod logs; pod labels are automatically indexed.
Native Grafana support eliminates the need to switch between Grafana and Kibana.
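The label model above can be sketched as a LogQL stream selector, which works just like a Prometheus selector (the label values here are the ones defined in the Promtail config later in this guide):
<code>{job="varlogs", host="yourhost"}      # all streams carrying these labels
{job=~"varlogs|apache"} |= "error"    # regex label matcher, then a line filter</code>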
3. Deployment
3.1 Local installation
Download Promtail and Loki
<code>wget https://github.com/grafana/loki/releases/download/v2.2.1/loki-linux-amd64.zip
wget https://github.com/grafana/loki/releases/download/v2.2.1/promtail-linux-amd64.zip</code>
Install Promtail
<code>$ mkdir /opt/app/{promtail,loki} -pv

# promtail configuration
$ cat <<EOF > /opt/app/promtail/promtail.yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /var/log/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          host: yourhost
          __path__: /var/log/*.log
EOF

unzip promtail-linux-amd64.zip
mv promtail-linux-amd64 /opt/app/promtail/promtail

# systemd service
$ cat <<EOF > /etc/systemd/system/promtail.service
[Unit]
Description=promtail server
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/opt/app/promtail/promtail -config.file=/opt/app/promtail/promtail.yaml
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=promtail

[Install]
WantedBy=default.target
EOF

systemctl daemon-reload
systemctl restart promtail
systemctl status promtail</code>
Install Loki
<code>$ mkdir /opt/app/{promtail,loki} -pv

# loki configuration
$ cat <<EOF > /opt/app/loki/loki.yaml
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

ingester:
  wal:
    enabled: true
    dir: /opt/app/loki/wal
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h
  max_chunk_age: 1h
  chunk_target_size: 1048576
  chunk_retain_period: 30s
  max_transfer_retries: 0

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /opt/app/loki/boltdb-shipper-active
    cache_location: /opt/app/loki/boltdb-shipper-cache
    cache_ttl: 24h
    shared_store: filesystem
  filesystem:
    directory: /opt/app/loki/chunks

compactor:
  working_directory: /opt/app/loki/boltdb-shipper-compactor
  shared_store: filesystem

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

ruler:
  storage:
    type: local
    local:
      directory: /opt/app/loki/rules
  rule_path: /opt/app/loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true
EOF

unzip loki-linux-amd64.zip
mv loki-linux-amd64 /opt/app/loki/loki

# systemd service
$ cat <<EOF > /etc/systemd/system/loki.service
[Unit]
Description=loki server
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/opt/app/loki/loki -config.file=/opt/app/loki/loki.yaml
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=loki

[Install]
WantedBy=default.target
EOF

systemctl daemon-reload
systemctl restart loki
systemctl status loki</code>
4. Usage
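Before moving on to Grafana, you can sanity-check that Loki accepts writes over its HTTP push API. The sketch below uses only the Python standard library and assumes Loki is listening on localhost:3100 as configured above; the function names are illustrative, not part of any Loki client library:

```python
import json
import time
import urllib.request

def build_push_payload(labels, lines):
    # Loki's /loki/api/v1/push endpoint expects streams keyed by a label set,
    # with values as [<nanosecond-timestamp string>, <log line>] pairs.
    ts = str(time.time_ns())
    return {"streams": [{"stream": labels,
                         "values": [[ts, line] for line in lines]}]}

def push(loki_url, labels, lines):
    body = json.dumps(build_push_payload(labels, lines)).encode()
    req = urllib.request.Request(
        loki_url + "/loki/api/v1/push",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    # Loki answers 204 No Content on a successful push.
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage, with Loki running:
# push("http://localhost:3100", {"job": "smoke-test"}, ["hello loki"])
```

If the call returns 204, the pushed line should then be queryable in Grafana with `{job="smoke-test"}`.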
4.1 Add Loki data source in Grafana
In Grafana, add Loki as a data source and set the URL to your Loki server address, e.g. http://localhost:3100 for the local install above, then save.
Switch to the Explore view to query logs.
Click “Log labels” to view available labels and filter logs, e.g., select /var/log/messages.
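If you manage Grafana declaratively, the same data source can also be provisioned from a file instead of the UI. A minimal sketch; the file path and URL are assumptions, adjust them to your environment:
<code># /etc/grafana/provisioning/datasources/loki.yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://localhost:3100
    isDefault: true</code>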
4.2 Query logs in Explore
<code>rate({job="varlogs"} |= "kubelet" [5m])</code>
This counts log lines containing "kubelet" in the varlogs job, as a per-second rate over the last five minutes.
4.3 Indexing only labels
Loki indexes only labels, not full log content, which reduces storage and speeds up queries.
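In practice this means every query has two parts: a stream selector, resolved against the label index, and optional filters, applied by scanning the matching chunks. For example (label values taken from the Promtail config above):
<code># index lookup only
{job="varlogs"}

# index lookup, then a brute-force scan of the selected chunks
{job="varlogs"} |= "error" != "timeout"</code>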
4.4 Dynamic and high‑cardinality labels
Dynamic labels have non‑fixed values; high‑cardinality labels have many possible values (e.g., IP addresses), which can create a large number of streams and increase resource usage.
Example: extracting action and status_code from Apache access logs
<code>scrape_configs:
  - job_name: system
    pipeline_stages:
      - regex:
          expression: "^(?P<ip>\\S+) (?P<identd>\\S+) (?P<user>\\S+) \\[(?P<timestamp>[\\w:/]+\\s[+\\-]\\d{4})\\] \"(?P<action>\\S+)\\s?(?P<path>\\S+)?\\s?(?P<protocol>\\S+)?\" (?P<status_code>\\d{3}|-) (?P<size>\\d+|-)"
      - labels:
          action:
          status_code:
    static_configs:
      - targets:
          - localhost
        labels:
          job: apache
          env: dev
          __path__: /var/log/apache.log</code>
This creates a separate stream for each combination of action and status_code.
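To see why this matters, note that the number of possible streams is the product of each label's cardinality. A back-of-the-envelope sketch (the cardinalities below are illustrative guesses, not measurements):

```python
from math import prod

# Illustrative cardinalities for the labels in the Apache example above.
label_cardinalities = {
    "job": 1,            # apache
    "env": 1,            # dev
    "action": 8,         # GET, POST, PUT, ...
    "status_code": 12,   # 200, 301, 404, 500, ...
}

streams = prod(label_cardinalities.values())
print(streams)  # 96 possible streams: manageable

# Adding a client-IP label multiplies this by the number of distinct IPs:
streams_with_ip = streams * 50_000  # e.g. 50k distinct clients
print(streams_with_ip)  # 4,800,000 streams: far too many
```

Each of those streams carries its own chunks and index entries, which is why a single unbounded label can dominate resource usage.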
High‑cardinality warning
Using a label like IP can generate thousands of streams, which may overwhelm Loki.
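Since Loki 2.0, LogQL can extract fields at query time instead, so high-cardinality values such as client IPs never need to become labels. A sketch, where the regexp is a simplified assumption about the log format:
<code>{job="apache"} | regexp "^(?P<ip>\\S+)" | ip = "10.0.0.42"</code>
The stream selector still uses only the cheap label index; the IP is parsed and filtered while scanning chunks.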
4.5 Full‑text indexing trade‑offs
Full‑text indexes are large and costly; Loki’s label‑only index is an order of magnitude smaller.
Query sharding
Loki splits queries into time‑based shards, opens the relevant chunks for matching streams, and processes them in parallel, allowing fast queries over massive log volumes.
Comparison with Elasticsearch
Elasticsearch maintains a large index continuously, consuming memory, whereas Loki builds temporary shards at query time.
Best practices
Keep the label set small and static; each additional label value multiplies the number of streams.
Add labels only when you actually need them for filtering.
More streams mean smaller chunks, so tune chunk_target_size and max_chunk_age so chunks can fill before they are flushed.
Logs must be time‑ordered; Loki rejects out‑of‑order data.