Step‑by‑Step Guide to Install, Configure, and Use Grafana Mimir for Scalable Prometheus Monitoring
This tutorial walks through both command‑line and Docker‑Compose installations of Grafana Mimir; shows how to configure Prometheus remote‑write, set up Grafana data sources, and create recording and alerting rules; and explains key Mimir features such as multi‑tenancy, hash rings, object storage, HA tracking, and retention policies.
Overview
Grafana Mimir is an open‑source, horizontally scalable, highly available, multi‑tenant long‑term storage solution for Prometheus and OpenTelemetry metrics.
Installation Methods
Command‑line (imperative) – start a single Mimir process per command.
Declarative (Docker‑Compose) – launch a full Mimir cluster with load‑balancing.
Command‑line Installation
Download the latest release (v2.17 at the time of writing). Either pull the Docker image:

```bash
docker pull grafana/mimir:latest
```

Or download the binary directly:

```bash
curl -fLo mimir https://github.com/grafana/mimir/releases/latest/download/mimir-linux-amd64
chmod +x mimir
```

Create a minimal demo.yaml configuration (single‑node, no load balancer):
```yaml
# Example configuration – not for production!
multitenancy_enabled: false
blocks_storage:
  backend: filesystem
  filesystem:
    dir: /opt/mimir/data/tsdb
  tsdb:
    dir: /opt/mimir/tsdb
server:
  http_listen_port: 9009
  log_level: error
```

Set the working directory and start Mimir:

```bash
export DEPLOY_PATH=/your/workdir
./mimir --config.file=./demo.yaml
```

Mimir will listen on port 9009.
Declarative Installation (Docker‑Compose)
Clone the repository and start the stack:
```bash
git clone https://github.com/grafana/mimir.git
cd mimir/docs/sources/mimir/get-started/play-with-grafana-mimir/
docker compose up -d
```

The compose file launches three Mimir instances (for high availability), a MinIO S3‑compatible object store, and a Prometheus scraper. The instances communicate via a memberlist cluster and are exposed through an Nginx load balancer on port 9009. Grafana is available at http://localhost:3000 (default credentials admin/admin).
Configuration
Prometheus Remote‑Write
Add a remote‑write target pointing to Mimir:
```yaml
remote_write:
  - url: http://localhost:9009/api/v1/push
```

For container deployments use the service name:

```yaml
remote_write:
  - url: http://mimir:9009/api/v1/push
```

Reload Prometheus (or enable --web.enable-lifecycle for hot‑reload):

```bash
curl -X POST http://localhost:9090/-/reload
```

Grafana Data Source
In the Grafana UI (http://localhost:3000), add a new Prometheus data source pointing to the Mimir endpoint:

Binary mode: http://localhost:9009/prometheus
Docker mode: http://mimir:9009/prometheus

Give the data source a custom name (e.g., mimir) and save.
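As an alternative to clicking through the UI, the data source can be provisioned declaratively. The following is a sketch of a Grafana provisioning file for the Docker deployment; the file path is an assumption, and the data source name matches the one suggested above:

```yaml
# Place under Grafana's provisioning directory,
# e.g. /etc/grafana/provisioning/datasources/mimir.yaml (path is an assumption)
apiVersion: 1
datasources:
  - name: mimir
    type: prometheus
    access: proxy
    url: http://mimir:9009/prometheus
    isDefault: true
```

Grafana loads provisioning files at startup, so restart the Grafana container after adding the file.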
Recording Rules
Create a data‑source‑managed recording rule, e.g., sum:up, which aggregates the up metric across all Mimir instances every minute.
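The same rule can be expressed as a standard Prometheus rule group and uploaded to the Mimir ruler; the sketch below assumes a hypothetical group name, with the 1‑minute interval from the text:

```yaml
groups:
  - name: mimir-recording   # group name is an assumption
    interval: 1m
    rules:
      - record: sum:up
        expr: sum(up)
```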
Alerting Rules
The example alert fires when at least one of the three Mimir instances is down, i.e., the number of running instances drops below three:

Expression: count(up == 0)
For: 30s
Labels:
  severity: critical
Annotations:
  summary: "Mimir instance down"

Configure a contact point (e.g., email) in Grafana Alerting. Note that email notifications require an external SMTP server.
Mimir Features
Configuration
Configuration can be supplied via a YAML file or CLI flags. Precedence (later overrides earlier):
YAML common configuration
YAML component‑specific configuration
CLI common flags
CLI component‑specific flags
Runtime configuration files can be used for per‑tenant overrides (e.g., different ingestion rates) with -runtime-config.file=<path>. The configuration can be queried via /config or /runtime_config HTTP endpoints.
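A runtime configuration file keyed by tenant ID might look like the following sketch; the tenant name and limit values are assumptions, but `ingestion_rate` and `max_global_series_per_user` are standard per‑tenant limits:

```yaml
# Passed via -runtime-config.file=<path>; reloaded periodically without a restart
overrides:
  tenant-a:                          # hypothetical tenant ID
    ingestion_rate: 50000            # samples per second
    max_global_series_per_user: 200000
```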
Multi‑Tenant Support
Tenant IDs are strings up to 150 characters containing alphanumerics and the symbols ! - _ . * '. Reserved IDs such as ., .., and __mimir_cluster are prohibited.
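When multitenancy is enabled, clients identify their tenant via the X-Scope-OrgID HTTP header on every request. A sketch of a Prometheus remote‑write block writing as a hypothetical tenant `tenant-a`:

```yaml
remote_write:
  - url: http://mimir:9009/api/v1/push
    headers:
      X-Scope-OrgID: tenant-a   # hypothetical tenant ID
```

Queries must send the same header, which Grafana supports via custom HTTP headers on the data source.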
Anonymous Usage Statistics
Usage stats are enabled by default and can be disabled with:
```yaml
usage_stats:
  enabled: false
```

Hash Rings
Components (ingesters, distributors, compactors, store‑gateways, etc.) use a consistent‑hash ring stored in a key‑value backend (memberlist, Consul, etcd, or multi‑store). The ring ensures even data distribution and leader election for HA tracking.
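With the memberlist backend, each instance only needs a few peers to join the gossip cluster; a minimal sketch (the hostnames are assumptions, 7946 is the default memberlist port):

```yaml
memberlist:
  join_members:
    - mimir-1:7946   # hostnames are assumptions
    - mimir-2:7946
    - mimir-3:7946
```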
Object Storage
Mimir stores blocks, recording rules, and alertmanager state in an external object store (S3, GCS, Azure Blob, Swift). Example S3 configuration:
```yaml
common:
  storage:
    backend: s3
    s3:
      endpoint: s3.us-east-2.amazonaws.com
      region: us-east-2
      access_key_id: "${AWS_ACCESS_KEY_ID}"
      secret_access_key: "${AWS_SECRET_ACCESS_KEY}"
blocks_storage:
  s3:
    bucket_name: mimir-blocks
alertmanager_storage:
  s3:
    bucket_name: mimir-alertmanager
ruler_storage:
  s3:
    bucket_name: mimir-ruler
```

Bucket names for blocks, alertmanager, and ruler must be unique.
Distributed HA Tracker
When multiple Prometheus instances scrape the same targets, the distributor can deduplicate samples using an HA tracker. Each Prometheus instance must expose two global labels: cluster (e.g., prom-team1) and __replica__ (e.g., replica1). The distributor stores leader information in a KV store (e.g., Consul) and elects a leader per cluster. If the leader stops sending data, a new leader is elected after a timeout.
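The two sides of this setup can be sketched as follows. On each Prometheus replica, set the labels from the text as external labels; on the Mimir side, enable the HA tracker (the Consul address is an assumption):

```yaml
# Prometheus replica configuration
global:
  external_labels:
    cluster: prom-team1
    __replica__: replica1   # replica2 on the second instance
```

```yaml
# Mimir configuration
limits:
  accept_ha_samples: true
distributor:
  ha_tracker:
    enable_ha_tracker: true
    kvstore:
      store: consul
      consul:
        host: consul:8500   # address is an assumption
```

The distributor drops the `__replica__` label from accepted samples, so the deduplicated series look as if a single Prometheus had written them.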
Retention Period
By default Mimir retains data forever. A retention period can be set, for example:
```yaml
limits:
  compactor_blocks_retention_period: 1y
```

Deployment Modes
Mimir can run in:
Monolithic – all components run in one binary.
Microservices – each component runs as a separate process, allowing independent scaling.
The tutorial uses the monolithic mode for simplicity.
Data Flow
Write path:
Prometheus → Remote‑write → Distributor → Ingester → Object store.
Read path:
Querier → Ingester (recent data) + Store‑gateway (historical blocks) → Returns to Prometheus/Grafana.