How to Deploy OpenTelemetry, Grafana Tempo, and Jaeger with Docker Compose for End-to-End Tracing
This guide walks through setting up a complete tracing pipeline with OpenTelemetry, Grafana Tempo, and Jaeger on Docker Compose. It covers Tempo installation, collector configuration, sample-application deployment, and Grafana integration for visualizing traces, with code snippets and step-by-step commands.
Tempo Overview
Grafana Tempo is an open‑source distributed tracing backend that stores and queries trace data at low cost. It is compatible with Jaeger, Zipkin and OpenTelemetry protocols, so existing instrumentation does not need changes.
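Because Tempo speaks these protocols natively, its distributor can accept Jaeger and Zipkin traffic alongside OTLP. The following is a sketch of such a receiver fragment (the ports shown are the protocol defaults; adjust as needed):

```yaml
distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
    jaeger:
      protocols:
        thrift_http:
          endpoint: 0.0.0.0:14268   # Jaeger collector HTTP default port
    zipkin:
      endpoint: 0.0.0.0:9411        # Zipkin default port
```

Existing Jaeger or Zipkin clients can then keep sending to their usual ports while Tempo stores everything in one backend.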
Official docs: https://grafana.com/docs/tempo/latest/
Deployment
Docker Compose definition
Create a docker-compose.yaml that runs Grafana, Tempo, an OpenTelemetry Collector, and the Jaeger Hot-Rod example on a shared trace network.
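The compose file bind-mounts several host directories. Creating them up front avoids permission surprises (a sketch; the paths simply mirror the volumes entries used in this guide):

```shell
# Create the host directories that the compose file bind-mounts.
mkdir -p grafana/data tempo/tempo-data otel

# Grafana runs as UID 472 inside the container and needs write access
# to its data directory (chown 472:472 on Linux is the stricter option).
chmod -R a+w grafana/data
```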
services:
  grafana:
    container_name: grafana
    image: grafana/grafana-oss:12.2.0-17142428006
    restart: always
    ports:
      - "3000:3000"
    volumes:
      - ./grafana/data:/var/lib/grafana
    environment:
      GF_SERVER_ROOT_URL: http://localhost:3000/
    networks:
      - trace

  tempo:
    image: grafana/tempo:r224-3f5070b
    container_name: tempo
    restart: unless-stopped
    ports:
      - "3200:3200"
    volumes:
      - ./tempo/tempo-config.yml:/etc/tempo/config.yml
      - ./tempo/tempo-data:/tmp/tempo
    command: ["-config.file=/etc/tempo/config.yml"]
    networks:
      - trace

  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.136.0
    container_name: otel-collector
    restart: always
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
    volumes:
      - ./otel/otel-collector-config.yml:/etc/otelcol-contrib/config.yaml
    networks:
      - trace

  jaeger-example-hotrod:
    image: jaegertracing/example-hotrod:1.72.0
    container_name: jaeger-example-hotrod
    restart: always
    command: ["all", "--otel-exporter=otlp"]
    ports:
      - "18080-18083:8080-8083"
    environment:
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318
    depends_on:
      - otel-collector
    networks:
      - trace

networks:
  trace:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"

Tempo configuration (tempo-config.yml)
server:
  http_listen_port: 3200

distributor:
  receivers:
    otlp:
      protocols:
        http:
          endpoint: 0.0.0.0:4318
        grpc:
          endpoint: 0.0.0.0:4317

compactor:
  compaction:
    block_retention: 72h   # retention is configured on the compactor, not under storage

storage:
  trace:
    backend: local
    local:
      path: /tmp/tempo

OpenTelemetry Collector configuration (otel-collector-config.yml)
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

processors:
  resource:
    attributes:
      - key: env
        value: "dev"
        action: upsert

exporters:
  otlphttp:
    endpoint: "http://tempo:4318"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resource]
      exporters: [otlphttp]

Running the stack
Start all services in detached mode:

docker compose up -d

Grafana is reachable at http://localhost:3000 (default credentials admin/admin). Add a new data source of type Tempo with the URL http://tempo:3200. Save and test the connection.
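Instead of clicking through the UI, the Tempo data source can also be provisioned from a file. A sketch, assuming the file is mounted into the Grafana container at /etc/grafana/provisioning/datasources/tempo.yml:

```yaml
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo:3200
    isDefault: true
```

Grafana loads provisioning files at startup, so the data source appears without any manual setup.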
Observing traces
The Jaeger Hot‑Rod example generates trace data that is sent to the collector, forwarded to Tempo, and visualised in Grafana’s “Drill‑down” view. Open the Hot‑Rod UI at http://localhost:18080, interact with the sample endpoints, then switch to Grafana → Explore → Tempo to query traces by service name or trace ID.
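To verify the pipeline without touching Hot-Rod, you can hand-craft a minimal OTLP/HTTP span and post it to the collector. A sketch, assuming the stack from this guide is running on localhost; the service name curl-test and span name are arbitrary:

```shell
# Build random W3C-style trace/span IDs (32 and 16 hex characters).
TRACE_ID=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')
SPAN_ID=$(od -An -N8 -tx1 /dev/urandom | tr -d ' \n')
NOW=$(date +%s)000000000   # nanosecond timestamp for start/end

# Minimal OTLP JSON payload: one resource, one span.
cat > /tmp/span.json <<EOF
{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"curl-test"}}]},"scopeSpans":[{"spans":[{"traceId":"$TRACE_ID","spanId":"$SPAN_ID","name":"manual-test-span","kind":1,"startTimeUnixNano":"$NOW","endTimeUnixNano":"$NOW"}]}]}]}
EOF

# Post the span to the collector's OTLP HTTP endpoint.
curl -s -X POST http://localhost:4318/v1/traces \
  -H 'Content-Type: application/json' \
  --data-binary @/tmp/span.json || echo "collector not reachable"

echo "search Grafana for trace id: $TRACE_ID"
```

The trace ID printed at the end can be pasted directly into the Tempo query field in Grafana Explore.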
Notes
Tempo uses local storage in this example; for production replace backend: local with an object store such as S3 or MinIO.
The collector disables TLS (insecure: true) for simplicity; enable proper certificates in a secure environment.
All containers share the trace Docker bridge network, ensuring name‑based service discovery (e.g., tempo, otel-collector).
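For a production object-store backend, Tempo's storage block might look like the following sketch (the bucket name, the minio endpoint, and the credentials are assumptions; point them at your own store):

```yaml
storage:
  trace:
    backend: s3
    s3:
      bucket: tempo-traces     # assumed bucket name
      endpoint: minio:9000     # assumed MinIO service on the trace network
      access_key: tempo        # assumed credentials; prefer env vars or secrets
      secret_key: supersecret
      insecure: true           # MinIO without TLS; disable for real S3
```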
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.