
Master Alertmanager with kube‑prometheus: Step‑by‑Step Deployment & Email Alerts

This guide walks you through installing Alertmanager via the kube‑prometheus‑stack Helm chart, configuring an SMTP proxy and email notifications, customizing alert templates, and upgrading the chart to apply each change, so you get reliable, automated alerting for your Kubernetes clusters.


Why Alertmanager?

Prometheus collects metrics and Grafana visualizes them, but without Alertmanager you may miss critical alerts until services have already failed. Alertmanager acts as the brain of the Prometheus alerting ecosystem, enabling intelligent routing and management of alerts.

Install Alertmanager

Using the kube-prometheus-stack Helm chart, enable the Alertmanager component and customize its deployment:

alertmanager:
  enabled: true
  alertmanagerSpec:
    image:
      registry: core.jiaxzeng.com
      repository: obs/kube-prometheus-stack/alertmanager
      tag: v0.26.0
    externalUrl: https://ops.jiaxzeng.com/alertmanager
    routePrefix: /alertmanager
    storage:
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          storageClassName: longhorn
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: ca-cluster-issuer
    hosts:
      - ops.jiaxzeng.com
    paths:
      - /alertmanager
    tls:
      - secretName: ops.jiaxzeng.com-tls
        hosts:
          - ops.jiaxzeng.com

Upgrade the Helm release to apply the changes:

$ helm -n obs-system upgrade monitor -f /etc/kubernetes/addons/kube-prometheus-stack-values.yaml /etc/kubernetes/addons/kube-prometheus-stack
Release "monitor" has been upgraded. Happy Helming!
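
To confirm the rollout, check the Alertmanager pods and hit the built-in health endpoint through the ingress (the label selector, namespace, and URL below match the values used in this guide; adjust them if yours differ):

$ kubectl -n obs-system get pods -l app.kubernetes.io/name=alertmanager
$ curl -k https://ops.jiaxzeng.com/alertmanager/-/healthy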

Configure Email Alerts

Because production clusters often lack direct internet access, deploy a simple SMTP relay using socat that forwards traffic to the upstream mail server through an HTTP proxy (here 172.139.20.170:3888):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: smtp-proxy-network
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: socat
  template:
    metadata:
      labels:
        app: socat
    spec:
      containers:
      - name: tools
        image: core.jiaxzeng.com/library/tools:v1.3
        imagePullPolicy: IfNotPresent
        command:
        - socat
        args:
        - TCP-LISTEN:1025,fork
        - PROXY:172.139.20.170:smtp.126.com:25,proxyport=3888
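
The smarthost used below resolves smtp-proxy-network.kube-system.svc, so the Deployment needs a matching Service. A minimal sketch, with the name, namespace, selector, and port chosen to match the snippets in this guide:

apiVersion: v1
kind: Service
metadata:
  name: smtp-proxy-network
  namespace: kube-system
spec:
  selector:
    app: socat
  ports:
  - name: smtp
    port: 1025
    targetPort: 1025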

Update the Alertmanager configuration to use the proxy and define email routing:

alertmanager:
  config:
    global:
      resolve_timeout: 5m
      smtp_from: '[email protected]'
      smtp_smarthost: 'smtp-proxy-network.kube-system.svc:1025'
      smtp_require_tls: false
      smtp_auth_username: '[email protected]'
      smtp_auth_password: 'xxxx'
    route:
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      group_by: ['alertname']
      receiver: 'email'
      routes:
      - matchers:
        - alertname =~ "InfoInhibitor|Watchdog"
        receiver: 'null'
    receivers:
    - name: 'null'
    - name: 'email'
      email_configs:
      - to: '[email protected]'
        send_resolved: true
    templates:
    - '/etc/alertmanager/config/*.tmpl'
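
To verify delivery end to end, you can push a synthetic alert to Alertmanager's v2 API and watch for the email (the alert name and annotations here are arbitrary test values):

$ curl -XPOST https://ops.jiaxzeng.com/alertmanager/api/v2/alerts \
    -H 'Content-Type: application/json' \
    -d '[{"labels":{"alertname":"MailTest","severity":"warning"},"annotations":{"summary":"test alert","description":"testing email delivery"}}]'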

Customize Email Templates

Add a template file to format alert details in HTML. The .Add 28800e9 calls shift each timestamp forward by 28800e9 nanoseconds (8 hours) to convert UTC to CST:

alertmanager:
  config:
    receivers:
    - name: 'email'
      email_configs:
      - to: '[email protected]'
        html: '{{ template "email.to.html" . }}'
        send_resolved: true
  templateFiles:
    email.tmpl: |-
      {{ define "email.to.html" }}
      {{- if gt (len .Alerts.Firing) 0 -}}
      {{ range .Alerts.Firing }}
      =========start==========<br>
      Alert source: prometheus_alert <br>
      Severity: {{ .Labels.severity }} <br>
      Alert name: {{ .Labels.alertname }} <br>
      Host: {{ .Labels.instance }} <br>
      Summary: {{ .Annotations.summary }} <br>
      Description: {{ .Annotations.description }} <br>
      Fired at: {{ (.StartsAt.Add 28800e9).Format "2006-01-02 15:04:05" }} <br>
      =========end==========<br>
      {{ end }}{{ end -}}
      {{- if gt (len .Alerts.Resolved) 0 -}}
      {{ range .Alerts.Resolved }}
      =========start==========<br>
      Alert source: prometheus_alert <br>
      Severity: {{ .Labels.severity }} <br>
      Alert name: {{ .Labels.alertname }} <br>
      Host: {{ .Labels.instance }} <br>
      Summary: {{ .Annotations.summary }} <br>
      Description: {{ .Annotations.description }} <br>
      Fired at: {{ (.StartsAt.Add 28800e9).Format "2006-01-02 15:04:05" }} <br>
      Resolved at: {{ (.EndsAt.Add 28800e9).Format "2006-01-02 15:04:05" }} <br>
      =========end==========<br>
      {{ end }}{{ end -}}
      {{- end }}

Upgrade the Helm release again to apply the template changes.
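
This is the same upgrade command used earlier:

$ helm -n obs-system upgrade monitor -f /etc/kubernetes/addons/kube-prometheus-stack-values.yaml /etc/kubernetes/addons/kube-prometheus-stack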

Conclusion

After following this tutorial, you should be able to deploy Alertmanager with the kube‑prometheus‑stack, configure SMTP proxying, set up email notifications, and customize alert templates, giving you a complete end‑to‑end monitoring and alerting solution for Kubernetes.

Tags: cloud-native, Kubernetes, Prometheus, Alertmanager, Helm, email alerts
Written by

Linux Ops Smart Journey

The operations journey never stops—pursuing excellence endlessly.
