Master Process Exporter: Deploy, Integrate with Prometheus & Grafana in Kubernetes
This guide walks Kubernetes administrators through the full lifecycle of Process Exporter—from lightweight deployment and RBAC setup, through Prometheus Operator integration and Grafana dashboard creation, to detailed configuration and alerting—enabling precise process‑level monitoring and rapid root‑cause analysis.
Introduction
As a Kubernetes administrator you may face invisible CPU spikes, rogue container processes, or the absence of complete process‑level monitoring. This article guides you through the entire Process Exporter workflow, covering basic deployment, Prometheus integration, Grafana visualization, and alert rule configuration, and is suitable for beginners.
1. Getting Started with Process Exporter
1.1 What is Process Exporter?
Well established : a widely used exporter in the Prometheus ecosystem, maintained as ncabatoff/process-exporter
Lightweight : 15 MB image, supports container and host process monitoring
Core capabilities
Process CPU/Memory usage
File descriptor count
Thread count and runtime
Regex‑based process filtering
1.2 Why use it?
Compared with Node Exporter’s node‑level metrics, Process Exporter provides process‑level metrics, down to individual PIDs. It captures CPU, memory, thread, and file‑descriptor data, making it ideal for pinpointing abnormal processes, monitoring Java GC behavior, and analyzing MySQL connection‑pool exhaustion.
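For example, a single PromQL query can surface the heaviest process groups cluster‑wide (metric and label names below assume the default process‑exporter configuration shown later in this guide):
<code># Top 5 process groups by CPU usage over the last 5 minutes
topk(5, sum by (groupname) (rate(namedprocess_namegroup_cpu_seconds_total[5m])))</code>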
1.3 Deployment controller selection
Full node coverage : ensure every Worker node runs a monitoring instance
Hybrid monitoring : collect both container processes and host services such as kubelet and sshd
Resource‑usage optimization : avoid resource waste caused by multiple Deployment replicas
2. Quick Deployment of Process Exporter
2.1 Architecture diagram
2.2 Deployment YAML template
<code># 1. Create RBAC permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: process-exporter
rules:
  - apiGroups: [""]
    resources: ["nodes/proxy"]
    verbs: ["get", "list", "watch"]
---
# 2. Configure DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: process-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: process-exporter
  template:
    metadata:
      labels:
        app: process-exporter
    spec:
      hostPID: true        # share host PID namespace
      hostNetwork: true    # optional: share host network namespace
      tolerations:
        - operator: Exists # tolerate all taints
      containers:
        - name: process-exporter
          image: ncabatoff/process-exporter:0.7.10
          args:
            - "-procfs=/host/proc"  # host /proc path
            - "-config.path=/etc/process-exporter/config.yaml"
          volumeMounts:
            - name: config-volume
              mountPath: /etc/process-exporter/config.yaml
              subPath: config.yaml
            - name: proc
              mountPath: /host/proc
              readOnly: true
          ports:
            - containerPort: 9256
          resources:
            limits:
              cpu: "200m"
              memory: "256Mi"
          securityContext:
            capabilities:
              add:
                - SYS_PTRACE  # allow process tracing
                - SYS_ADMIN   # optional: host resource access
      volumes:
        - name: config-volume
          configMap:
            name: process-exporter-config
            items:
              - key: config.yaml
                path: config.yaml
        - name: proc
          hostPath:
            path: /proc
---
apiVersion: v1
kind: Service
metadata:
  name: process-exporter
  namespace: monitoring
  labels:
    app: process-exporter
spec:
  ports:
    - port: 9256
      targetPort: 9256
      protocol: TCP
      name: http
  selector:
    app: process-exporter</code>
2.3 ConfigMap configuration
<code>apiVersion: v1
kind: ConfigMap
metadata:
  name: process-exporter-config
  namespace: monitoring
data:
  config.yaml: |
    process_names:
      - name: "{{.Comm}}"
        cmdline:
          - '.+'</code>
2.4 Verify deployment
<code># Check Pod status
kubectl get pods -n monitoring -l app=process-exporter
# Test data collection (metrics are served on port 9256)
kubectl exec -it -n monitoring <pod-name> -- curl -s http://localhost:9256/metrics | grep namedprocess_namegroup_cpu_seconds_total</code>
3. Integration with Prometheus Monitoring System
3.1 Automatic ingestion via Prometheus Operator
Step 1: Create a ServiceMonitor.
<code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: process-exporter
  namespace: monitoring
spec:
  endpoints:
    - port: http
      interval: 15s
      path: /metrics
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_node_name]
          targetLabel: node  # auto-add node label
  namespaceSelector:
    matchNames:
      - monitoring
  selector:
    matchLabels:
      app: process-exporter</code>
Step 2: The Operator discovers the ServiceMonitor, generates the corresponding scrape configuration, and registers the matching endpoints as targets with the Prometheus server.
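For clusters that do not run the Prometheus Operator, an equivalent scrape job can be added to prometheus.yml directly. The sketch below uses standard Kubernetes service discovery; the job name is illustrative:
<code>scrape_configs:
  - job_name: process-exporter
    kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names: [monitoring]
    relabel_configs:
      # Keep only endpoints backing the process-exporter Service
      - source_labels: [__meta_kubernetes_service_label_app]
        regex: process-exporter
        action: keep
      # Attach the node name, mirroring the ServiceMonitor relabeling
      - source_labels: [__meta_kubernetes_pod_node_name]
        target_label: node</code>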
3.2 Grafana dashboards and metrics
Key metrics are prefixed with namedprocess_namegroup:
namedprocess_namegroup_cpu_seconds_total: CPU time (user vs. system)
namedprocess_namegroup_memory_bytes: Memory usage (different memtype)
namedprocess_namegroup_num_threads: Thread count
namedprocess_namegroup_open_filedesc: Open file descriptors
namedprocess_namegroup_read_bytes_total: Bytes read by the process
namedprocess_namegroup_thread_context_switches_total: Thread context switches
CPU metrics are derived from /proc/pid/stat fields (utime, stime, cutime, cstime). Example PromQL for per‑core CPU usage:
<code>increase(namedprocess_namegroup_cpu_seconds_total{mode="user",groupname="procname"}[30s]) * 100 / 30
increase(namedprocess_namegroup_cpu_seconds_total{mode="system",groupname="procname"}[30s]) * 100 / 30</code>
Memory is split into five types: resident, proportionalResident, swapped, proportionalSwapped, and virtual. For most applications, resident and virtual are the most relevant.
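To make the arithmetic concrete, here is a small Python sketch of the same calculation (an illustrative helper, not part of the exporter):

```python
def cpu_percent(prev_cpu_seconds: float, cur_cpu_seconds: float,
                window_s: float = 30.0) -> float:
    """Mirror the PromQL above: counter increase over the window,
    scaled to percent of one CPU core."""
    return (cur_cpu_seconds - prev_cpu_seconds) * 100.0 / window_s

# A process that consumed 15 CPU-seconds in a 30 s window ran at 50% of one core.
print(cpu_percent(100.0, 115.0))  # -> 50.0
```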
Process Exporter provides a ready‑to‑import Grafana dashboard:
https://grafana.com/grafana/dashboards/249-named-processes/
4. Configuration Details
Process Exporter configuration consists of exporter parameters and process‑selection rules.
-config.path : path to the configuration file
-web.listen-address : listening address (default :9256)
-web.telemetry-path : metrics endpoint, usually /metrics
Additional options include -children , -namemapping , -procfs , -procnames , and -threads .
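Putting these flags together, a typical invocation looks like the following sketch (paths are illustrative):
<code>process-exporter \
  -procfs /host/proc \
  -config.path /etc/process-exporter/config.yaml \
  -web.listen-address :9256 \
  -children=true \
  -threads=true</code>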
Process filtering uses the process_names array. Each entry defines a group with a name, which can use templates such as {{.Comm}}, {{.ExeBase}}, {{.ExeFull}}, {{.Username}}, {{.PID}}, {{.StartTime}}, and {{.Cgroups}}, plus one or more matchers. Matcher fields can be combined: comm and exe are OR arrays, while cmdline is an AND array of regexes.
<code>process_names:
  - comm:
      - bash
  - exe:
      - postgres
      - /usr/local/bin/prometheus
  - name: "{{.ExeFull}}:{{.Matches.Cfgfile}}"
    exe:
      - /usr/local/bin/process-exporter
    cmdline:
      - -config.path\s+(?P<Cfgfile>\S+)</code>
<code># Monitor NVIDIA GPU-related processes by command name
process_names:
  - name: "gpu-process"
    comm:
      - nvidia-smi
      - nvidia-persistenced</code>
5. Conclusion
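To close the loop on alerting, a minimal PrometheusRule for the Operator might look like the following sketch (the rule name and the 90%-of-one-core threshold are illustrative and should be tuned per environment):
<code>apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: process-exporter-alerts
  namespace: monitoring
spec:
  groups:
    - name: process-exporter
      rules:
        - alert: ProcessGroupHighCPU
          expr: sum by (groupname, node) (rate(namedprocess_namegroup_cpu_seconds_total[5m])) > 0.9
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Process group {{ $labels.groupname }} on {{ $labels.node }} is using more than 90% of one CPU core"</code>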
Deploying Process Exporter as a DaemonSet, together with the Prometheus Operator and Grafana dashboards, builds a comprehensive monitoring system that covers container processes, host services, and hardware resources.
Phase rollout : gradually move from testing to production.
Define monitoring SLA : set threshold values for different process levels.
Regular drills : simulate process anomalies to verify alert effectiveness.
Further reading:
Project repository and documentation: https://github.com/ncabatoff/process-exporter
Kubernetes monitoring whitepaper: https://example.com/k8s-monitoring-whitepaper
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.