How to Visualize Kubernetes Namespace Resource Usage with Prometheus
This guide walks through deploying kube-state-metrics, configuring Prometheus to collect CPU, memory, and other resource metrics per Kubernetes namespace, visualizing ResourceQuota and LimitRange data, and verifying collection with Helm, Docker, and curl, giving you comprehensive visibility into cluster health.
Why resource monitoring matters in Kubernetes
In Kubernetes, CPU, memory, and storage are shared across the whole cluster; a single namespace that exhausts them can starve neighboring workloads, trigger OOM kills, and crash pods. Per-namespace visibility is the first step toward preventing that.
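ResourceQuota and LimitRange are the guardrails the dashboards in this guide visualize: a ResourceQuota caps aggregate requests and limits per namespace, while a LimitRange sets per-container defaults. A minimal sketch (the dev namespace and the numbers are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev            # illustrative namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU requests allowed in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi

kube-state-metrics exports quotas like this as the kube_resourcequota series queried later in this guide.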
Deploy kube-state-metrics service
Install the kube-state-metrics service, which exposes the object-state metrics (resource requests, limits, and quotas) needed for pod monitoring; actual usage metrics come from cAdvisor, which the kubelet already exposes.
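The two sources are complementary: kube-state-metrics reports what workloads declare, cAdvisor reports what they actually consume. A few of the series this guide relies on (label sets trimmed; the dev namespace is illustrative):

# kube-state-metrics: declared state (requests, limits, quota usage)
kube_pod_container_resource_requests{namespace="dev", resource="cpu", unit="core"}
kube_pod_container_resource_limits{namespace="dev", resource="memory", unit="byte"}
kube_resourcequota{namespace="dev", resource="requests.cpu", type="used"}
# cAdvisor (via the kubelet): actual consumption
container_cpu_usage_seconds_total{namespace="dev"}
container_memory_working_set_bytes{namespace="dev"}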
Step 1: Download Helm chart and Docker image
# Download Chart
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts --force-update
helm pull prometheus-community/kube-state-metrics --version 5.26.0
# Download image
docker pull registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0

Step 2: Push chart and image to Harbor
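Pushing to a private Harbor project requires authenticating both clients first, if you have not already done so (core.jiaxzeng.com is the Harbor instance used throughout this guide):

# Log in to Harbor for image and OCI chart pushes
docker login core.jiaxzeng.com
helm registry login core.jiaxzeng.com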
# Push Chart
helm push kube-state-metrics-5.26.0.tgz oci://core.jiaxzeng.com/plugins
# Push image
docker tag registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0 core.jiaxzeng.com/library/monitor/kube-state-metrics:v2.13.0
docker push core.jiaxzeng.com/library/monitor/kube-state-metrics:v2.13.0

Step 3: Deploy the service
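Create the target namespace if it does not already exist:

kubectl create namespace obs-system

The values file overrides the image location so the chart pulls from Harbor instead of registry.k8s.io: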
cat kube-state-metrics-values.yaml
fullnameOverride: kube-state-metrics
image:
  registry: core.jiaxzeng.com
  repository: library/monitor/kube-state-metrics
helm -n obs-system install kube-state-metrics -f kube-state-metrics-values.yaml oci://core.jiaxzeng.com/plugins/kube-state-metrics --version 5.26.0

Prometheus configuration for pod metrics
job_name: "k8s/cadvisor"
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
metrics_path: /metrics/cadvisor
relabel_configs:
- regex: __meta_kubernetes_node_label_(.+)
action: labelmap
job_name: kube-state-metrics
kubernetes_sd_configs:
- role: service
relabel_configs:
- action: keep
source_labels: [__meta_kubernetes_namespace,__meta_kubernetes_service_name,__meta_kubernetes_service_port_name]
regex: obs-system;kube-state-metrics;httpValidate collection
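If these jobs were appended to a running server, reload Prometheus before checking (a sketch, assuming the server was started with --web.enable-lifecycle and serves under the same /prometheus URL prefix used below):

# Ask Prometheus to re-read prometheus.yml without a restart
curl -X POST -u admin $(kubectl -n kube-system get svc prometheus -ojsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')/prometheus/-/reload

The checks below hit the service's ClusterIP, so run them from inside the cluster network; -u admin prompts for the basic-auth password, and a status of "1" means the target is up.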
# Verify cAdvisor metrics
curl -s -u admin $(kubectl -n kube-system get svc prometheus -ojsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')/prometheus/api/v1/query --data-urlencode 'query=up{job=~"k8s/cadvisor"}' | jq '.data.result[] | {job: .metric.job, instance: .metric.instance, status: .value[1]}'
# Verify kube-state-metrics
curl -s -u admin $(kubectl -n kube-system get svc prometheus -ojsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')/prometheus/api/v1/query --data-urlencode 'query=up{job=~"kube-state-metrics"}' | jq '.data.result[] | {job: .metric.job, instance: .metric.instance, status: .value[1]}'

Add monitoring dashboard
Import a Grafana dashboard to visualize the collected metrics.
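If you prefer to build panels by hand, queries along these lines cover per-namespace usage and quota utilization (a sketch; the image!="" filter, which drops pause-container and aggregate series, may need tuning for your runtime):

# CPU cores consumed per namespace (5m rate)
sum by (namespace) (rate(container_cpu_usage_seconds_total{image!=""}[5m]))
# Memory working set per namespace, in bytes
sum by (namespace) (container_memory_working_set_bytes{image!=""})
# Quota utilization: used / hard, per namespace and resource
kube_resourcequota{type="used"} / ignoring(type) kube_resourcequota{type="hard"}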
Conclusion
Accurate namespace-level resource monitoring is essential for maintaining cluster stability and quickly responding to issues.