
How to Auto‑Scale Non‑CPU Apps with cAdvisor Network Metrics in Kubernetes

This guide explains how to use cAdvisor‑provided container network traffic counters as custom metrics for Kubernetes HPA, covering metric collection, Prometheus‑adapter configuration, verification, and a complete HPA testing workflow for elastic scaling of non‑CPU‑intensive workloads.

Raymond Ops

Abstract: For workloads that are neither CPU- nor memory-bound, we want to drive the Horizontal Pod Autoscaler (HPA) from traffic metrics, but most applications lack Prometheus SDK instrumentation. cAdvisor's container network traffic counters fill the gap, enabling elastic scaling through traffic peaks and troughs.

Background

The workload is neither CPU- nor memory-bound, so the usual resource-based HPA signals are poor proxies for load. We want to scale on traffic instead, but most of our applications are not instrumented with a Prometheus SDK. cAdvisor's container network traffic counters, which kubelet exposes for every pod, provide a traffic signal without touching application code.

Solution Overview

cAdvisor collects resource statistics for containers and the node, is built into kubelet, and exposes them via the /metrics/cadvisor endpoint. It provides two cumulative network byte counters:

container_network_receive_bytes_total – total bytes received by the container.

container_network_transmit_bytes_total – total bytes transmitted by the container.

Both metrics are of type counter, so their values only ever increase. To turn them into a rate-based custom metric, compute the per-second rate (for example with PromQL's rate()) and optionally divide by 1000 to express the result in kilobytes per second.
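The arithmetic behind that conversion can be sketched in plain shell with two hypothetical counter samples (the byte values and the 60 s interval are made up; PromQL's rate() performs the same division, plus counter-reset handling):

```shell
# Two hypothetical samples of container_network_receive_bytes_total,
# taken 60 seconds apart. Counters are cumulative, so the difference
# divided by the interval gives bytes per second.
t1=1000000
t2=1420000
interval=60
rate_kb=$(( (t2 - t1) / interval / 1000 ))  # /1000 -> kilobytes per second
echo "${rate_kb} kB/s"   # prints "7 kB/s"
```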

Practical Steps

3.1 Install Prometheus‑related plugins

Use Huawei Cloud CCE and install the kube-prometheus-stack add-on from the plugin market, which already scrapes node-level cAdvisor metrics.
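Before wiring up the adapter, it is worth confirming the counters are actually exposed. One option (assuming kubectl access to the cluster; the node selection here is simply "first node") is to read kubelet's cAdvisor endpoint through the API-server proxy:

```shell
# Pull raw cAdvisor metrics for one node and keep only the two
# network byte counters this guide relies on.
node=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl get --raw "/api/v1/nodes/${node}/proxy/metrics/cadvisor" \
  | grep -E '^container_network_(receive|transmit)_bytes_total' \
  | head -n 5
```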

3.2 Configure Prometheus‑adapter metric conversion rules

Edit the adapter configmap:

kubectl -n monitoring edit configmap user-adapter-config
<code>- seriesQuery: 'container_network_receive_bytes_total{namespace!="",pod!=""}'
  seriesFilters: []
  resources:
    overrides:
      namespace:
        resource: namespace
      pod:
        resource: pod
  name:
    matches: container_(.*)_total
    as: "pod_${1}_per_second"
  metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[3m])) by (<<.GroupBy>>)/1000

- seriesQuery: 'container_network_transmit_bytes_total{namespace!="",pod!=""}'
  seriesFilters: []
  resources:
    overrides:
      namespace:
        resource: namespace
      pod:
        resource: pod
  name:
    matches: container_(.*)_total
    as: "pod_${1}_per_second"
  metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[3m])) by (<<.GroupBy>>)/1000
</code>

After editing, restart the custom-metrics-apiserver Deployment in the monitoring namespace so the adapter reloads the rules:

kubectl -n monitoring rollout restart deployment custom-metrics-apiserver

The metricsQuery computes the per-second rate over the last three minutes. Because the source counters only ever increase, the rate() conversion is required; dividing by 1000 converts the result from bytes per second to kilobytes per second.

The resources section maps the Prometheus namespace and pod labels to the corresponding Kubernetes objects, and the name section renames the metric for readability.
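The rename rule can be checked offline; the same regex rewrite the adapter applies can be reproduced with sed:

```shell
# Reproduce the adapter's name rule: container_(.*)_total -> pod_${1}_per_second
echo "container_network_receive_bytes_total" \
  | sed -E 's/^container_(.*)_total$/pod_\1_per_second/'
# prints pod_network_receive_bytes_per_second
```

This is how container_network_receive_bytes_total becomes the pod_network_receive_bytes_per_second metric queried in the next step.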

3.3 Verify the custom metric

Query the metric via the custom metrics API:

kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/pod_network_receive_bytes_per_second" | jq

The returned JSON shows the metric name and its current value. The same metric can be viewed in the CCE console under custom metrics.
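The response shape looks roughly like the following (a trimmed, hypothetical example; the field names follow the custom.metrics.k8s.io/v1beta1 MetricValueList type), and the per-pod value can be pulled out even without jq:

```shell
# Hypothetical MetricValueList response, trimmed to the interesting fields.
response='{"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","items":[{"describedObject":{"kind":"Pod","name":"app07-6b9f"},"metricName":"pod_network_receive_bytes_per_second","value":"7"}]}'
# Extract the value (kB/s, after the adapter's /1000) with plain sed.
echo "$response" | sed -n 's/.*"value":"\([^"]*\)".*/\1/p'   # prints 7
```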

3.4 Test HPA scaling

Create an HPA manifest that uses the custom metric:

<code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-app07
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app07
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: pod_network_receive_bytes_per_second
      target:
        type: AverageValue
        averageValue: 10   # kilobytes per second, given the /1000 in metricsQuery
</code>
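To anticipate what the controller will do, the HPA's core formula for a Pods metric is desiredReplicas = ceil(currentReplicas × currentMetricValue / target). A quick sketch with hypothetical numbers (2 replicas observing an average of 25 kB/s against the 10 kB/s target):

```shell
# HPA scaling math: desired = ceil(current * metric / target).
current=2    # current replica count
metric=25    # observed average pod_network_receive_bytes_per_second (kB/s)
target=10    # averageValue from the HPA spec
desired=$(( (current * metric + target - 1) / target ))  # integer ceiling
echo "$desired"   # prints 5
```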

Generate traffic with a simple loop:

while true; do curl clusterIP:port; done

Observe the HPA in real time:

kubectl get hpa hpa-app07 -w

As the network‑receive‑bytes‑per‑second metric rises, the number of pod replicas increases until the maximum is reached. When the load stops, the HPA scales the deployment back down to a single replica.

Supplementary Information

How to view container network traffic metrics in the CCE console: both workload-level and pod-level network traffic metric displays are available there, and the Cloud-Native Monitoring dashboard → Pod view can be used to compare the custom metric calculation against the built-in metrics.

Tags: Kubernetes, Prometheus, Scaling, HPA, Custom Metrics, cAdvisor
Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.