
Mastering Kubernetes Metrics Server: Deploy, Configure, and Optimize

This guide explains what the Kubernetes Metrics Server is, its key use cases such as autoscaling, health checks, and cost optimization, and provides step‑by‑step Helm commands to install, configure, and verify it in a cloud‑native cluster.

Linux Ops Smart Journey

In today's fast‑evolving cloud‑native landscape, Kubernetes has become the de facto standard for container orchestration, making application performance management critical.

Metrics Server is a scalable, efficient source of container resource metrics that powers Kubernetes' built‑in autoscaling pipelines. It is not deployed by default in many clusters and must be installed separately.

It collects CPU and memory usage from each Kubelet and exposes it through the Metrics API (metrics.k8s.io), which is registered with the main API server via the aggregation layer. This feeds tools such as the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA), and also backs kubectl top for quick debugging.

Tip: Metrics Server is intended solely for autoscaling; do not use it as a general‑purpose monitoring source. For monitoring, collect metrics from the Kubelet's /metrics/resource endpoint directly.
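A quick way to see what the Kubelet exposes is to proxy to that endpoint through the API server (the node name below is illustrative; substitute one from `kubectl get nodes`):

```shell
# Fetch a node's raw resource metrics (Prometheus text format) via the
# API server proxy; "k8s-master01" is an illustrative node name.
kubectl get --raw "/api/v1/nodes/k8s-master01/proxy/metrics/resource" | head
```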

Use Cases

Automatic scaling: Works with HPA to adjust replica counts based on real‑time load.

Health checks: Periodically verifies resource consumption of critical services to spot potential issues.

Cost optimization: Analyzes long‑running workloads to identify over‑provisioned resources and reduce expenses.
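For the autoscaling case, the HPA controller scales by the ratio of observed to target utilization: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), per the Kubernetes HPA documentation. A small sketch with illustrative numbers:

```shell
# HPA scaling formula: desired = ceil(current_replicas * observed / target)
current_replicas=3
current_cpu=90   # observed average CPU utilization (%)
target_cpu=60    # HPA target utilization (%)
# Integer ceiling division: (a + b - 1) / b
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "desired replicas: $desired"   # → desired replicas: 5
```

In practice such an HPA can be created with, for example, `kubectl autoscale deployment <name> --cpu-percent=60 --min=2 --max=10` (the deployment name and bounds here are illustrative).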

Deploying Metrics Server

1. Add the Helm repository:

<code>$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
"metrics-server" has been added to your repositories</code>

2. Pull the chart and push it to a private registry:

<code>$ helm pull metrics-server/metrics-server --version 3.12.1

$ helm push metrics-server-3.12.1.tgz oci://core.jiaxzeng.com/plugins
Pushed: core.jiaxzeng.com/plugins/metrics-server:3.12.1
Digest: sha256:1d3328b3dc37ad8540bf67b6c92cbaa8c80eed3cafdb88e1fe7c4f5a1df334fe</code>
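Note that `helm push` uploads only the chart; the container image referenced later in the values file must be mirrored separately. A sketch, assuming the v0.7.1 tag matches chart 3.12.1's appVersion (confirm with `helm show chart` first):

```shell
# Mirror the metrics-server image into the private registry so the
# repository overridden in the values file actually exists.
# (Tag v0.7.1 is an assumption; confirm the chart's appVersion first.)
docker pull registry.k8s.io/metrics-server/metrics-server:v0.7.1
docker tag  registry.k8s.io/metrics-server/metrics-server:v0.7.1 \
            core.jiaxzeng.com/library/metrics-server:v0.7.1
docker push core.jiaxzeng.com/library/metrics-server:v0.7.1
```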

3. Pull the chart from the registry:

<code>$ sudo helm pull oci://core.jiaxzeng.com/plugins/metrics-server --version 3.12.1 --untar --untardir /etc/kubernetes/addons/
Pulled: core.jiaxzeng.com/plugins/metrics-server:3.12.1
Digest: sha256:1d3328b3dc37ad8540bf67b6c92cbaa8c80eed3cafdb88e1fe7c4f5a1df334fe</code>

4. Create the configuration file:

<code>$ cat <<'EOF' | sudo tee /etc/kubernetes/addons/metrics-server-value.yaml > /dev/null
# Use the image mirrored in the private registry instead of registry.k8s.io
image:
  repository: core.jiaxzeng.com/library/metrics-server

# Skip verification of kubelet serving certificates; needed when kubelets use
# self-signed certs. In production, prefer properly signed kubelet certificates.
args:
  - --kubelet-insecure-tls
EOF</code>
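To see what else can be overridden, the chart's defaults can be listed from the repository alias added in step 1:

```shell
# List the chart's configurable defaults (image, args, resources, ...)
helm show values metrics-server/metrics-server --version 3.12.1 | head -40
```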

5. Install the server:

<code>$ helm -n kube-system install metrics-server -f /etc/kubernetes/addons/metrics-server-value.yaml /etc/kubernetes/addons/metrics-server
NAME: metrics-server
LAST DEPLOYED: Mon Oct 7 19:42:10 2024
NAMESPACE: kube-system
STATUS: deployed
...</code>
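Metrics Server registers itself as an aggregated API; before `kubectl top` can work, that APIService must report as available. A quick check:

```shell
# The AVAILABLE column should read "True" once metrics-server is serving
kubectl get apiservice v1beta1.metrics.k8s.io
```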

6. Verify the deployment:

<code>$ kubectl get pod -n kube-system -l app.kubernetes.io/name=metrics-server
NAME                     READY   STATUS    RESTARTS   AGE
metrics-server-xxxxx    1/1     Running   0          2m12s

$ kubectl top nodes
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01 294m        14%    1935Mi          50%
...</code>
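The same data can also be pulled straight from the Metrics API, which helps when debugging whether the aggregation layer is wired up correctly:

```shell
# Raw NodeMetrics from the aggregated API, plus pod usage sorted by CPU
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl top pod -A --sort-by=cpu
```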

References

https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-server

https://github.com/kubernetes-sigs/metrics-server

By following these steps, you can use Metrics Server to power autoscaling, improve stability, and reduce operational costs in your Kubernetes clusters.

Tags: Cloud Native, Kubernetes, Helm, Horizontal Pod Autoscaler, metrics-server