Helm vs Kustomize: Which Is the Best Practice for Managing Kubernetes Applications?
This guide compares Helm and Kustomize, detailing their design philosophies, key features, suitable scenarios, environment requirements, step‑by‑step installation and deployment procedures, best‑practice recommendations, common pitfalls, troubleshooting tips, CI/CD integration, and monitoring strategies to help teams choose the optimal Kubernetes application management tool.
Overview
In the Kubernetes ecosystem, managing application configuration is a major challenge, especially when dozens or hundreds of services need different settings across development, testing, and production environments. Helm and Kustomize are the two most popular tools, each based on a distinct design philosophy: Helm uses templating and package management, while Kustomize relies on declarative patches and overlays.
Technical Characteristics
Helm features:
Go Template engine with high flexibility for complex logic and variable substitution.
Chart package management similar to apt/yum, enabling versioned publishing and sharing.
Built‑in release management and rollback capabilities for application lifecycle.
Kustomize features:
Declarative configuration without templates; uses YAML patches to express differences while keeping original files readable.
Built into kubectl (1.14+), no extra installation required; gentle learning curve.
Base + Overlay layered design, naturally suited for multi‑environment configuration management.
Applicable Scenarios
Helm suitable scenarios:
Publishing reusable application packages such as databases or monitoring components.
Complex configuration requiring conditional logic, loops, or advanced templating.
Need for built‑in versioning, rollback, and upgrade capabilities.
Kustomize suitable scenarios:
Multi‑environment deployments (Dev/Staging/Prod) where each environment only has minor differences.
Teams that prefer native YAML readability and want to avoid template complexity.
GitOps workflows (ArgoCD, Flux) where configuration is treated as code.
Environment Requirements
Kubernetes version 1.20+ (Kustomize needs 1.14+, recommended 1.20+).
Helm 3.0+ (Tiller removed, architecture simplified).
Kustomize is built into kubectl 1.14+; a standalone binary can also be installed.
kubectl version should match the cluster version, preferably within one minor version.
Detailed Steps
Preparation
Install Helm
# Method 1: script (recommended)
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Method 2: package manager
brew install helm
# Debian/Ubuntu
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
# Verify installation
helm version
# Add common chart repositories
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Install Kustomize
# Kustomize is built into kubectl 1.14+
kubectl version --client
kubectl kustomize --help
# Install standalone version for latest features
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
sudo mv kustomize /usr/local/bin/
# Verify installation
kustomize version
Create Example Application
# Create project directories
mkdir -p k8s-demo/{helm,kustomize}
cd k8s-demo
# Simple Nginx service (base-deployment.yaml and Service)
cat > base-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "200m"
EOF
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF
Helm Hands‑On
Create Helm Chart
# Scaffold a chart
cd helm
helm create nginx-chart
# Chart directory layout
# ├── Chart.yaml # metadata
# ├── values.yaml # default values
# ├── charts/ # sub‑chart dependencies
# └── templates/ # Kubernetes manifests
# ├── deployment.yaml
# ├── service.yaml
# └── ...
Edit Chart.yaml
apiVersion: v2
name: nginx-chart
description: A Helm chart for Nginx application
type: application
version: 1.0.0
appVersion: "1.21"
keywords:
- nginx
- web
maintainers:
- name: DevOps Team
  email: [email protected]
Define values.yaml
# Default values
replicaCount: 2
image:
  repository: nginx
  tag: "1.21"
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 80
resources:
  requests:
    memory: "64Mi"
    cpu: "100m"
  limits:
    memory: "128Mi"
    cpu: "200m"
ingress:
  enabled: false
  className: nginx
  annotations: {}
  hosts:
  - host: nginx.example.com
    paths:
    - path: /
      pathType: Prefix
  tls: []
env:
  name: production
  domain: example.com
configMap:
  enabled: false
  data: {}
autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
Helm Templates (deployment.yaml example)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "nginx-chart.fullname" . }}
  labels:
    {{- include "nginx-chart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "nginx-chart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "nginx-chart.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if .Values.configMap.enabled }}
          volumeMounts:
            - name: config
              mountPath: /etc/nginx/conf.d
          {{- end }}
      {{- if .Values.configMap.enabled }}
      volumes:
        - name: config
          configMap:
            name: {{ include "nginx-chart.fullname" . }}
      {{- end }}
Helm Service Template (service.yaml example)
apiVersion: v1
kind: Service
metadata:
  name: {{ include "nginx-chart.fullname" . }}
  labels:
    {{- include "nginx-chart.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "nginx-chart.selectorLabels" . | nindent 4 }}
Helm Deployment Commands
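The commands below reference values-dev.yaml and values-prod.yaml, which helm create does not generate. A minimal sketch of what values-dev.yaml might contain (the keys mirror values.yaml above; the specific overrides are illustrative):

```yaml
# nginx-chart/values-dev.yaml — illustrative; list only the keys that differ from values.yaml
replicaCount: 1
image:
  tag: "1.21-alpine"
resources:
  requests:
    memory: "32Mi"
    cpu: "50m"
env:
  name: development
```

Because Helm deep-merges each -f file over the defaults, an environment file stays small and readable when it carries only the deltas.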
# Lint the chart
helm lint nginx-chart/
# Render templates for inspection (no actual deployment)
helm template nginx-chart/ -f nginx-chart/values-dev.yaml
# Deploy to development environment
helm install nginx-dev nginx-chart/ \
-f nginx-chart/values-dev.yaml \
--namespace dev \
--create-namespace
# Deploy to production environment
helm install nginx-prod nginx-chart/ \
-f nginx-chart/values-prod.yaml \
--namespace prod \
--create-namespace
# List releases
helm list -A
# Upgrade
helm upgrade nginx-prod nginx-chart/ -f nginx-chart/values-prod.yaml --namespace prod
# Rollback
helm rollback nginx-prod 1 --namespace prod
# Uninstall
helm uninstall nginx-prod --namespace prod
Kustomize Hands‑On
Create Kustomize Structure
# Directory layout
mkdir -p base overlays/{dev,staging,prod}
# Base contains common resources
# overlays contain environment‑specific patches
Base Resources
# base/deployment.yaml (same as the simple Nginx deployment shown earlier)
# base/service.yaml (same as the Service shown earlier)
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
commonLabels:
  app: nginx
  managed-by: kustomize
commonAnnotations:
  version: "1.0.0"
images:
- name: nginx
  newTag: "1.21"
Overlay for Development
# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: # replaces the deprecated bases field
- ../../base
namespace: dev
namePrefix: dev-
commonLabels:
  environment: dev
replicas:
- name: nginx
  count: 1
images:
- name: nginx
  newTag: "1.21-alpine"
patches:
- target:
    kind: Deployment
    name: nginx
  patch: |-
    - op: replace
      path: /spec/template/spec/containers/0/resources/requests/memory
      value: "32Mi"
    - op: replace
      path: /spec/template/spec/containers/0/resources/requests/cpu
      value: "50m"
Overlay for Production
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: # replaces the deprecated bases field
- ../../base
- ingress.yaml
- hpa.yaml
namespace: prod
namePrefix: prod-
commonLabels:
  environment: prod
replicas:
- name: nginx
  count: 5
images:
- name: nginx
  newTag: "1.21"
patches:
- target:
    kind: Deployment
    name: nginx
  patch: |-
    - op: replace
      path: /spec/template/spec/containers/0/resources/requests/memory
      value: "128Mi"
    - op: replace
      path: /spec/template/spec/containers/0/resources/limits/memory
      value: "256Mi"
Production Ingress and HPA (examples)
# overlays/prod/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx # replaces the deprecated kubernetes.io/ingress.class annotation
  tls:
  - hosts:
    - nginx.example.com
    secretName: nginx-tls
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
# overlays/prod/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 5
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
Kustomize Deployment Commands
# Preview generated YAML (no actual apply)
kubectl kustomize overlays/dev/
# Deploy to development
kubectl apply -k overlays/dev/
# Deploy to production
kubectl apply -k overlays/prod/
# View resources
kubectl get all -n dev
kubectl get all -n prod
# Show differences
kubectl diff -k overlays/prod/
# Delete resources
kubectl delete -k overlays/dev/
# Build with standalone kustomize binary
kustomize build overlays/prod/ | kubectl apply -f -
Comparison and Recommendation
Design philosophy: Helm = template + package; Kustomize = declarative patches + overlays.
Learning curve: Helm is steeper due to Go template syntax; Kustomize is gentle (plain YAML).
Configuration reuse: Helm uses values.yaml parameterization; Kustomize uses base + overlay layering.
Version management: Helm provides built‑in release tracking and rollback; Kustomize relies on GitOps tools for versioning.
Package distribution: Helm publishes charts to repositories (ArtifactHub, etc.); Kustomize shares configurations via Git.
YAML readability: Helm templates can obscure raw YAML; Kustomize keeps native YAML, high readability.
Conditional logic: Helm supports if/else/range; Kustomize achieves conditions by separate overlays.
Dependency management: Helm supports chart dependencies; Kustomize does not have native dependency handling.
Community ecosystem: Helm has a rich ecosystem with thousands of public charts; Kustomize community is smaller but growing.
Tool integration: Helm integrates with many CI/CD plugins and IDEs; Kustomize integrates tightly with kubectl, ArgoCD, and Flux.
Debug difficulty: Helm template errors can be hard to trace; Kustomize patch errors are more visible.
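The conditional-logic difference is easiest to see side by side. The sketch below (file names illustrative) shows the same feature toggle expressed as a Helm template condition versus the Kustomize equivalent, where the "condition" is simply which overlay you build:

```yaml
# Helm: the toggle lives inside the template
# templates/pdb.yaml
{{- if .Values.pdb.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ include "nginx-chart.fullname" . }}
spec:
  minAvailable: 1
{{- end }}

# Kustomize: no template syntax — overlays/prod/kustomization.yaml
# lists pdb.yaml under resources, while overlays/dev/ simply omits it.
```

This is the trade-off in miniature: Helm keeps one file with embedded logic; Kustomize keeps plain YAML but multiplies files per environment.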
Recommended Scenarios
Choose Helm when you need to package and distribute reusable components, require complex templating, or want built‑in release/rollback support.
Choose Kustomize for multi‑environment deployments, GitOps workflows, or when native YAML readability is a priority.
Combine both: use Helm for third‑party infrastructure charts (e.g., Prometheus, Ingress controllers) and Kustomize for custom micro‑service overlays, or render Helm templates first and then apply Kustomize patches for environment‑specific tweaks.
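One concrete way to combine them: kustomize can render a Helm chart itself via the helmCharts field (requires building with kustomize build --enable-helm), so Helm supplies the packaging while Kustomize layers environment patches on top. A sketch, assuming the bitnami nginx chart and an illustrative version pin:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: nginx
  repo: https://charts.bitnami.com/bitnami
  version: 15.1.0 # illustrative pin; use the version you have vetted
  releaseName: web
  valuesInline:
    replicaCount: 2
patches:
- target:
    kind: Deployment
  patch: |-
    - op: add
      path: /metadata/labels/environment
      value: prod
```

Build with `kustomize build --enable-helm . | kubectl apply -f -`; the chart is rendered first, then the patches apply as with any other base.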
Best Practices and Caveats
Helm Best Practices
Leverage chart dependencies for complex applications.
Validate values with a JSON schema (values.schema.json).
Use Helm hooks for lifecycle tasks such as DB migrations.
Avoid excessive template logic; prefer multiple values files for different environments.
Lock dependency versions with Chart.lock and commit it to VCS.
Never store secrets in values.yaml; use Kubernetes Secrets or external vaults.
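For the schema-validation tip above, a minimal values.schema.json for the chart in this guide might look like the sketch below; helm lint and helm install validate values against it automatically when the file sits next to values.yaml:

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["replicaCount", "image"],
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 1 },
    "image": {
      "type": "object",
      "required": ["repository"],
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" },
        "pullPolicy": { "enum": ["Always", "IfNotPresent", "Never"] }
      }
    }
  }
}
```

A typo like replicaCount: "two" then fails at lint time instead of surfacing as a cryptic API-server error.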
Kustomize Best Practices
Use components to enable/disable features.
Apply replacements for dynamic config injection.
Generate ConfigMaps and Secrets with configMapGenerator and secretGenerator.
Ensure patch paths exactly match the target resource fields.
Be aware that namePrefix / nameSuffix affect resource references.
Keep overlay hierarchy shallow to avoid complexity.
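As an example of the generator tip above: configMapGenerator builds a ConfigMap from files or literals and appends a content hash to its name, so any config change renames the object and automatically rolls the Deployment that references it. A sketch (file names illustrative):

```yaml
# overlays/prod/kustomization.yaml (excerpt)
configMapGenerator:
- name: nginx-config
  files:
  - nginx.conf # must exist next to this kustomization.yaml
  literals:
  - LOG_LEVEL=warn
secretGenerator:
- name: nginx-tls-keys
  type: kubernetes.io/tls
  files:
  - tls.crt
  - tls.key
generatorOptions:
  disableNameSuffixHash: false # keep the hash so rollouts trigger on change
```

Kustomize rewrites references to nginx-config inside the overlay's resources to the hashed name, which is why generated ConfigMaps should be consumed from within the same build.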
Common Errors and Solutions
Helm template render failure: run helm lint and helm template --debug to locate syntax issues.
Kustomize patch not applied: verify the patch target and path; use kubectl kustomize to preview.
Helm upgrade loses values: use --reuse-values or explicitly pass the required values files.
Kustomize resource ordering problems: kustomize sorts its output (Namespaces and CRDs first), but resources depending on freshly created CRDs may still need a second kubectl apply -k run.
Chart dependency version conflict: align versions across charts or use aliasing.
Troubleshooting
Helm Troubleshooting
# Check version compatibility
helm version
kubectl version
# Inspect releases
helm list -A
helm status myapp -n prod
helm history myapp -n prod
# Dry‑run and debug
helm install myapp ./mychart --dry-run --debug -f values-prod.yaml
helm template myapp ./mychart --debug -f values-prod.yaml
# View rendered manifest and values
helm get manifest myapp -n prod
helm get values myapp -n prod
helm lint ./mychart
Kustomize Troubleshooting
# Preview generated YAML
kubectl kustomize overlays/prod/
# Validate that the overlay builds cleanly (non‑zero exit on error)
kustomize build overlays/prod/ > /dev/null
# Show resource tree (kustomize v4 cfg subcommand)
kustomize cfg tree overlays/prod/
# Diff before applying
kubectl diff -k overlays/prod/
CI/CD Integration
Helm CI/CD (GitLab example)
# .gitlab-ci.yml (excerpt)
stages:
- lint
- test
- build
- deploy

variables:
  HELM_VERSION: "3.12.0"
  CHART_PATH: "./charts/myapp"

lint:chart:
  stage: lint
  image: alpine/helm:${HELM_VERSION}
  script:
  - helm lint ${CHART_PATH}
  - helm template ${CHART_PATH} --debug

test:chart:
  stage: test
  image: alpine/helm:${HELM_VERSION}
  script:
  - helm install test-release ${CHART_PATH} --dry-run --debug
  - helm unittest ${CHART_PATH} # requires the helm-unittest plugin in the image

build:chart:
  stage: build
  image: alpine/helm:${HELM_VERSION}
  script:
  - helm package ${CHART_PATH}
  - helm push myapp-*.tgz oci://${CI_REGISTRY}/charts
  only:
  - main
  - tags

deploy:production:
  stage: deploy
  image: alpine/helm:${HELM_VERSION}
  script:
  - >
    helm upgrade --install myapp oci://${CI_REGISTRY}/charts/myapp
    --version ${CI_COMMIT_TAG}
    -f values-prod.yaml
    --namespace prod --create-namespace --wait --timeout 10m
  only:
  - tags
  when: manual
  environment:
    name: production
    url: https://myapp.example.com
Kustomize CI/CD (GitHub Actions example)
# .github/workflows/deploy.yml (excerpt)
name: Deploy with Kustomize
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    - name: Setup Kustomize
      uses: imranismail/setup-kustomize@v2
    - name: Validate Overlays
      run: |
        kustomize build overlays/dev/
        kustomize build overlays/prod/
    - name: Kubeval
      uses: instrumenta/kubeval-action@master
      with:
        files: overlays/prod/
  deploy-dev:
    runs-on: ubuntu-latest
    needs: validate
    if: github.event_name == 'push'
    steps:
    - uses: actions/checkout@v3
    - name: Setup kubectl
      uses: azure/setup-kubectl@v3
    - name: Deploy to Development
      run: |
        echo "${{ secrets.KUBECONFIG_DEV }}" > kubeconfig
        export KUBECONFIG=kubeconfig
        kubectl apply -k overlays/dev/
        kubectl rollout status deployment/dev-nginx -n dev
  deploy-prod:
    runs-on: ubuntu-latest
    needs: validate
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    environment:
      name: production
      url: https://myapp.example.com
    steps:
    - uses: actions/checkout@v3
    - name: Setup kubectl
      uses: azure/setup-kubectl@v3
    - name: Deploy to Production
      run: |
        echo "${{ secrets.KUBECONFIG_PROD }}" > kubeconfig
        export KUBECONFIG=kubeconfig
        kubectl apply -k overlays/prod/
        kubectl rollout status deployment/prod-nginx -n prod --timeout=10m
Monitoring
Helm Release Monitoring (PrometheusRule example)
# prometheus-rules.yaml (excerpt)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: helm-release-rules
spec:
  groups:
  - name: helm_releases
    interval: 30s
    rules:
    - alert: HelmReleaseWorkloadDegraded
      # kube-state-metrics exposes no Helm release status directly; as a proxy,
      # alert on Deployments running below their desired replica count.
      expr: kube_deployment_status_replicas_available < kube_deployment_spec_replicas
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "Helm-managed workload degraded"
        description: "Deployment {{ $labels.deployment }} in namespace {{ $labels.namespace }} has fewer available replicas than desired"
Kustomize / GitOps Monitoring (ArgoCD notifications example)
# argocd-notifications-cm.yaml (excerpt)
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  trigger.on-sync-failed: |
    - when: app.status.operationState.phase in ['Error', 'Failed']
      send: [app-sync-failed]
  template.app-sync-failed: |
    message: |
      Application {{.app.metadata.name}} sync failed!
      {{.app.status.operationState.message}}
    webhook:
      slack:
        method: POST
        body: |
          {"text": "ArgoCD Sync Failed: {{.app.metadata.name}}"}
Conclusion
Helm provides powerful templating, versioned releases, and a rich chart ecosystem, making it ideal for packaging reusable components and handling complex configuration logic. Kustomize preserves native YAML, offers a gentle learning curve, and integrates seamlessly with GitOps pipelines, excelling in multi‑environment scenarios. By combining Helm for third‑party charts and Kustomize for custom service overlays, teams can leverage the strengths of both tools to achieve robust, maintainable, and observable Kubernetes application deployments.
MaGe Linux Operations
Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.