
Helm vs Kustomize: When to Choose Each Tool and How to Combine Them

This article objectively compares Helm and Kustomize based on three years of team experience, detailing design philosophies, core mechanisms, feature differences, practical use‑case recommendations, mixed‑usage patterns, and best‑practice guidelines for GitOps‑driven Kubernetes deployments.

MaGe Linux Operations

Overview

New team members often ask whether to use Helm or Kustomize for managing Kubernetes manifests. The answer is to use both, selecting the appropriate tool for each scenario. This guide provides an objective, experience‑driven comparison rather than a "which is better" debate.

Background

Team size: 15 engineers managing 80+ micro‑services

K8s clusters: 3 environments (dev, staging, prod) across 5 clusters

Application types: custom services and third‑party components

Core Takeaways

Helm excels at handling complex third‑party applications and highly parameterised configurations.

Kustomize excels at managing in‑house services and environment‑specific overlays.

The two tools can be combined; they are not mutually exclusive.

Design‑Philosophy Comparison

Helm: Template Engine

Helm uses Go templates to turn YAML files into parameterised templates, injecting values via values.yaml. Example chart template:

# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 4 }}
  template:
    metadata:
      labels:
        {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.service.port }}
        {{- if .Values.resources }}
        resources:
          {{- toYaml .Values.resources | nindent 12 }}
        {{- end }}

Corresponding values.yaml:

replicaCount: 3
image:
  repository: my-app
  tag: "1.0.0"
service:
  port: 8080
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi

Kustomize: Overlay Patches

Kustomize works with native YAML (no templating) and applies patches on top of a base configuration.

# base/deployment.yaml (plain YAML)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        ports:
        - containerPort: 8080

Base kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml

Production overlay adds a name prefix, common labels, replica count, image tag, and a strategic merge patch:

# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namePrefix: prod-
commonLabels:
  env: prod
replicas:
- name: my-app
  count: 5
images:
- name: my-app
  newTag: v1.2.3
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/resources
      value:
        requests:
          cpu: 500m
          memory: 512Mi
        limits:
          cpu: 2000m
          memory: 2Gi
  target:
    kind: Deployment
    name: my-app

Fundamental Differences

Core mechanism: Helm – template rendering; Kustomize – patch overlay.

Base files: Helm – templated (not valid YAML until rendered); Kustomize – plain, valid YAML.

Configuration: Helm – values.yaml; Kustomize – overlay directories.

Learning curve: Helm – steeper (Go template syntax); Kustomize – gentler.

Debug difficulty: Helm – higher; Kustomize – lower.

Flexibility: Helm – very high; Kustomize – moderate.

Feature Comparison

Helm‑only Capabilities

Lifecycle management: install, upgrade, rollback, uninstall, history.

# Install
helm install my-release ./mychart
# Upgrade
helm upgrade my-release ./mychart --set image.tag=v2.0
# Rollback
helm rollback my-release 1
# Uninstall
helm uninstall my-release
# History
helm history my-release

Hooks: run Jobs before or after install and upgrade.

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-migration"
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      containers:
      - name: migrate
        image: my-app:{{ .Values.image.tag }}
        command: ["./migrate.sh"]
      restartPolicy: Never

Chart dependencies: declare dependent charts in Chart.yaml, then fetch them with helm dependency update.

# Chart.yaml
dependencies:
- name: postgresql
  version: "12.x.x"
  repository: "https://charts.bitnami.com/bitnami"
  condition: postgresql.enabled
- name: redis
  version: "17.x.x"
  repository: "https://charts.bitnami.com/bitnami"
  condition: redis.enabled

Chart repositories: add, search, and install third-party charts.

# Add repo
helm repo add bitnami https://charts.bitnami.com/bitnami
# Search
helm search repo postgresql
# Install
helm install my-pg bitnami/postgresql --values custom-values.yaml

Kustomize‑only Capabilities

Native kubectl integration (since v1.14):

# Apply overlay
kubectl apply -k ./overlays/prod/
# Preview generated YAML
kubectl kustomize ./overlays/prod/

Strategic Merge Patch merges fields instead of overwriting:

# base/deployment.yaml
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: LOG_LEVEL
          value: info
# overlays/prod/patch.yaml
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: LOG_LEVEL
          value: warn
        - name: ENABLE_METRICS
          value: "true"

Components (reusable configuration pieces, Kustomize v3.7.0+):

# components/monitoring/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/ports/-
      value:
        name: metrics
        containerPort: 9090
  target:
    kind: Deployment

# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
components:
- ../../components/monitoring

Replacements (cross-resource value substitution, Kustomize v4.1+; the successor to the deprecated vars):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- configmap.yaml
replacements:
- source:
    kind: ConfigMap
    name: my-config
    fieldPath: metadata.name
  targets:
  - select:
      kind: Deployment
      name: my-app
    fieldPaths:
    - spec.template.spec.containers.0.envFrom.0.configMapRef.name

Practical Use‑Case Analysis

Scenario 1 – Deploying middleware (MySQL, Redis, etc.)

Recommended: Helm because official charts already encapsulate complex configuration, HA, backup, and monitoring. Example:

# Deploy MySQL HA cluster
helm install mysql bitnami/mysql \
  --set architecture=replication \
  --set auth.rootPassword=mypassword \
  --set secondary.replicaCount=2 \
  --set metrics.enabled=true

Using Kustomize to write the same from scratch would be labor‑intensive.

Scenario 2 – Deploying custom micro‑services

Recommended: Kustomize – the services are simple, environment differences are limited to replica count, image tag, and resource limits. A typical directory layout:

my-service/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
├── overlays/
│   ├── dev/
│   │   └── kustomization.yaml
│   ├── staging/
│   │   └── kustomization.yaml
│   └── prod/
│       ├── kustomization.yaml
│       └── patch-resources.yaml

Scenario 3 – Shared standard configuration across teams

Recommended: Helm – create a "standard micro‑service chart" that teams customise via a minimal values.yaml:

# Standard chart values.yaml (template)
appName: ""
image:
  repository: ""
  tag: ""
  pullPolicy: IfNotPresent
replicas: 2
service:
  type: ClusterIP
  port: 8080
ingress:
  enabled: false
  hosts: []
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
probes:
  liveness:
    path: /health
    initialDelaySeconds: 30
  readiness:
    path: /ready
    initialDelaySeconds: 5
metrics:
  enabled: true
  port: 9090
  path: /metrics

Team‑specific values.yaml only overrides what is needed:

# Team values.yaml
appName: user-service
image:
  repository: registry.example.com/user-service
  tag: v1.2.3
replicas: 3
ingress:
  enabled: true
  hosts:
  - user.api.example.com

Scenario 4 – GitOps workflow

Recommended: Kustomize (or Helm + Kustomize) because ArgoCD/Flux support both, but Kustomize’s overlay model aligns naturally with GitOps.

# ArgoCD Application (Kustomize)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  source:
    repoURL: https://github.com/myorg/my-app.git
    path: overlays/prod
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app

When Helm is needed, ArgoCD can also reference a chart directly.
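A sketch of such an Application pointing straight at a chart repository; the release name, chart version, and inline values below are illustrative:

```yaml
# ArgoCD Application (Helm chart source) – sketch
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-pg
spec:
  source:
    repoURL: https://charts.bitnami.com/bitnami
    chart: postgresql
    targetRevision: 12.1.0
    helm:
      # Inline values override the chart defaults
      values: |
        architecture: replication
        metrics:
          enabled: true
  destination:
    server: https://kubernetes.default.svc
    namespace: database
```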

Mixed‑Usage Patterns

Pattern 1 – Helm packaging + Kustomize customisation

Render a Helm chart to plain YAML, then let Kustomize handle environment‑specific patches:

# Render Helm chart
helm template my-release bitnami/postgresql \
  --values base-values.yaml \
  --output-dir ./base/

# Directory layout after rendering
postgresql/
├── base/
│   ├── kustomization.yaml
│   └── templates/…
├── overlays/
│   ├── dev/kustomization.yaml
│   └── prod/kustomization.yaml

Benefits:

Leverage Helm’s templating and dependency management.

Use Kustomize to avoid maintaining multiple values.yaml files.
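A sketch of the base kustomization that stitches the rendered files together; the exact file names depend on what the chart emits:

```yaml
# base/kustomization.yaml (file names are illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- templates/statefulset.yaml
- templates/svc.yaml
- templates/secrets.yaml
```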

Pattern 2 – Kustomize’s HelmChart Inflator (Kustomize 4.1+)

Kustomize can directly pull a Helm chart as a resource:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: postgresql
  repo: https://charts.bitnami.com/bitnami
  version: 12.1.0
  releaseName: my-pg
  namespace: database
  valuesFile: values.yaml
patches:
- patch: |-
    - op: replace
      path: /spec/replicas
      value: 3
  target:
    kind: StatefulSet
    name: my-pg-postgresql

# Build with Helm support enabled
kustomize build --enable-helm ./

Team Collaboration Best Practices

Git Repository Organisation

Use a monorepo to store all Kubernetes manifests:

k8s-manifests/
├── apps/            # Custom services (Kustomize)
│   ├── user-service/
│   │   ├── base/
│   │   └── overlays/
│   └── order-service/
├── infrastructure/  # Third‑party components (Helm)
│   ├── monitoring/
│   │   ├── prometheus/
│   │   └── grafana/
│   └── logging/
├── platform/        # Mixed components
│   ├── ingress-nginx/
│   └── cert-manager/
└── clusters/        # Cluster‑level configs (dev, staging, prod)
    ├── dev/
    ├── staging/
    └── prod/

CI/CD Pipeline (GitLab example)

stages:
- validate
- build
- deploy

validate:
  stage: validate
  script:
    # Validate Kustomize overlays
    - for dir in apps/*/overlays/*; do echo "Validating $dir"; kubectl kustomize $dir > /dev/null; done
    # Lint Helm charts
    - for chart in infrastructure/*/; do if [ -f "$chart/Chart.yaml" ]; then echo "Linting $chart"; helm lint $chart; fi; done

deploy-dev:
  stage: deploy
  script:
    - argocd app sync my-app-dev --prune
  only:
    - main

Code Review Checklist

YAML syntax passes CI checks.

Kustomize build succeeds.

Helm lint passes.

Image tags are explicit (no latest).

Resource limits are set.

No hard‑coded secrets.

Changes verified in dev environment.

Documentation Standards

Each application directory should contain a README describing directory structure and deployment commands, e.g.:

# User Service
## Directory structure
- base/          # Base manifests
- overlays/dev/   # Development overlay
- overlays/staging/
- overlays/prod/

## Deploy
# Deploy to dev
kubectl apply -k overlays/dev/
# Deploy to prod via ArgoCD
argocd app sync user-service-prod

Configuration Details

Typical parameter differences across environments:

Replica count: dev 1, staging 2, prod 5.

CPU request: 100m → 200m → 500m.

Memory request: 128Mi → 256Mi → 512Mi.
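These differences map directly onto small overlay files. A staging overlay covering the values above might look like this, following the layout and patch style from Scenario 2 (names and paths are illustrative):

```yaml
# overlays/staging/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
replicas:
- name: my-app
  count: 2
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/resources
      value:
        requests:
          cpu: 200m
          memory: 256Mi
  target:
    kind: Deployment
    name: my-app
```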

Common Issues & Solutions

Problem 1 – Helm values files proliferate

Solution: use helmfile to manage multiple releases and environment‑specific values.
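A minimal helmfile.yaml sketch; the environment names, release, and value-file paths are illustrative:

```yaml
# helmfile.yaml (sketch)
repositories:
- name: bitnami
  url: https://charts.bitnami.com/bitnami
environments:
  dev: {}
  prod: {}
releases:
- name: my-pg
  namespace: database
  chart: bitnami/postgresql
  version: 12.1.0
  values:
  - values/common.yaml
  # Environment-specific values resolved per `helmfile -e <env>`
  - values/{{ .Environment.Name }}.yaml
```

One environment is then deployed with helmfile -e prod apply, so each release carries exactly one shared and one per-environment values file instead of an ever-growing pile of ad-hoc ones.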

Problem 2 – Kustomize patches are painful

Solution: prefer Strategic Merge Patches or use reusable components to reduce JSON‑Patch boilerplate.

Problem 3 – Secret management

Do not store plaintext secrets. Options:

Sealed Secrets: encrypt secrets before committing.

kubeseal --cert cert.pem < secret.yaml > sealed-secret.yaml

External Secrets Operator: sync from external secret stores.

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-secret
spec:
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: my-secret
  data:
  - secretKey: password
    remoteRef:
      key: prod/my-app/db-password

Conclusion

Both Helm and Kustomize have distinct strengths. Helm shines for third‑party charts and complex templating, while Kustomize offers simplicity and native YAML for in‑house services. Teams should adopt a unified strategy—prefer Kustomize for custom micro‑services, Helm for external components, and combine them when needed—while integrating the workflow with GitOps tools such as ArgoCD or Flux.

Tags: Kubernetes, Configuration Management, GitOps, Kustomize
Written by MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
