How to Build a Production‑Ready GitOps Pipeline with ArgoCD and Helm in 10 Minutes
This step‑by‑step guide shows how to set up a full‑stack GitOps workflow using ArgoCD and Helm on Kubernetes, covering prerequisites, environment matrix, a 10‑step implementation checklist, monitoring, performance tuning, security hardening, common troubleshooting, rollback scripts, and best‑practice recommendations.
Applicable Scenarios & Prerequisites
Applicable scenarios: multi‑cluster / multi‑environment Kubernetes application delivery (3+ clusters), audit traceability, automatic rollback, configuration‑drift correction, and team collaboration with change approval and permission control.
Prerequisites: Kubernetes 1.24+ (management cluster + workload clusters), Helm 3.10+, a Git repository that supports webhooks, RBAC enabled with cluster-admin rights for the installation phase, and at least 2 CPU / 4 GiB of resources for the ArgoCD components.
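Before starting, it is worth checking the toolchain against these minimums. A minimal preflight sketch in pure bash (the versions passed to `check` are examples; substitute what your environment actually reports, e.g. from `kubectl version -o json`):

```shell
#!/usr/bin/env bash
# Preflight sketch: compare tool versions against the prerequisites above.
# version_ge VER MIN succeeds when VER >= MIN (relies on GNU `sort -V`).
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

check() {  # check NAME CURRENT MINIMUM
  if version_ge "$2" "$3"; then
    echo "OK: $1 $2 (>= $3)"
  else
    echo "FAIL: $1 $2 (< $3)"
  fi
}

# Example values; substitute the versions your environment reports
check kubernetes 1.28.4 1.24
check helm 3.14.0 3.10
check git 2.39.2 2.30
```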
Environment & Version Matrix
| Component | Version Requirement | Key Feature Dependencies | Minimum Resource Specs |
|---|---|---|---|
| Kubernetes | 1.24–1.30 | CRD v1, PodSecurity admission | Management cluster 2C4G; workload clusters on demand |
| ArgoCD | 2.9+ | ApplicationSet, multi‑cluster management, SSO integration | 2C4G (includes Redis/repo server) |
| Helm | 3.10+ | OCI registry support, dependency lock | – |
| Git | 2.30+ | Webhook, deploy key / token | – |
| Redis | 7.0+ | Cache and session store for ArgoCD | 100 mCPU / 128 Mi |
| OS | RHEL 8+ / Ubuntu 22.04+ | – | – |
Quick Checklist
Step 1: Install ArgoCD core components on the management cluster
Step 2: Add workload clusters
Step 3: Connect Git repository (SSH key or HTTPS token)
Step 4: Create Helm‑based Application CRD
Step 5: Configure automated sync and self‑heal
Step 6: Set up RBAC and SSO (OIDC/LDAP)
Step 7: Integrate CI pipeline to trigger deployments
Step 8: Configure webhook and automatic rollback
Step 9: Manage multiple environments with ApplicationSet
Step 10: Monitoring, alerting and audit logging
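The steps below assume a Git repository laid out roughly as follows; this is a hypothetical structure, so adjust paths such as charts/myapp and values-prod.yaml to match your own repo:

```text
k8s-manifests/
├── charts/
│   └── myapp/
│       ├── Chart.yaml
│       ├── templates/
│       ├── values.yaml          # shared defaults
│       ├── values-dev.yaml
│       ├── values-staging.yaml
│       └── values-prod.yaml     # referenced by the Application manifests
└── apps/
    └── app-prod.yaml            # Application definitions (optional app-of-apps)
```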
Implementation Steps
Step 1: Install ArgoCD Core Components
Goal: Deploy the ArgoCD control plane on the management cluster.
# Create namespace
kubectl create namespace argocd
# Install ArgoCD (official manifest)
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.9.3/manifests/install.yaml
# Wait for all Pods to become Ready
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=5m
# Verify component status
kubectl get pods -n argocd
# Expected output:
# argocd-application-controller-xxx 1/1 Running
# argocd-repo-server-xxx 1/1 Running
# argocd-server-xxx 1/1 Running
# argocd-redis-xxx 1/1 Running
# argocd-dex-server-xxx 1/1 Running (SSO component)

Key parameters:
- argocd-server: API server and UI entry point (HTTP and gRPC are multiplexed on port 8080; metrics on 8083)
- argocd-repo-server: Git fetch and Helm rendering engine
- argocd-application-controller: watches Application CRDs and performs sync
Expose the ArgoCD UI (production): use Ingress with TLS.
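The article stops at the recommendation, so here is a minimal Ingress sketch. It assumes the NGINX ingress controller with SSL passthrough, so argocd-server keeps terminating TLS itself (the hostname is a placeholder):

```yaml
# argocd-ingress.yaml — minimal sketch for the NGINX ingress controller;
# ssl-passthrough lets argocd-server handle TLS (UI and gRPC on one host)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              number: 443
```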
# Temporary port‑forward for testing
kubectl port-forward svc/argocd-server -n argocd 8080:443 &
# Retrieve initial admin password
ARGOCD_ADMIN_PASS=$(kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 -d)
echo "ArgoCD Admin Password: $ARGOCD_ADMIN_PASS"
# Login via CLI
argocd login localhost:8080 --username admin --password $ARGOCD_ADMIN_PASS --insecure
# Change admin password (required for production)
argocd account update-password --current-password $ARGOCD_ADMIN_PASS --new-password 'NewSecurePass@2024'

Step 2: Add Workload Clusters
Goal: Configure ArgoCD to manage multiple target clusters.
# List clusters in current kubeconfig
kubectl config get-contexts
# Add a target cluster (example name prod-cluster)
argocd cluster add prod-cluster --name prod-k8s
# Verify registration
argocd cluster list
# Expected output includes the new cluster entry

Manual secret configuration (advanced):
# cluster-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-cluster-secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: prod-k8s
  server: https://prod-cluster-api.example.com:6443
  config: |
    {
      "bearerToken": "your-service-account-token",
      "tlsClientConfig": {"insecure": false, "caData": "base64-encoded-ca-cert"}
    }

kubectl apply -f cluster-secret.yaml
argocd cluster get https://prod-cluster-api.example.com:6443

Step 3: Configure Git Repository Access
Goal: Connect the Git repository that stores Helm charts and Kubernetes manifests.
Method 1 – SSH Key (recommended for private repos)
# Generate SSH key (no passphrase)
ssh-keygen -t ed25519 -C "[email protected]" -f ~/.ssh/argocd_deploy_key -N ""
# Add the public key to the Git repo (Deploy keys)
cat ~/.ssh/argocd_deploy_key.pub
# Register the repo in ArgoCD
argocd repo add [email protected]:myorg/k8s-manifests.git \
--ssh-private-key-path ~/.ssh/argocd_deploy_key \
--insecure-ignore-host-key
# Verify connection
argocd repo list

Method 2 – HTTPS Token (for public repos or token auth)
# GitHub personal access token (repo scope required)
GH_TOKEN="ghp_xxxxxxxxxxxx"
argocd repo add https://github.com/myorg/k8s-manifests.git \
--username git --password $GH_TOKEN
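If several repositories live under the same GitHub organization, a credential template (a Secret labeled `repo-creds`) can supply one token for every repo whose URL matches the prefix, instead of registering credentials per repo. A sketch with placeholder values:

```yaml
# github-org-creds.yaml — credential template; ArgoCD matches repos by URL prefix
apiVersion: v1
kind: Secret
metadata:
  name: github-org-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  url: https://github.com/myorg        # applies to every repo under this prefix
  username: git
  password: ghp_xxxxxxxxxxxx           # placeholder token
```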
# Verify
argocd repo list

Step 4: Create Application (Helm Chart Deployment)
Goal: Define an Application CRD that declaratively manages the app lifecycle.
# app-prod.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-prod
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/k8s-manifests.git
    targetRevision: main
    path: charts/myapp
    helm:
      releaseName: myapp
      valueFiles:
      - values-prod.yaml
      parameters:
      - name: image.tag
        value: "v1.2.3"
      - name: replicaCount
        value: "5"
      valuesObject:
        service:
          type: LoadBalancer
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-type: nlb
  destination:
    server: https://prod-cluster-api.example.com:6443
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
    - ServerSideApply=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
  ignoreDifferences:
  - group: apps
    kind: Deployment
    jsonPointers:
    - /spec/replicas

# Apply the Application
kubectl apply -f app-prod.yaml
# Trigger immediate sync
argocd app sync myapp-prod
# Wait for health and sync completion
argocd app wait myapp-prod --health --timeout 600
# Verify status
argocd app get myapp-prod
# Expected output includes Health: Healthy, Sync: Synced

Step 5: Automated Sync & Self‑Heal
Goal: Reduce manual intervention by configuring automated policies.
syncPolicy:
  automated:
    prune: true
    selfHeal: true
  syncOptions:
  - PruneLast=true            # apply new resources before deleting old ones
  - ApplyOutOfSyncOnly=true
  - ServerSideApply=true

Validate self‑heal: manually scale a deployment and observe ArgoCD restoring the desired replica count.

# Simulate drift
kubectl scale deployment myapp -n production --replicas=10
# Observe ArgoCD auto‑recovery (≈3‑5 min)
argocd app get myapp-prod --refresh

Step 6: RBAC & SSO Integration
Goal: Implement fine‑grained permission control and enterprise identity authentication.
Configure RBAC policy
# Edit the argocd-rbac-cm ConfigMap
kubectl edit configmap argocd-rbac-cm -n argocd

# Example data snippet
policy.default: role:readonly
policy.csv: |
  p, role:dev-team, applications, get, */*, allow
  p, role:dev-team, applications, sync, dev/*, allow
  p, role:ops-admin, applications, *, */*, allow
  p, role:ops-admin, clusters, *, *, allow
  g, [email protected], role:dev-team
  g, ops-team, role:ops-admin

Validate RBAC:
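The glob semantics of these `p` rules can be sanity-checked offline before testing against a live server. The bash sketch below is a toy illustration only; ArgoCD's real policy engine is Casbin, and the roles and objects here come from the example policy above:

```shell
#!/usr/bin/env bash
# Toy illustration of how a policy.csv rule like
#   p, role:dev-team, applications, sync, dev/*, allow
# matches a request (role, resource, action, object). Not ArgoCD's
# real engine (Casbin); this only mimics the glob semantics.
allows() {  # allows ROLE RESOURCE ACTION OBJECT
  local role=$1 resource=$2 action=$3 object=$4
  while IFS=', ' read -r kind prole presource paction pobject effect; do
    [ "$kind" = p ] || continue
    # unquoted right-hand sides are intentional: they act as glob patterns
    if [[ $role == $prole && $resource == $presource \
          && $action == $paction && $object == $pobject \
          && $effect == allow ]]; then
      return 0
    fi
  done <<'CSV'
p, role:dev-team, applications, get, */*, allow
p, role:dev-team, applications, sync, dev/*, allow
p, role:ops-admin, applications, *, */*, allow
CSV
  return 1
}

allows role:dev-team applications sync dev/myapp && echo "dev sync: allowed"
allows role:dev-team applications sync prod/myapp || echo "prod sync: denied"
```

For authoritative checks against a real policy file, recent ArgoCD CLI versions also provide `argocd admin settings rbac can`.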
# Login as dev user
argocd login argocd.example.com --username [email protected]
# Attempt to sync production app (should be denied)
argocd app sync myapp-prod
# Expected: permission denied

Integrate OIDC/LDAP (Keycloak example)
# Edit argocd-cm ConfigMap
kubectl edit configmap argocd-cm -n argocd
# Add OIDC block
oidc.config: |
  name: Keycloak
  issuer: https://keycloak.example.com/auth/realms/master
  clientID: argocd
  clientSecret: $oidc.keycloak.clientSecret
  requestedScopes: ["openid", "profile", "email", "groups"]
  requestedIDTokenClaims: {"groups": {"essential": true}}
# Store the client secret; ArgoCD resolves $oidc.keycloak.clientSecret
# from the argocd-secret Secret in the argocd namespace
kubectl patch secret argocd-secret -n argocd --type merge \
  -p '{"stringData":{"oidc.keycloak.clientSecret":"your-client-secret"}}'
# Restart server to apply
kubectl rollout restart deployment argocd-server -n argocd
# Test SSO login
argocd login argocd.example.com --sso

Step 7: CI Pipeline Trigger
Goal: Automatically update image tags from CI and trigger GitOps.
Option 1 – ArgoCD Image Updater
# Install Image Updater
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj-labs/argocd-image-updater/v0.12.0/manifests/install.yaml
# Annotate the Application
kubectl annotate app myapp-prod -n argocd \
argocd-image-updater.argoproj.io/image-list=myapp=myregistry.com/myapp:~1.2 \
argocd-image-updater.argoproj.io/write-back-method=git \
argocd-image-updater.argoproj.io/git-branch=main
# Watch logs for automatic updates
kubectl logs -n argocd deployment/argocd-image-updater -f

Option 2 – GitHub Actions
# .github/workflows/deploy.yml
name: Deploy to ArgoCD
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    - name: Update image tag in values
      run: |
        NEW_TAG="${GITHUB_SHA:0:7}"
        sed -i "s/tag: .*/tag: $NEW_TAG/" charts/myapp/values-prod.yaml
        git config user.name "GitHub Actions"
        git config user.email "[email protected]"
        git add charts/myapp/values-prod.yaml
        git commit -m "Update image to $NEW_TAG"
        git push
    - name: (optional) Trigger ArgoCD sync
      run: |
        argocd login argocd.example.com --auth-token ${{ secrets.ARGOCD_TOKEN }}
        argocd app sync myapp-prod --prune --force

Step 8: Webhook & Automatic Rollback
Goal: Immediate sync on Git push and automatic rollback on failure.
Configure Git webhook
# In Git repo settings, add webhook:
# URL: https://argocd.example.com/api/webhook
# Content‑Type: application/json
# Secret: (retrieve from ArgoCD secret)
# Retrieve webhook secret
kubectl get secret argocd-secret -n argocd -o jsonpath='{.data.webhook.github.secret}' | base64 -d
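GitHub signs each webhook delivery with an HMAC‑SHA256 of the raw request body, keyed by this shared secret, so a hand‑crafted test only passes signature verification if the header is computed the same way. A sketch using the openssl CLI (secret and payload are placeholders):

```shell
#!/usr/bin/env bash
# Compute the signature header GitHub would send for a payload, so a
# hand-crafted curl test can pass the webhook's signature check.
SECRET='my-webhook-secret'   # must match the secret stored in argocd-secret
PAYLOAD='{"repository":{"clone_url":"https://github.com/myorg/k8s-manifests.git"}}'

# HMAC-SHA256 over the exact request body, hex-encoded, prefixed "sha256="
SIG="sha256=$(printf '%s' "$PAYLOAD" \
  | openssl dgst -sha256 -hmac "$SECRET" \
  | awk '{print $NF}')"
echo "$SIG"

# Use it in place of the placeholder header in the curl test below, e.g.:
#   -H "X-Hub-Signature-256: $SIG"
```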
# Test webhook manually
curl -X POST https://argocd.example.com/api/webhook \
  -H "Content-Type: application/json" \
  -H "X-Hub-Signature-256: sha256=xxx" \
  -d '{"repository":{"clone_url":"https://github.com/myorg/k8s-manifests.git"}}'

Automatic rollback configuration (in Application)
spec:
  syncPolicy:
    automated:
      selfHeal: true
    retry:
      limit: 3
      backoff:
        duration: 5s
# Optional health check script (Lua) can be added here

Step 9: Multi‑Environment Management (ApplicationSet)
Goal: Use ApplicationSet to generate Applications for dev, staging, and prod.
# applicationset-multienv.yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapp-all-envs
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - env: dev
        cluster: https://kubernetes.default.svc
        namespace: dev
        replicaCount: "1"
      - env: staging
        cluster: https://staging-cluster.example.com:6443
        namespace: staging
        replicaCount: "2"
      - env: prod
        cluster: https://prod-cluster.example.com:6443
        namespace: production
        replicaCount: "5"
  template:
    metadata:
      name: 'myapp-{{env}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/myorg/k8s-manifests.git
        targetRevision: main
        path: charts/myapp
        helm:
          releaseName: myapp
          valueFiles:
          - 'values-{{env}}.yaml'
          parameters:
          - name: replicaCount
            value: '{{replicaCount}}'
      destination:
        server: '{{cluster}}'
        namespace: '{{namespace}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - CreateNamespace=true

# Apply the ApplicationSet
kubectl apply -f applicationset-multienv.yaml
# Verify generated Applications
argocd app list | grep myapp

Step 10: Monitoring, Alerting & Audit Logs
Prometheus metrics
# servicemonitor-argocd.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-metrics
  namespace: argocd
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-server-metrics   # metrics Service, not the UI Service
  endpoints:
  - port: metrics
    interval: 30s

Key PromQL queries:
# Sync failure rate (>5%)
sum(rate(argocd_app_sync_total{phase="Failed"}[5m]))
/ sum(rate(argocd_app_sync_total[5m])) * 100
# Applications not healthy
argocd_app_info{health_status!="Healthy"}
# Number of OutOfSync apps (alert threshold >10)
count(argocd_app_info{sync_status="OutOfSync"})

Grafana dashboard
# Import official ArgoCD dashboard (ID: 14584)
curl -O https://grafana.com/api/dashboards/14584/revisions/1/download
# Key panels: Application Health by environment, Sync Success Rate (24h),
# Top 10 OutOfSync apps, Repository Pull Errors

Alerting rules (Prometheus)
# prometheus-rules.yaml
groups:
- name: argocd
  interval: 30s
  rules:
  - alert: ArgoCDAppOutOfSync
    expr: argocd_app_info{sync_status="OutOfSync"} > 0
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Application {{ $labels.name }} out of sync for 10+ minutes"
  - alert: ArgoCDSyncFailure
    expr: rate(argocd_app_sync_total{phase="Failed"}[5m]) > 0.05
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "ArgoCD sync failure rate >5%"

Performance & Capacity
| Metric | Target Value | Test Method |
|---|---|---|
| Application sync time (small scale) | <30 s | 10 Deployments, 3 replicas each |
| Application sync time (large scale) | <5 min | 100+ resources (CRDs, ConfigMaps, Secrets) |
| Controller memory usage (100 apps) | <2 Gi | kubectl top pod -n argocd -l app.kubernetes.io/component=application-controller |
| Repo server CPU | <1 core | Single Helm render of 100+ templates |
| Concurrent sync capacity | 10 apps/min | Stress test with argocd-stress-test |

Security & Compliance
Least‑privilege RBAC
# ServiceAccount for a specific app namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-app-sa
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-app-role
  namespace: production
rules:
- apiGroups: ["apps", ""]
  resources: ["deployments", "services", "configmaps", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-app-binding
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argocd-app-role
subjects:
- kind: ServiceAccount
  name: argocd-app-sa
  namespace: production

Encrypt Git credentials
# Seal a Git token with Sealed Secrets
echo -n 'ghp_xxxxxxxxxxxxx' | \
kubectl create secret generic git-token -n argocd \
--from-file=password=/dev/stdin --dry-run=client -o yaml | \
kubeseal -o yaml > sealed-git-token.yaml
kubectl apply -f sealed-git-token.yaml

Enable audit logging
# Edit the argocd-server Deployment
spec:
  template:
    spec:
      containers:
      - name: argocd-server
        command:
        - argocd-server
        - --audit-log-enabled=true
        - --audit-log-path=/var/log/argocd/audit.log

Common Issues & Troubleshooting
| Symptom | Diagnostic Command | Possible Root Cause | Quick Fix | Permanent Fix |
|---|---|---|---|---|
| Application stuck in Progressing | argocd app get <app> --show-operation | Helm render failure or resources not ready | Check logs: argocd app logs <app> | Fix chart templates or add a health‑check delay |
| Git repository connection failure | argocd repo list | Expired SSH key or network issue | Re‑add the deploy key or token | Validate webhook and network connectivity |
| Resources deleted after sync (prune) | argocd app diff <app> | Files removed from Git | Restore from backup or re‑apply | Set prune: false or add protection annotations |
| SSO login failure | kubectl logs -n argocd deployment/argocd-dex-server | Incorrect OIDC config or certificate issue | Log in with the admin account to recover | Verify the OIDC issuer URL and callback |
| Application CRD not taking effect | kubectl get app -n argocd <app> -o yaml \| grep status | ArgoCD controller malfunction | Restart the controller pod | Check resource limits and logs |
| Helm release stuck | argocd app delete <app> --cascade=false | Helm state lock | Manually helm uninstall the release | Enable Replace=true in syncOptions |
Change & Rollback Scripts
Production change script
#!/bin/bash
set -euo pipefail
APP_NAME="myapp-prod"
NEW_VERSION="v1.3.0"
# 1. Update image tag in Git
cd k8s-manifests/charts/myapp
sed -i "s/tag: .*/tag: $NEW_VERSION/" values-prod.yaml
git add values-prod.yaml
git commit -m "Deploy $APP_NAME to $NEW_VERSION"
git push origin main
# 2. Trigger ArgoCD sync (or wait for auto‑detect)
argocd app sync $APP_NAME --prune --force
# 3. Monitor sync progress
argocd app wait $APP_NAME --health --timeout 600
# 4. Verify new version
argocd app get $APP_NAME | grep "Sync Status"
kubectl get pods -n production -l app=myapp -o jsonpath='{.items[*].spec.containers[0].image}'
echo "=== Deployment of $NEW_VERSION completed ==="

Rollback script
#!/bin/bash
APP_NAME="myapp-prod"
TARGET_REVISION="abc1234" # Git commit SHA to roll back to
# Roll back using history ID (example ID 10)
argocd app rollback $APP_NAME 10
# Or roll back to a specific Git revision
argocd app set $APP_NAME --revision $TARGET_REVISION
argocd app sync $APP_NAME --prune
# Verify rollback
argocd app wait $APP_NAME --health --timeout 300
argocd app get $APP_NAME

Best Practices (10 Items)
1. Single source of truth: all changes must go through Git PRs; forbid direct kubectl apply on production.
2. Environment branch strategy: use main for production and a staging branch for pre‑production validation.
3. Lock Helm chart versions: set appVersion for the application and version for the chart.
4. Self‑heal cautiously: enable selfHeal in production, but pair it with ignoreDifferences for fields managed by HPA/VPA.
5. Pre‑sync hooks: run DB migrations or config validation before a sync.
6. Progressive delivery: integrate Argo Rollouts for canary/blue‑green deployments.
7. Multi‑cluster isolation: use separate ArgoCD Projects for prod and non‑prod clusters, each with distinct RBAC.
8. Audit log retention: enable server audit logs and forward them to a SIEM (ELK, Splunk, etc.).
9. Image signature verification: use Sigstore Cosign to validate container images.
10. Regular backups: back up ArgoCD configuration (Applications, AppProjects, Secrets) daily, e.g. with Velero.
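For multi‑cluster isolation in particular, an AppProject is the enforcement point. A hedged sketch restricting a prod project to one repo and one destination, with an example sync freeze window (names, URLs, and the schedule are illustrative):

```yaml
# appproject-prod.yaml — illustrative AppProject for production isolation
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: prod
  namespace: argocd
spec:
  description: Production applications only
  sourceRepos:
  - https://github.com/myorg/k8s-manifests.git   # only this repo may deploy
  destinations:
  - server: https://prod-cluster-api.example.com:6443
    namespace: production
  clusterResourceWhitelist: []        # no cluster-scoped resources from apps
  namespaceResourceBlacklist:
  - group: ''
    kind: ResourceQuota
  syncWindows:
  - kind: deny                        # freeze syncs Fri 00:00 for 72h
    schedule: '0 0 * * 5'
    duration: 72h
    applications: ['*']
```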