
Argo CD 3.3 Unveiled: PreDelete Hooks, Source Hydrator & Production‑Ready Enhancements

Argo CD 3.3 introduces a suite of production‑ready features—including native PreDelete hooks for safe resource cleanup, an enhanced Source Hydrator with Git notes and inline parameters, automatic OIDC token refresh, shallow Git cloning for large repos, and built‑in KEDA autoscaling—plus detailed upgrade guidance and best‑practice recommendations.

DevOps Coach

Argo CD 3.3 – Core Technical Changes

Version 3.3 adds five major capabilities that affect application lifecycle, source handling, authentication, repository access, and autoscaling.

PreDelete hooks – run a Job before an application’s resources are removed.

Source Hydrator – stores state in Git notes, supports inline parameters and monorepo selective hydration.

OIDC token refresh – proactive background refresh of OIDC tokens.

Shallow Git clones – fetch only recent commits to speed up sync.

KEDA integration – native management of ScaledObject and ScaledJob resources.

PreDelete Hooks

Prior to 3.3 there was no native hook for cleanup before deletion. The new PreDelete hook runs a Job annotated with argocd.argoproj.io/hook: PreDelete. The delete operation is blocked if the hook fails, ensuring safe cleanup.

Basic Example

apiVersion: batch/v1
kind: Job
metadata:
  name: pre-delete-backup-job
  annotations:
    argocd.argoproj.io/hook: PreDelete
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app: backup-job
    spec:
      serviceAccountName: backup-sa
      containers:
      - name: backup
        # NOTE: this image must provide both pg_dump and the AWS CLI;
        # postgres:15-alpine alone does not include aws. Use a custom
        # image, or install the CLI before running the script below.
        image: postgres:15-alpine
        env:
        - name: PGHOST
          value: "postgres-service"
        - name: PGDATABASE
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: database
        - name: PGUSER
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        - name: BACKUP_BUCKET
          value: "s3://my-backups/postgres"
        command:
        - /bin/sh
        - -c
        - |
          echo "Starting pre-delete backup..."
          TIMESTAMP=$(date +%Y%m%d_%H%M%S)
          BACKUP_FILE="/tmp/backup_${TIMESTAMP}.sql"
          pg_dump > ${BACKUP_FILE}
          if [ $? -eq 0 ]; then
            echo "Database backup successful"
            aws s3 cp ${BACKUP_FILE} ${BACKUP_BUCKET}/backup_${TIMESTAMP}.sql
            if [ $? -eq 0 ]; then
              echo "Backup uploaded successfully to S3"
              exit 0
            else
              echo "Failed to upload backup to S3"
              exit 1
            fi
          else
            echo "Database backup failed"
            exit 1
          fi
      restartPolicy: Never

Advanced Pattern – Multiple Hooks with Weights

---
# Hook 1: Notify external systems
apiVersion: batch/v1
kind: Job
metadata:
  name: notify-deletion
  annotations:
    argocd.argoproj.io/hook: PreDelete
    argocd.argoproj.io/hook-weight: "1"
spec:
  template:
    spec:
      containers:
      - name: notify
        image: curlimages/curl:latest
        command:
        - sh
        - -c
        - |
          curl -X POST https://api.monitoring.com/deregister \
            -H "Content-Type: application/json" \
            -d '{"service":"my-app","environment":"production"}'
      restartPolicy: Never
---
# Hook 2: Backup data
apiVersion: batch/v1
kind: Job
metadata:
  name: backup-data
  annotations:
    argocd.argoproj.io/hook: PreDelete
    argocd.argoproj.io/hook-weight: "2"
spec:
  template:
    spec:
      containers:
      - name: backup
        image: backup-tool:latest
      restartPolicy: Never
---
# Hook 3: Cleanup external resources
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-external
  annotations:
    argocd.argoproj.io/hook: PreDelete
    argocd.argoproj.io/hook-weight: "3"
spec:
  template:
    spec:
      containers:
      - name: cleanup
        image: cloud-cli:latest
        command:
        - sh
        - -c
        - |
          aws s3 rb s3://app-bucket --force
          aws route53 change-resource-record-sets ...
      restartPolicy: Never

Caveats

Safety: Any hook failure blocks deletion; implement robust error handling.

Hook deletion policy: Use argocd.argoproj.io/hook-delete-policy (e.g., BeforeHookCreation, HookSucceeded, HookFailed).

Timeouts: Set activeDeadlineSeconds for long‑running jobs.
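The deletion-policy and timeout caveats combine naturally in a single hook. A minimal sketch (the ten‑minute deadline and busybox command are illustrative values, not recommendations):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-delete-cleanup
  annotations:
    argocd.argoproj.io/hook: PreDelete
    # Remove any leftover hook Job before creating a new one
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  # Fail the hook (and therefore block deletion) if it has not
  # finished within 10 minutes, preventing a runaway cleanup job
  activeDeadlineSeconds: 600
  backoffLimit: 1
  template:
    spec:
      containers:
      - name: cleanup
        image: busybox:1.36
        command: ["sh", "-c", "echo 'cleaning up' && exit 0"]
      restartPolicy: Never
```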

Source Hydrator Improvements

The hydrator now records the latest processed DRY commit in a Git note instead of creating a new commit for each run. This keeps the repository history clean and reduces Git traffic.

Viewing Hydrator Notes

# List all hydrator notes
git notes --ref=refs/notes/argocd-source-hydrator list
# Show note for a specific commit
git notes --ref=refs/notes/argocd-source-hydrator show <commit-sha>

Inline Parameter Support

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  source:
    repoURL: https://github.com/myorg/my-repo
    targetRevision: main
    path: apps/my-app
    plugin:
      name: source-hydrator
      env:
      - name: APP_VERSION
        value: "2.1.0"
      - name: REPLICA_COUNT
        value: "3"
      - name: ENVIRONMENT
        value: "production"
      - name: ENABLE_FEATURE_X
        value: "true"

Monorepo Selective Hydration

The hydrator now detects which sub‑paths changed and only hydrates those applications.

monorepo/
├── apps/
│   ├── frontend/
│   │   ├── base/kustomization.yaml
│   │   └── overlays/dev/
│   │   └── overlays/prod/
│   ├── backend/
│   │   └── database/
└── hydrator-config/
    ├── frontend.yaml
    ├── backend.yaml
    └── database.yaml

Behavior Change

Older versions cleared the entire target directory before writing new manifests. The new version only overwrites or creates files that correspond to the current output, preserving any extra files.

OIDC Backend Token Refresh

Argo CD now refreshes OIDC tokens automatically when the remaining lifetime falls below a configurable threshold, preventing unexpected UI log‑outs.

Configuration Example

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  oidc.config: |
    name: Keycloak
    issuer: https://keycloak.example.com/realms/master
    clientID: argocd
    clientSecret: $oidc.keycloak.clientSecret
    requestedScopes: ["openid","profile","email","groups"]
    refreshTokenThreshold: 300  # seconds (5 min)

Refresh Workflow

Background monitoring: Server watches token expiry timestamps.

Proactive refresh: When remaining time < threshold, a refresh request is sent.

Transparent to users: No UI interruption.

Provider agnostic: Works with Keycloak, Okta, Azure AD, etc.
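The decision in the second step reduces to a remaining-lifetime check. A sketch in shell with made-up timestamps (this illustrates the logic only, not Argo CD's actual internals):

```shell
#!/bin/sh
# Hedged sketch: refresh when the token's remaining lifetime drops
# below the configured threshold. All values are illustrative.
threshold=300                  # refreshTokenThreshold, in seconds
now=$(date +%s)
expires_at=$((now + 120))      # pretend the token expires in 2 minutes
remaining=$((expires_at - now))
if [ "$remaining" -lt "$threshold" ]; then
  echo "refresh"               # background refresh request is sent
else
  echo "keep"
fi
```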

Shallow Git Clone Support

Shallow clones fetch only the most recent commits, reducing network I/O and storage.

CLI Enabling

# Depth 1 (latest commit only)
argocd repo add https://github.com/myorg/large-repo \
  --username myuser \
  --password mypassword \
  --depth 1
# Depth 50 (last 50 commits)
argocd repo add https://github.com/myorg/large-repo \
  --username myuser \
  --password mypassword \
  --depth 50

Declarative Secret

apiVersion: v1
kind: Secret
metadata:
  name: large-repo-secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/myorg/large-repo
  username: myuser
  password: mytoken
  depth: "1"

When to Use

Repositories with large history.

High‑frequency monorepos where only recent changes matter.

CI pipelines that need fast checkout of the latest state.

When to Avoid

Compliance environments requiring full history.

Projects heavily using Git submodules.

Complex branching strategies that rely on older commits.
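The effect of depth is easy to verify locally. The sketch below builds a throwaway repository with three commits and shows that a depth‑1 clone carries only the latest one (paths and identities are illustrative):

```shell
#!/bin/sh
# Demonstrate shallow cloning against a local throwaway repository.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/origin"
cd "$tmp/origin"
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3; do
  echo "$i" > file.txt
  git add file.txt
  git commit -qm "commit $i"
done
# Use file:// so git takes the network code path and honors --depth
# (plain local-path clones silently ignore it)
git clone -q --depth 1 "file://$tmp/origin" "$tmp/shallow"
cd "$tmp/shallow"
git rev-list --count HEAD    # prints 1: only the latest commit was fetched
```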

KEDA Integration

Argo CD 3.3 can manage KEDA ScaledObject and ScaledJob resources directly from the UI.

ScaledObject Example

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: api-scaler
  annotations:
    argocd.argoproj.io/sync-options: Prune=false
spec:
  scaleTargetRef:
    name: api-deployment
  minReplicaCount: 2
  maxReplicaCount: 100
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus:9090
      metricName: http_requests_per_second
      threshold: '100'
      query: sum(rate(http_requests_total[2m]))

Users can pause or resume these objects during maintenance, debugging, or cost‑saving windows.
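Pausing is driven by KEDA's own annotations rather than an Argo CD field. A sketch holding the workload at a fixed replica count during a maintenance window (the 2‑replica value is an example; remove the annotation to resume autoscaling):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: api-scaler
  annotations:
    argocd.argoproj.io/sync-options: Prune=false
    # While present, KEDA stops autoscaling and pins the target
    # Deployment to exactly 2 replicas
    autoscaling.keda.sh/paused-replicas: "2"
spec:
  scaleTargetRef:
    name: api-deployment
  minReplicaCount: 2
  maxReplicaCount: 100
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus:9090
      query: sum(rate(http_requests_total[2m]))
      threshold: '100'
```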

ScaledJob Health Reporting

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: batch-processor
spec:
  jobTargetRef:
    template:
      spec:
        containers:
        - name: processor
          image: batch-processor:v1.2.0
        restartPolicy: OnFailure
  pollingInterval: 30
  maxReplicaCount: 10
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 5
  triggers:
  - type: rabbitmq
    metadata:
      queueName: tasks
      queueLength: '10'
      hostFromEnv: RABBITMQ_HOST

Upgrade to Argo CD 3.3

Prerequisites & Planning

Argo CD supports only the three most recent minor versions; 2.14 is EOL.

If on 2.x, upgrade first to 3.0, then to 3.3.

Server‑Side Apply (SSA) must be enabled to avoid ApplicationSet annotation size limits.

Step‑by‑Step Upgrade

Backup current configuration:

# Backup all Argo CD resources
kubectl get applications -n argocd -o yaml > argocd-apps-backup.yaml
kubectl get applicationsets -n argocd -o yaml > argocd-appsets-backup.yaml
kubectl get configmaps -n argocd -o yaml > argocd-config-backup.yaml
kubectl get secrets -n argocd -o yaml > argocd-secrets-backup.yaml
# Backup Redis data (pod name varies by install; HA setups use
# argocd-redis-ha-server-0 instead)
kubectl exec -n argocd argocd-redis-0 -- redis-cli SAVE
kubectl cp argocd/argocd-redis-0:/data/dump.rdb ./redis-backup.rdb

Update Argo CD manifests (self‑managed mode):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argo-cd
    targetRevision: v3.3.0
    path: manifests/cluster-install
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - ServerSideApply=true
    - ClientSideApplyMigration=false

Apply with SSA:

kubectl apply --server-side --force-conflicts \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/v3.3.0/manifests/install.yaml

Update bundled tool versions (e.g., Helm 3.19.2, Kustomize 5.8.0) and set new environment variables:

ARGOCD_K8S_SERVER_SIDE_TIMEOUT="60s"
ARGOCD_K8S_TCP_TIMEOUT="30s"
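These variables are consumed inside the Argo CD pods, so they need to be injected into the relevant container spec. A minimal patch sketch; placing them on the application controller is an assumption to verify against the 3.3 release notes:

```yaml
# Hedged sketch: inject the new timeout variables into the
# application controller. Which component reads each variable
# should be confirmed in the upstream upgrade notes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
  namespace: argocd
spec:
  template:
    spec:
      containers:
      - name: argocd-application-controller
        env:
        - name: ARGOCD_K8S_SERVER_SIDE_TIMEOUT
          value: "60s"
        - name: ARGOCD_K8S_TCP_TIMEOUT
          value: "30s"
```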

Post‑upgrade verification:

# Verify pods
kubectl get pods -n argocd
# Check version
argocd version
# List applications and health
argocd app list
argocd app list --output json | jq '.[] | select(.status.health.status != "Healthy") | .metadata.name'
# Tail server logs
kubectl logs -n argocd -l app.kubernetes.io/name=argocd-server --tail=100 -f

Best Practices for 3.3

Design PreDelete hooks with proper error handling and reasonable activeDeadlineSeconds to avoid runaway jobs.

Use inline parameters in Source Hydrator to keep configuration out of the repo and simplify environment‑specific overrides.

Set refreshTokenThreshold to balance security and user experience (e.g., 300 s).

Start with depth: "1" for shallow clones; increase only for debugging.

Pause KEDA ScaledObjects during maintenance windows to prevent unintended scaling.

Core Takeaways

Plan the upgrade carefully, enable Server‑Side Apply, and back up all resources.

Test PreDelete hooks in a non‑production environment before enabling them.

Update any automation that relied on hydrated commits to read the new Git notes.

Leverage OIDC token refresh and shallow clones to improve UX and performance.

Stay on the latest supported minor release to receive security updates and new features.

Tags: CI/CD, GitOps, Argo CD, KEDA, Shallow Clone, OIDC, PreDelete Hook