Why We Dropped Jenkins for Tekton & ArgoCD: A Complete Migration Blueprint
This guide explains the shortcomings of Jenkins, outlines the core GitOps principles, details the selection of Tekton, ArgoCD, Harbor, and Kyverno, and provides step‑by‑step configurations, pipelines, and best‑practice recommendations for a production‑grade migration to a cloud‑native CI/CD platform.
Overview
The organization replaced a resource‑heavy Jenkins setup with a cloud‑native CI/CD stack based on Tekton for pipelines and ArgoCD for continuous delivery, following GitOps principles.
Why Jenkins Was Abandoned
Severe resource waste – the Jenkins master consumed 16 GB RAM while idle.
Plugin hell – hundreds of plugins caused version conflicts and security issues.
Incomplete configuration‑as‑code – JCasC covered only part of the configuration.
Poor scalability – even with the Kubernetes plugin, agent scheduling was inefficient.
Outdated UI – Blue Ocean is no longer actively maintained, and the legacy interface steepens the learning curve for new users.
GitOps Core Concepts
GitOps is more than storing configuration in Git; it enforces a declarative, versioned, automated, and continuously reconciled state for the entire system.
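These principles become concrete in a declarative deployment manifest. A minimal sketch of an ArgoCD Application, with placeholder repository URL, path, and names, illustrates the idea: the desired state lives in Git, and a controller continuously reconciles the cluster toward it.

```yaml
# Hypothetical ArgoCD Application: Git is the single source of truth,
# and the controller reconciles the cluster toward this declared state.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/gitops-manifests.git
    targetRevision: main
    path: apps/example-service
  destination:
    server: https://kubernetes.default.svc
    namespace: example
  syncPolicy:
    automated:
      selfHeal: true   # drift in the cluster is reverted to the Git state
```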
Technical Stack Selection
CI Engine: Tekton – chosen for its cloud‑native design and strong community support.
CD Engine: ArgoCD – Git-driven deployments with a first-class web UI.
Image Registry: Harbor – offers vulnerability scanning and image signing.
Artifact Management: Helm charts stored in OCI format.
Secret Management: External Secrets – integrates with cloud KMS.
Policy Engine: Kyverno – for Kubernetes policy enforcement.
Environment Requirements
Kubernetes: 1.28.4
Storage: Rook‑Ceph (PVC RWX)
Network: Cilium + Ingress‑NGINX
Certificates: cert‑manager + Let’s Encrypt

Key component versions include Tekton Pipelines 0.56.0, Tekton Triggers 0.27.0, ArgoCD 2.10.1, and Harbor 2.10.0.
Tekton Installation
# Install Tekton Pipelines
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.56.0/release.yaml
# Install Tekton Triggers
kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.27.0/release.yaml
kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.27.0/interceptors.yaml
# Install Tekton Dashboard
kubectl apply -f https://storage.googleapis.com/tekton-releases/dashboard/previous/v0.43.0/release.yaml
# Wait for all components to be ready
kubectl wait --for=condition=available --timeout=300s deployment --all -n tekton-pipelines

Tekton Configuration Optimizations
# Enable larger result size and step‑level resources
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  enable-api-fields: "beta"
  max-result-size: "10485760"
  enable-step-actions: "true"
  running-in-environment-with-injected-sidecars: "true"
  coschedule: "pipelineruns"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  default-service-account: "tekton-build"
  default-timeout-minutes: "60"
  default-pod-template: |
    nodeSelector:
      node-role.kubernetes.io/build: "true"
    tolerations:
      - key: "build"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
    securityContext:
      fsGroup: 65532

ServiceAccount and Secrets
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-build
  namespace: tekton-pipelines
secrets:
  - name: docker-credentials
  - name: git-credentials
---
apiVersion: v1
kind: Secret
metadata:
  name: docker-credentials
  namespace: tekton-pipelines
  annotations:
    tekton.dev/docker-0: https://harbor.internal.company.com
type: kubernetes.io/basic-auth
stringData:
  username: robot$tekton-builder
  password: "${HARBOR_ROBOT_PASSWORD}"
---
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
  namespace: tekton-pipelines
  annotations:
    tekton.dev/git-0: https://gitlab.internal.company.com
type: kubernetes.io/basic-auth
stringData:
  username: tekton-ci
  password: "${GITLAB_ACCESS_TOKEN}"

Complete Pipeline Example
The pipeline consists of cloning the repository, running unit tests, building a container image with Kaniko, scanning the image with Trivy, packaging a Helm chart, and finally updating the GitOps manifests.
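The tasks and Pipeline defined below are tied together at runtime by a PipelineRun. As a hedged sketch of what triggering one build might look like (the parameter values and storage size are illustrative, not from the original):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: build-and-deploy-run-   # each run gets a unique name
  namespace: tekton-pipelines
spec:
  pipelineRef:
    name: build-and-deploy
  taskRunTemplate:
    serviceAccountName: tekton-build    # carries the registry and Git secrets
  params:
    - name: git-url
      value: https://gitlab.internal.company.com/platform/order-service.git
    - name: image-name
      value: platform/order-service
    - name: image-tag
      value: "1.4.2"                    # illustrative tag
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:              # RWX volume backed by Rook-Ceph
        spec:
          accessModes: [ReadWriteMany]
          resources:
            requests:
              storage: 5Gi
    - name: docker-config
      secret:
        secretName: docker-credentials
```

In practice such a PipelineRun would usually be created by a Tekton Triggers EventListener on a Git push rather than applied by hand.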
# tasks/git-clone.yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: git-clone
  namespace: tekton-pipelines
spec:
  params:
    - name: url
      description: Repository URL
      type: string
    - name: revision
      description: Git revision (branch, tag, or commit SHA)
      type: string
      default: "main"
    - name: depth
      description: Clone depth (0 for full clone)
      type: string
      default: "1"
  workspaces:
    - name: output
      description: Workspace to clone into
  steps:
    - name: clone
      image: harbor.internal.company.com/library/git:2.43.0
      script: |
        #!/usr/bin/env sh
        set -eu
        # Tekton substitutes $(params.*) and $(workspaces.*) before the script runs
        git clone --depth="$(params.depth)" --branch="$(params.revision)" "$(params.url)" "$(workspaces.output.path)/source"
        # Optional submodule handling omitted for brevity

# tasks/kaniko-build.yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: kaniko-build
spec:
  params:
    - name: image
      type: string
    - name: dockerfile
      type: string
      default: "./Dockerfile"
  workspaces:
    - name: source
    - name: dockerconfig
  steps:
    - name: build-and-push
      image: harbor.internal.company.com/library/kaniko-executor:v1.21.1
      args:
        - --dockerfile=$(params.dockerfile)
        - --context=$(workspaces.source.path)/source
        - --destination=$(params.image)
        - --cache=true
        - --cache-repo=$(params.image)-cache
        - --snapshot-mode=redo
        - --use-new-run
        - --compressed-caching=false
        - --cleanup
      env:
        - name: DOCKER_CONFIG
          value: $(workspaces.dockerconfig.path)   # point kaniko at the mounted registry credentials

# pipelines/build-and-deploy.yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: git-url
      type: string
    - name: git-revision
      type: string
      default: "main"
    - name: image-name
      type: string
    - name: image-tag
      type: string
    - name: chart-path
      type: string
      default: "deploy/helm"
    - name: environment
      type: string
      default: "dev"
  workspaces:
    - name: shared-workspace
    - name: docker-config
    - name: ssh-creds
      optional: true
  tasks:
    - name: clone
      taskRef:
        name: git-clone
      params:
        - name: url
          value: $(params.git-url)
        - name: revision
          value: $(params.git-revision)
      workspaces:
        - name: output
          workspace: shared-workspace
    - name: unit-test
      runAfter: [clone]
      workspaces:
        - name: source
          workspace: shared-workspace
      taskSpec:
        workspaces:
          - name: source
        steps:
          - name: test
            image: harbor.internal.company.com/library/golang:1.22
            script: |
              #!/bin/bash
              set -euo pipefail
              cd $(workspaces.source.path)/source
              go test -v -race -coverprofile=coverage.out ./...
              go tool cover -func=coverage.out
    - name: build
      runAfter: [unit-test]
      taskRef:
        name: kaniko-build
      params:
        - name: image
          value: harbor.internal.company.com/$(params.image-name):$(params.image-tag)
      workspaces:
        - name: source
          workspace: shared-workspace
        - name: dockerconfig
          workspace: docker-config
    - name: scan
      runAfter: [build]
      taskRef:
        name: trivy-scan   # Task definition omitted here for brevity
      params:
        - name: image
          value: harbor.internal.company.com/$(params.image-name):$(params.image-tag)
    - name: helm-package
      runAfter: [scan]
      taskRef:
        name: helm-package   # Task definition omitted here for brevity
      params:
        - name: chart-path
          value: $(params.chart-path)
        - name: version
          value: $(params.image-tag)
    - name: update-manifest
      runAfter: [helm-package]
      params:
        - name: image-tag
          value: $(params.image-tag)
        - name: image-name
          value: $(params.image-name)
        - name: environment
          value: $(params.environment)
      taskSpec:
        params:
          - name: image-tag
            type: string
          - name: image-name
            type: string
          - name: environment
            type: string
        steps:
          - name: update-values
            image: harbor.internal.company.com/library/git:2.43.0
            script: |
              #!/bin/bash
              set -euo pipefail
              # assumes yq is available in this image
              git clone https://gitlab.internal.company.com/platform/gitops-manifests.git /tmp/gitops
              yq e -i '.image.tag = "$(params.image-tag)"' "/tmp/gitops/environments/$(params.environment)/$(params.image-name)/values.yaml"
              git -C /tmp/gitops add .
              git -C /tmp/gitops commit -m "chore($(params.environment)): update $(params.image-name) to $(params.image-tag)"
              git -C /tmp/gitops push origin main

ArgoCD Installation and Tuning
# Create namespace and install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.10.1/manifests/install.yaml
# Wait for all components
kubectl wait --for=condition=available --timeout=300s deployment --all -n argocd

Key ConfigMap customizations include increasing controller timeouts, defining repository credentials, enabling OIDC with Keycloak, and adjusting resource limits for high availability.
# argocd-cm.yaml (excerpt)
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  timeout.reconciliation: 180s
  repositories: |
    - url: https://gitlab.internal.company.com/platform/gitops-manifests.git
      type: git
      usernameSecret:
        name: repo-creds
        key: username
      passwordSecret:
        name: repo-creds
        key: password
    - url: harbor.internal.company.com
      type: helm
      name: harbor
      enableOCI: "true"
      usernameSecret:
        name: harbor-creds
        key: username
      passwordSecret:
        name: harbor-creds
        key: password
  oidc.config: |
    name: Keycloak
    issuer: https://sso.internal.company.com/realms/company
    clientID: argocd
    clientSecret: $oidc.keycloak.clientSecret
    requestedScopes:
      - openid
      - profile
      - email
      - groups

GitOps Workflow
The repository layout follows a clear separation of apps, environments, infrastructure, and policies. Kustomize bases define reusable manifests, while overlays apply environment‑specific patches. ApplicationSets generate per‑service, per‑environment ArgoCD Application resources, enabling matrix‑style multi‑environment deployments.
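As an illustration of the overlay mechanism, a hypothetical environment overlay might patch only the replica count and image tag on top of a shared base (the paths, service name, and tag here are assumptions, not from the original repo):

```yaml
# apps/order-service/overlays/dev/kustomization.yaml (hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base            # reusable manifests shared by all environments
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 1          # dev runs a single replica
    target:
      kind: Deployment
      name: order-service
images:
  - name: harbor.internal.company.com/platform/order-service
    newTag: dev-latest    # environment-specific tag
```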
# applicationsets/multi-env-apps.yaml (excerpt)
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: microservices
  namespace: argocd
spec:
  generators:
    - matrix:
        generators:
          - list:
              elements:
                - service: order-service
                  path: apps/order-service
                - service: inventory-service
                  path: apps/inventory-service
          - list:
              elements:
                - env: dev
                  namespace: dev
                  cluster: https://kubernetes.default.svc
                  syncPolicy: automated
                - env: staging
                  namespace: staging
                  cluster: https://kubernetes.default.svc
                  syncPolicy: automated
                - env: production
                  namespace: production
                  cluster: https://kubernetes.default.svc
                  syncPolicy: manual
  template:
    metadata:
      name: '{{service}}-{{env}}'
      labels:
        app: '{{service}}'
        env: '{{env}}'
    spec:
      project: default
      source:
        repoURL: https://gitlab.internal.company.com/platform/gitops-manifests.git
        targetRevision: main
        path: '{{path}}/overlays/{{env}}'
      destination:
        server: '{{cluster}}'
        namespace: '{{namespace}}'
      # Note: this excerpt applies the same automated policy to every environment;
      # honoring the per-env syncPolicy element would require a templatePatch.
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

Observability and Alerting
Prometheus rules monitor Tekton pipeline failures, long‑running pipelines, and ArgoCD sync issues. Alerts are routed to Slack via a webhook.
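The Slack routing itself lives in the Alertmanager configuration rather than in the rules. A minimal sketch, where the channel name and webhook URL are placeholders:

```yaml
# alertmanager.yaml (excerpt, hypothetical values)
route:
  receiver: slack-cicd
  group_by: [alertname, namespace]   # batch related alerts into one notification
receivers:
  - name: slack-cicd
    slack_configs:
      - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ   # placeholder webhook
        channel: '#cicd-alerts'
        send_resolved: true          # also notify when the alert clears
```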
# PrometheusRule example (excerpt)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cicd-alerts
  namespace: monitoring
spec:
  groups:
    - name: tekton
      rules:
        - alert: TektonPipelineRunFailed
          expr: sum(tekton_pipelinerun_count{status="failed"}) by (pipeline, namespace) > 0
          for: 1m
          labels:
            severity: warning
          annotations:
            summary: "Tekton pipeline run failed"
            description: "Pipeline {{ $labels.pipeline }} in {{ $labels.namespace }} has failed"
    - name: argocd
      rules:
        - alert: ArgoCDApplicationOutOfSync
          expr: argocd_app_info{sync_status!="Synced"} == 1
          for: 30m
          labels:
            severity: warning
          annotations:
            summary: "ArgoCD application out of sync"
            description: "Application {{ $labels.name }} is out of sync for 30 minutes"

Best Practices & Gotchas
Modularize Tekton tasks and reuse them via TaskRef.
Cache dependencies with PVCs (e.g., Maven, npm) to speed up builds.
Enable ArgoCD repo‑server replicas and Redis HA for high availability.
Use ApplicationSets instead of individual Applications to reduce management overhead.
Never enable automatic prune in production environments.
Set appropriate resource requests/limits for Tekton pods and ArgoCD components.
Leverage Kyverno policies to enforce image registry restrictions and security contexts.
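For the Kyverno point above, a minimal sketch of a registry-restriction policy; the registry host matches this guide, while the policy and rule names are assumptions:

```yaml
# Hypothetical Kyverno policy: reject Pods whose images come from outside Harbor
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: Enforce   # block non-compliant Pods rather than just audit
  rules:
    - name: allow-internal-registry-only
      match:
        any:
          - resources:
              kinds: [Pod]
      validate:
        message: "Images must come from harbor.internal.company.com"
        pattern:
          spec:
            containers:
              - image: "harbor.internal.company.com/*"
```

A production version would typically also cover initContainers and ephemeralContainers, and start with `Audit` mode before switching to `Enforce`.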
Conclusion
After migrating from Jenkins to a Tekton + ArgoCD stack, the team achieved a 50 % reduction in build time, better resource utilization, automated self‑healing, full auditability via Git, and increased deployment frequency from a few times per week to multiple times per day. The migration was performed incrementally over three months, emphasizing the importance of progressive rollout, thorough testing, and robust monitoring.