Build a Custom Kubernetes Controller from Scratch: Init, Code, Docker, Helm
This step‑by‑step guide walks you through initializing a Kubernetes controller project with kubebuilder, writing the Reconcile logic, debugging and monitoring performance, building and pushing Docker images, and deploying the controller with a Helm chart. Along the way it covers metrics collection, RBAC configuration, and best practices for cloud‑native workloads.
01 Initialize Project
Use kubebuilder to create a new controller project:
<code>$ go version
go version go1.23.0 linux/amd64
$ mkdir lb-layer7
$ cd lb-layer7
$ go mod init lb-layer7
$ kubebuilder version
Version: main.version{KubeBuilderVersion:"4.2.0", KubernetesVendor:"1.31.0", GitCommit:"c7cde5172dc8271267dbf2899e65ef6f9d30f91e", BuildDate:"2024-08-17T09:41:45Z", GoOs:"linux", GoArch:"amd64"}
$ kubebuilder init --domain k8s.qihoo.net
# Do not create a resource, use the standard Ingress resource
$ kubebuilder create api --group network --version v1alpha1 --kind Ingress
INFO Create Resource [y/n]
n
INFO Create Controller [y/n]
y</code>02 Core Code Implementation
Write the main entry point (cmd/main.go) and the controller logic (internal/controller/ingress_controller.go).
<code>// cmd/main.go (excerpt)
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8888",
	"The address the metrics endpoint binds to. Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
flag.BoolVar(&secureMetrics, "metrics-secure", false,
	"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")

metrics.InitMetrics()

if err = (&controller.IngressReconciler{
	Client:   mgr.GetClient(),
	Scheme:   mgr.GetScheme(),
	Recorder: mgr.GetEventRecorderFor("Layer7Reconciler"),
}).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "unable to create controller", "controller", "Ingress")
	os.Exit(1)
}
</code> <code>// internal/controller/ingress_controller.go (excerpt)
type IngressReconciler struct {
	client.Client
	Scheme       *runtime.Scheme
	layer7Client *k8scloud.Layer7
	Recorder     record.EventRecorder
}

func (r *IngressReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := log.FromContext(ctx)
	log.Info("reconcile starting ...")
	metrics.ReconcileTotal.WithLabelValues(metrics.Reconcile).Inc()
	startTime := time.Now()
	defer func() {
		metrics.ReconcileDealTime.WithLabelValues(req.Namespace, req.Name).Observe(utils.GetElapsedTime(startTime))
		log.Info("Reconcile done", "ElapsedTime", utils.GetElapsedTime(startTime))
	}()

	ingress := &networkingv1.Ingress{}
	if err := r.Get(ctx, req.NamespacedName, ingress); err != nil {
		if errors.IsNotFound(err) {
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}

	if r.Recorder == nil {
		log.Error(fmt.Errorf("event recorder not initialized"), "failed to record event")
		return ctrl.Result{}, fmt.Errorf("event recorder not initialized")
	}

	if ingress.Spec.IngressClassName != nil && *ingress.Spec.IngressClassName == "layer7" {
		log.Info(fmt.Sprintf("Processing Ingress with custom IngressClassName: %s", *ingress.Spec.IngressClassName))
		for _, rule := range ingress.Spec.Rules {
			host := rule.Host
			if rule.HTTP == nil {
				continue
			}
			for _, path := range rule.HTTP.Paths {
				serviceName := path.Backend.Service.Name
				serviceNamespace := ingress.Namespace

				// Fetch the backend Service so its selector can be used to match Pods.
				var svc corev1.Service
				if err := r.Get(ctx, types.NamespacedName{Namespace: serviceNamespace, Name: serviceName}, &svc); err != nil {
					continue
				}

				var podList corev1.PodList
				if err := r.List(ctx, &podList, client.InNamespace(serviceNamespace), client.MatchingLabels(svc.Spec.Selector)); err != nil {
					continue
				}

				var podIPs []string
				for _, pod := range podList.Items {
					if pod.Status.Phase == corev1.PodRunning {
						podIPs = append(podIPs, pod.Status.PodIP)
					}
				}
				// host and podIPs would be synced to the layer7 load balancer here
				// (trimmed in this excerpt).
				_, _ = host, podIPs

				r.Recorder.Event(ingress, corev1.EventTypeNormal, "ListingPods", "successfully fetched the current pod list")
			}
		}
	}
	return ctrl.Result{}, nil
}
</code>03 Debugging and Verification
Configure VSCode launch.json to set KUBECONFIG and run the controller in debug mode.
<code>{
"version": "0.2.0",
"configurations": [
{
"name": "layer7",
"type": "go",
"request": "launch",
"mode": "debug",
"program": "${workspaceFolder}/cmd/main.go",
"cwd": "${workspaceFolder}/cmd",
"env": { "KUBECONFIG": "/home/xxx/.kube/private-kube-config.conf" }
}
]
}
</code>Create a test YAML file containing Deployment, Service, and Ingress resources.
<code>apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: lb-layer7-demo
  name: lb-layer7-nginx-test-01
  labels:
    app: lb-layer7-nginx-test-01
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lb-layer7-nginx-test-01
  template:
    metadata:
      labels:
        app: lb-layer7-nginx-test-01
    spec:
      containers:
      - name: nginx
        image: mirror.k8s.qihoo.net/docker/nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: lb-layer7-demo
  name: lb-layer7-nginx-test-01
  labels:
    app: lb-layer7-nginx-test-01
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: lb-layer7-nginx-test-01
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lb-layer7-nginx-test
  namespace: lb-layer7-demo
spec:
  ingressClassName: layer7
  rules:
  - host: lb.layer7.test.com
    http:
      paths:
      - backend:
          service:
            name: lb-layer7-nginx-test-01
            port:
              number: 80
        path: /
        pathType: Prefix
</code>Apply the resources and verify:
<code>$ kubectl create ns lb-layer7-demo
namespace/lb-layer7-demo created
$ kubectl apply -f lb-layer7.yaml
deployment.apps/lb-layer7-nginx-test-01 created
service/lb-layer7-nginx-test-01 created
ingress.networking.k8s.io/lb-layer7-nginx-test created
$ kubectl get deploy,svc,ing -n lb-layer7-demo
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/lb-layer7-nginx-test-01 1/1 1 1 2m26s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/lb-layer7-nginx-test-01 ClusterIP 172.24.105.41 <none> 80/TCP 2m26s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/lb-layer7-nginx-test layer7 lb.layer7.test.com 80 2m26s
</code>
04 Image Build and Push
Modify the Dockerfile to build in vendor mode and to use a smaller distroless base image.
<code># Build the manager binary
FROM swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/library/golang:1.22.5 AS builder
ARG TARGETOS
ARG TARGETARCH
WORKDIR /workspace
COPY . .
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -mod=vendor -a -o manager cmd/main.go
# Use a minimal base image
FROM swr.cn-north-4.myhuaweicloud.com/ddn-k8s/gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER 65532:65532
ENTRYPOINT ["/manager"]
</code>Build, tag, and push the image:
<code>$ go mod vendor
$ sudo docker build -t lb-controller/lb-layer7:v0.0.1 .
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
lb-controller/lb-layer7 v0.0.1 da64931a01d2 8 seconds ago 74.7MB
$ sudo docker tag lb-controller/lb-layer7:v0.0.1 harbor.qihoo.net/lb-controller/lb-layer7:v0.0.1
$ sudo docker login harbor.qihoo.net
$ sudo docker push harbor.qihoo.net/lb-controller/lb-layer7:v0.0.1
</code>05 Build Helm Chart
Create a Helm chart, then adjust Chart.yaml and values.yaml:
<code># Chart.yaml
apiVersion: v2
name: lb-layer7
description: A Helm chart for Kubernetes
type: application
version: 0.0.1
appVersion: "v0.0.1"
</code> <code># values.yaml
replicaCount: 1
image:
  repository: harbor.qihoo.net/lb-controller/lb-layer7
  tag: v0.0.1
  pullPolicy: IfNotPresent
resources:
  limits:
    cpu: 500m
    memory: 256Mi
  requests:
    cpu: 250m
    memory: 128Mi
</code>The deployment template (templates/deployment.yaml) uses the values:
<code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Chart.Name }}
    release: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
</code>06 Helm Debug, Install & Package Commands
<code># Render chart templates
helm template --debug ./lb-layer7
# Dry‑run install / upgrade
helm install --debug --dry-run lb-layer7 ./lb-layer7 -n lb-layer7-system
# Install into the target namespace
helm install lb-layer7 ./lb-layer7 -n lb-layer7-system --create-namespace
# Check status
helm status lb-layer7 -n lb-layer7-system
# Get full manifest
helm get manifest lb-layer7 -n lb-layer7-system
# List installed charts
helm list -A
# Package chart
helm package lb-layer7
# Push to the Harbor chart repository (requires the ChartMuseum push plugin,
# invoked as `helm cm-push`; Helm's built-in `helm push` only supports oci:// registries)
helm cm-push lb-layer7-0.0.1.tgz https://harbor.qihoo.net/chartrepo/xxx
# Uninstall
helm uninstall lb-layer7 -n lb-layer7-system
</code>07 Controller Deployment Considerations
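If you deploy through the Helm chart from section 05, the pull secret and service account can be templated into the deployment instead of patched in afterwards; a sketch of the relevant templates/deployment.yaml fragment (the imagePullSecrets and serviceAccount values keys are illustrative additions to values.yaml):

```yaml
# templates/deployment.yaml (pod spec fragment)
spec:
  template:
    spec:
      serviceAccountName: {{ .Values.serviceAccount.name | default "lb-layer7" }}
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```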
Handle private registry image pull secrets:
<code># Create secret for private registry
kubectl create secret docker-registry harbor-360 --namespace=lb-layer7-system \
--docker-server="https://harbor.qihoo.net" \
--docker-username=xxx \
--docker-password=xxx
# Patch deployment to add imagePullSecrets
kubectl patch deploy lb-layer7 -n lb-layer7-system --type='json' -p='[ {"op": "add", "path": "/spec/template/spec/imagePullSecrets", "value": [{"name": "harbor-360"}]} ]'
# Set service account
kubectl patch deploy lb-layer7 -n lb-layer7-system --type='json' -p='[ {"op": "replace", "path": "/spec/template/spec/serviceAccountName", "value": "lb-layer7"} ]'
</code>Create sa.yaml with ServiceAccount, Secret, ClusterRole, and ClusterRoleBinding:
<code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: lb-layer7
  namespace: lb-layer7-system
---
apiVersion: v1
kind: Secret
metadata:
  name: lb-layer7-secret
  namespace: lb-layer7-system   # must be in the same namespace as the ServiceAccount
  annotations:
    kubernetes.io/service-account.name: "lb-layer7"
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lb-layer7
rules:
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]      # the event recorder needs to create Events
  resources: ["events"]
  verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: lb-layer7-binding
subjects:
- kind: ServiceAccount
  name: lb-layer7
  namespace: lb-layer7-system
roleRef:
  kind: ClusterRole
  name: lb-layer7
  apiGroup: rbac.authorization.k8s.io
</code>Apply and verify permissions:
<code>$ kubectl apply -f sa.yaml
# Verify Ingress permission
kubectl auth can-i watch ingresses.networking.k8s.io --as=system:serviceaccount:lb-layer7-system:lb-layer7 --all-namespaces
# Verify Service permission
kubectl auth can-i list services --as=system:serviceaccount:lb-layer7-system:lb-layer7 --all-namespaces
# Verify Pods permission
kubectl auth can-i list pods --as=system:serviceaccount:lb-layer7-system:lb-layer7 --all-namespaces
</code>08 Summary
This tutorial covered the complete workflow from environment setup to Helm deployment for a custom Kubernetes controller. You should now be able to:
Use kubebuilder to scaffold a controller.
Write Reconcile logic that watches resources and interacts with Pods.
Collect and expose Prometheus metrics for performance monitoring.
Build a vendor‑mode Docker image, push it to a private registry, and deploy it with Helm.
Configure RBAC and image pull secrets for secure operation.
Mastering Kubernetes controller development enables automated cluster management, improves operational efficiency, and deepens your understanding of cloud‑native architectures.
360 Zhihui Cloud Developer
360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.