Master Kubernetes Operators: Build, Deploy, and Test Your First Operator with Kubebuilder
This tutorial introduces the concepts behind Kubernetes Operators, shows how to set up a development environment, and walks through building a simple Foo operator with Kubebuilder: exploring the generated project structure, implementing the CRD and controller in Go, and testing the operator end to end on a local cluster.
What is an Operator?
An Operator extends Kubernetes by embedding custom business logic as code, enabling automation of tasks that native resources cannot perform, such as managing MySQL, Elasticsearch, or GitLab Runner instances.
Building an Operator
We use the Kubebuilder framework (built on controller-runtime) to simplify development. You will need:
Go v1.17.9+
Docker 17.03+
kubectl v1.11.3+
A Kubernetes v1.11.3+ cluster (Kind is recommended for local testing)
Install Kubebuilder and verify the installation:

```shell
curl -L -o kubebuilder https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)
chmod +x kubebuilder && mv kubebuilder /usr/local/bin/
kubebuilder version
```
```
Version: main.version{KubeBuilderVersion:"3.4.1", KubernetesVendor:"1.23.5", GitCommit:"d59d7882ce95ce5de10238e135ddff31d8ede026", BuildDate:"2022-05-06T13:58:56Z", GoOs:"darwin", GoArch:"amd64"}
```
Project Structure
The generated Go project contains main.go (manager entry point), config/ (Kubernetes manifests), and a Dockerfile for building the manager image.
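For reference, a Kubebuilder 3.x project typically ends up laid out roughly as follows (exact contents vary by version; `api/` and `controllers/` appear once you run `kubebuilder create api`):

```text
.
├── Dockerfile        # builds the manager image
├── Makefile          # build, test, and deploy targets
├── PROJECT           # Kubebuilder project metadata
├── main.go           # manager entry point
├── config/           # Kustomize manifests (crd/, rbac/, manager/, samples/, ...)
├── api/v1/           # CRD type definitions
└── controllers/      # reconciler implementations
```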
CRD and Controller
A Kubernetes Operator consists of a Custom Resource Definition (CRD) that describes a new API type and a controller that watches resources and reconciles the actual state to the desired state.
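The reconcile pattern is easiest to see stripped of the Kubernetes machinery. The sketch below is plain Go with hypothetical stand-in types (no controller-runtime): a reconciler compares desired state (the spec) against actual state (what exists in the cluster) and records the result in the status, regardless of which event triggered it:

```go
package main

import "fmt"

// FooSpec is the desired state: this Foo wants a pod with this name to exist.
type FooSpec struct{ Name string }

// FooStatus is the observed state recorded by the reconciler.
type FooStatus struct{ Happy bool }

// reconcile compares desired state (spec) against actual state
// (the set of existing pod names) and returns the new status.
func reconcile(spec FooSpec, existingPods map[string]bool) FooStatus {
	return FooStatus{Happy: existingPods[spec.Name]}
}

func main() {
	pods := map[string]bool{}
	spec := FooSpec{Name: "jack"}

	fmt.Println(reconcile(spec, pods).Happy) // false: no pod yet

	pods["jack"] = true // a pod named "jack" appears
	fmt.Println(reconcile(spec, pods).Happy) // true

	delete(pods, "jack") // the pod is deleted
	fmt.Println(reconcile(spec, pods).Happy) // false again
}
```

Because the logic is level-based rather than edge-based, re-running it is always safe: the output depends only on the current state, not on the history of events.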
Creating the API
```shell
kubebuilder init --domain my.domain --repo my.domain/tutorial
kubebuilder create api --group tutorial --version v1 --kind Foo
```
```
Create Resource [y/n] y
Create Controller [y/n] y
```
CRD Definition (Go)
```go
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// FooSpec defines the desired state of Foo.
type FooSpec struct {
	// Name of the "friend" pod this Foo is looking for.
	Name string `json:"name"`
}

// FooStatus defines the observed state of Foo.
type FooStatus struct {
	// Happy is true when a pod named spec.name exists.
	Happy bool `json:"happy,omitempty"`
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// Foo is the Schema for the foos API.
type Foo struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   FooSpec   `json:"spec,omitempty"`
	Status FooStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// FooList contains a list of Foo.
type FooList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Foo `json:"items"`
}

func init() {
	SchemeBuilder.Register(&Foo{}, &FooList{})
}
```
Controller Logic (Go)
```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"

	tutorialv1 "my.domain/tutorial/api/v1"
)

// FooReconciler reconciles a Foo object.
type FooReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

//+kubebuilder:rbac:groups=tutorial.my.domain,resources=foos,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=tutorial.my.domain,resources=foos/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=tutorial.my.domain,resources=foos/finalizers,verbs=update
//+kubebuilder:rbac:groups="",resources=pods,verbs=get;list;watch

func (r *FooReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := log.FromContext(ctx)
	log.Info("reconciling foo custom resource")

	// Fetch the Foo instance that triggered this reconciliation.
	var foo tutorialv1.Foo
	if err := r.Get(ctx, req.NamespacedName, &foo); err != nil {
		log.Error(err, "unable to fetch Foo")
		// Ignore not-found errors: the resource was deleted and
		// there is nothing left to reconcile.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Foo is "happy" when a pod whose name matches spec.name exists.
	var podList corev1.PodList
	friendFound := false
	if err := r.List(ctx, &podList); err == nil {
		for _, p := range podList.Items {
			if p.GetName() == foo.Spec.Name {
				log.Info("pod linked to a foo custom resource found", "name", p.GetName())
				friendFound = true
			}
		}
	}

	// Persist the observed state on the status subresource.
	foo.Status.Happy = friendFound
	if err := r.Status().Update(ctx, &foo); err != nil {
		log.Error(err, "unable to update foo's happy status")
		return ctrl.Result{}, err
	}
	log.Info("foo's happy status updated", "status", friendFound)
	return ctrl.Result{}, nil
}

func (r *FooReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&tutorialv1.Foo{}).
		// Also watch pods, mapping each pod event to the Foo resources
		// whose spec.name matches the pod's name.
		Watches(&source.Kind{Type: &corev1.Pod{}}, handler.EnqueueRequestsFromMapFunc(r.mapPodsReqToFooReq)).
		Complete(r)
}

// mapPodsReqToFooReq returns a reconcile request for every Foo whose
// spec.name matches the name of the pod that changed.
func (r *FooReconciler) mapPodsReqToFooReq(obj client.Object) []reconcile.Request {
	var list tutorialv1.FooList
	reqs := []reconcile.Request{}
	if err := r.Client.List(context.TODO(), &list); err == nil {
		for _, item := range list.Items {
			if item.Spec.Name == obj.GetName() {
				reqs = append(reqs, reconcile.Request{NamespacedName: types.NamespacedName{Name: item.Name, Namespace: item.Namespace}})
			}
		}
	}
	return reqs
}
```
Testing the Operator
Generate manifests, install the CRD, and run the manager:

```shell
make manifests
kubectl apply -k config/crd
make run
```

Create two Foo resources:
```yaml
apiVersion: tutorial.my.domain/v1
kind: Foo
metadata:
  name: foo-01
spec:
  name: jack
---
apiVersion: tutorial.my.domain/v1
kind: Foo
metadata:
  name: foo-02
spec:
  name: joe
```

```shell
kubectl apply -f config/samples
```

Deploy a Pod named jack to trigger the reconciliation loop:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jack
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    command: ["sleep"]
    args: ["infinity"]
```

```shell
kubectl apply -f jack-pod.yaml
```

Observe that the happy status of the matching Foo resource becomes true. Updating the second Foo's name to jack also sets its status to true. Deleting the Pod resets the status to false, confirming the operator works as intended.
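This behavior follows from the mapping function wired up in SetupWithManager: a pod event only enqueues the Foo resources whose spec.name matches the pod. Stripped of the controller-runtime types, the mapping logic can be sketched in plain Go with hypothetical stand-in types:

```go
package main

import "fmt"

// Foo is a minimal stand-in for the real API type.
type Foo struct {
	Name     string // metadata.name
	SpecName string // spec.name
}

// mapPodToFoos mirrors mapPodsReqToFooReq: given the name of a pod
// that changed, return the names of the Foo resources to reconcile.
func mapPodToFoos(podName string, foos []Foo) []string {
	var reqs []string
	for _, f := range foos {
		if f.SpecName == podName {
			reqs = append(reqs, f.Name)
		}
	}
	return reqs
}

func main() {
	foos := []Foo{
		{Name: "foo-01", SpecName: "jack"},
		{Name: "foo-02", SpecName: "joe"},
	}
	// An event for the "jack" pod triggers reconciliation of foo-01 only.
	fmt.Println(mapPodToFoos("jack", foos)) // [foo-01]
}
```

Once foo-02's spec.name is updated to jack, the same pod event maps to both resources, which is why its status flips to true as well.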
Further Work
Possible improvements include event filtering, refined RBAC, enhanced logging, emitting Kubernetes events on updates, adding custom status fields, and writing unit and end‑to‑end tests.