Deploying Apache Zookeeper on Kubernetes with StatefulSet, Operator, and KUDO
This guide demonstrates how to deploy Apache Zookeeper on Kubernetes using three approaches (a plain StatefulSet, a custom Operator, and KUDO), covering the configuration files, command-line steps, scaling procedures, and verification needed for elastic, highly available service discovery in cloud-native environments.
As cloud-native adoption grows, foundational components such as Apache Zookeeper increasingly need to run on Kubernetes. Zookeeper commonly serves as a registry and coordination service in microservice architectures, and running it as a cluster on Kubernetes brings elasticity, straightforward scaling, and high availability.
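High availability hinges on quorum: a Zookeeper ensemble stays writable only while a strict majority of its servers is up, which is why ensembles are sized at 3 or 5 nodes. A minimal sketch of the arithmetic (helper names are illustrative, not part of any Zookeeper API):

```python
# Majority-quorum arithmetic for a ZooKeeper ensemble.
def quorum_size(servers: int) -> int:
    """Minimum number of servers that must agree: a strict majority."""
    return servers // 2 + 1

def tolerated_failures(servers: int) -> int:
    """How many servers can fail while the ensemble stays writable."""
    return servers - quorum_size(servers)

for n in (3, 4, 5):
    print(f"{n} servers: quorum {quorum_size(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
```

Note that a 4-node ensemble tolerates no more failures than a 3-node one, which is why even replica counts are rarely used.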
Deploy Zookeeper with a StatefulSet
The official method creates a headless service, a cluster service, a PodDisruptionBudget and a StatefulSet.
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
Apply the configuration with kubectl apply and verify that the pods and services are created.
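Assuming the manifests above are saved together as zookeeper.yaml (the filename is an assumption), applying and verifying looks roughly like this:

```shell
# Apply all four objects in one go.
kubectl apply -f zookeeper.yaml

# Watch the pods start in ordinal order: zk-0, then zk-1, then zk-2.
kubectl get pods -l app=zk -w

# Confirm the headless service, client service, and PDB exist.
kubectl get svc zk-hs zk-cs
kubectl get pdb zk-pdb

# Each server should have a unique myid (ordinal + 1).
for i in 0 1 2; do kubectl exec "zk-$i" -- cat /var/lib/zookeeper/data/myid; done
```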
Scaling is straightforward: raise the replica count in the StatefulSet and update the --servers argument in the container command to match, and the rolling update expands the cluster.
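One way to sketch that scale-up (replica count and --servers must stay in sync, so a plain kubectl scale is not enough here):

```shell
# Open the StatefulSet for editing; change both of these:
#   replicas: 3        ->  replicas: 5
#   --servers=3        ->  --servers=5   (in the start-zookeeper command)
kubectl edit statefulset zk

# Follow the rolling update until all 5 pods are ready.
kubectl rollout status statefulset zk
kubectl get pods -l app=zk
```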
Deploy Zookeeper with a Kubernetes Operator
First create the ZookeeperCluster custom resource definition (CRD), then set up RBAC, deploy the operator, and finally create a custom resource (CR) describing the desired Zookeeper cluster.
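With the pravega/zookeeper-operator project, those setup steps can be sketched as follows; the manifest paths follow the project's repository layout and may differ between releases:

```shell
# Fetch the operator project.
git clone https://github.com/pravega/zookeeper-operator.git
cd zookeeper-operator

# CRD, RBAC, and the operator Deployment (paths are assumptions
# based on the repository layout; check your checked-out release).
kubectl create -f deploy/crds
kubectl create -f deploy/default_ns/rbac.yaml
kubectl create -f deploy/default_ns/operator.yaml

# The operator pod should come up before you apply a ZookeeperCluster CR.
kubectl get pods -l name=zookeeper-operator
```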
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper
spec:
  replicas: 3
  image:
    repository: pravega/zookeeper
    tag: 0.2.9
  storageType: persistence
  persistence:
    reclaimPolicy: Delete
    spec:
      storageClassName: "rbd"
      resources:
        requests:
          storage: 8Gi
Apply the CR and the operator creates the Zookeeper pods; scaling is done by patching the CR.
kubectl patch zk zookeeper --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value":4}]'
Deploy Zookeeper with KUDO
KUDO is a framework for building Kubernetes operators. Install KUDO, initialise it, then install the built‑in Zookeeper operator, specifying a storage class.
brew install kudo
kubectl kudo init
kubectl kudo install zookeeper --instance=zookeeper-instance -p STORAGE_CLASS=rbd
Scaling is performed by updating the instance parameter:
kubectl kudo update --instance=zookeeper-instance -p NODE_COUNT=4
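After an update, the rollout can be followed through KUDO's plan status (the instance name matches the install command above; the generated StatefulSet name is an assumption about KUDO's naming convention):

```shell
# Show the progress of the deploy plan for this instance.
kubectl kudo plan status --instance=zookeeper-instance

# The underlying StatefulSet should now report 4 replicas.
kubectl get statefulsets
kubectl get pods -l kudo.dev/instance=zookeeper-instance
```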