
Deploy a Redis Cluster on Kubernetes with StatefulSet and Headless Service

Learn how to deploy a Redis cluster on Kubernetes by using a StatefulSet with a headless Service, configuring persistent storage, creating ConfigMaps, initializing the cluster with redis-cli, and exposing it via a regular Service, complete with step‑by‑step commands and YAML examples.

MaGe Linux Operations

In this post we deploy a Redis cluster on Kubernetes, exploring several Kubernetes features and details along the way.

Note: background on Redis Cluster itself is assumed and not covered here.

1. Problem Analysis

Essentially, deploying a Redis cluster on Kubernetes is not much different from deploying a regular application, but several issues need attention:

Redis is a stateful application. When Redis is deployed as Pods, each Pod holds different cached data and its IP may change, so an ordinary Deployment and Service will cause problems. Use a StatefulSet combined with a Headless Service instead.

Data persistence. Although Redis is memory‑based, it relies on disk for persistence. In a cluster, a shared file system (such as NFS) exposed through PersistentVolumes (PVs) is needed so that every Pod can keep its data on the same storage backend.

2. Concept Introduction

Before starting, let's introduce a few concepts and principles.

1. Headless Service

A Headless Service is a Service without a Cluster IP. In Kubernetes DNS, it resolves to the list of IPs of all associated Pods instead of a single Cluster IP.

2. StatefulSet

A StatefulSet is a Kubernetes resource designed for stateful applications. It can be seen as a variant of Deployment/RC with the following characteristics:

Each Pod managed by a StatefulSet has a unique stable network identity generated in a predictable order (e.g., redis‑0, redis‑1).

The start/stop order of Pods is strictly controlled; the Nth Pod is created only after the previous N‑1 Pods are ready.

Pods use stable persistent storage, and the associated PV is not deleted when the Pod is removed.

A StatefulSet must be used together with a Headless Service, which adds a DNS layer that maps each Pod to a distinct hostname of the form $(podname).$(headless service name).

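To make the naming pattern concrete, the sketch below simply prints the stable in‑cluster DNS names a six‑replica StatefulSet would get; the names redis-app, redis-service, and the default namespace are taken from the YAML used later in this article:

```shell
# Build the list of stable DNS names the headless Service publishes,
# following the $(podname).$(service).$(namespace).svc.cluster.local pattern.
names=""
for i in 0 1 2 3 4 5; do
  names="$names redis-app-$i.redis-service.default.svc.cluster.local"
done
echo $names
```

Inside the cluster, each of these names resolves directly to the corresponding Pod's IP rather than to a single Cluster IP.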

3. Solution

The deployment design uses a StatefulSet together with a Headless Service. The configuration steps are roughly as follows:

Configure a shared NFS file system.

Create PersistentVolume (PV) and PersistentVolumeClaim (PVC).

Create a ConfigMap.

Create a Headless Service.

Create a StatefulSet.

Initialize the Redis cluster.

4. Actual Operation

To simplify, this example uses a regular Volume instead of PV/PVC.
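For reference, if you do want durable storage as the plan above describes, a minimal NFS‑backed PV/PVC pair might look like the sketch below. The NFS server address, export path, and size are placeholders, not values from this deployment:

```yaml
# Hypothetical NFS-backed PersistentVolume (server and path are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.100
    path: /data/redis
---
# Claim that a Pod (or a StatefulSet volumeClaimTemplate) could bind to
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```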

1. Create ConfigMap

First create the redis.conf configuration file:

appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Then create the ConfigMap:

kubectl create configmap redis-conf --from-file=redis.conf

2. Create Headless Service

apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster

3. Create StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-app
spec:
  serviceName: "redis-service"
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: "registry.cn-qingdao.aliyuncs.com/gold-faas/gold-redis:1.0"
        command:
          - "redis-server"
        args:
          - "/etc/redis/redis.conf"
          - "--protected-mode"
          - "no"
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        ports:
        - name: redis
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379
          protocol: "TCP"
        volumeMounts:
        - name: "redis-conf"
          mountPath: "/etc/redis"
        - name: "redis-data"
          mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"
        configMap:
          name: "redis-conf"
          items:
          - key: "redis.conf"
            path: "redis.conf"
      - name: "redis-data"
        emptyDir: {}

Note: the original manifest used apiVersion: apps/v1beta1, which has been removed from current Kubernetes versions; apps/v1 requires the explicit spec.selector shown above.

4. Initialize Redis Cluster

After the StatefulSet is created, six pods are running but the cluster is not yet initialized. Use the official redis-trib tool (now integrated into redis-cli) to set up the cluster.

Create a management pod:

kubectl run -i --tty redis-cluster-manager --image=ubuntu --restart=Never -- /bin/bash

Inside the pod, install tools and compile Redis:

apt-get update && apt-get install -y wget gcc make
wget http://download.redis.io/releases/redis-5.0.3.tar.gz
tar -xvzf redis-5.0.3.tar.gz
cd redis-5.0.3 && make

Copy the compiled client, src/redis-cli, to /usr/local/bin for convenience.

Obtain the IPs of the six pods using nslookup and the StatefulSet DNS pattern, e.g.:

nslookup redis-app-0.redis-service
# returns 172.17.0.10

Initialize the three master nodes (0,1,2):

redis-cli --cluster create 172.17.0.10:6379 172.17.0.11:6379 172.17.0.12:6379

Then add the slave nodes (3,4,5) using the master IDs returned from the previous step:

redis-cli --cluster add-node 172.17.0.13:6379 172.17.0.10:6379 --cluster-slave --cluster-master-id adf443a4d33c4db2c0d4669d61915ae6faa96b46
redis-cli --cluster add-node 172.17.0.14:6379 172.17.0.11:6379 --cluster-slave --cluster-master-id 6e5adcb56a871a3d78343a38fcdec67be7ae98f8
redis-cli --cluster add-node 172.17.0.16:6379 172.17.0.12:6379 --cluster-slave --cluster-master-id c061e37c5052c22f056fff2a014a9f63c3f47ca0

Verify the cluster status by connecting to any node with the -c flag:

redis-cli -c -h 172.17.0.10 -p 6379 cluster info
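If you want to script this health check, one approach is to grep the cluster_state field out of the cluster info output. The sketch below runs against a captured sample of the fields for illustration; in practice you would assign the real redis-cli output to INFO as shown in the comment:

```shell
# Sample of the fields `cluster info` returns (captured for illustration);
# in practice: INFO=$(redis-cli -c -h 172.17.0.10 -p 6379 cluster info)
INFO='cluster_state:ok
cluster_known_nodes:6
cluster_size:3'

# Extract the cluster_state value and report health
state=$(printf '%s\n' "$INFO" | awk -F: '/^cluster_state/ {print $2}')
if [ "$state" = "ok" ]; then
  echo "cluster healthy"
else
  echo "cluster NOT healthy"
fi
```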

5. Create Service

To expose the cluster to other services, create a regular Service (different from the headless one):

apiVersion: v1
kind: Service
metadata:
  name: gold-redis
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    protocol: "TCP"
    port: 6379
    targetPort: 6379
  selector:
    app: redis
    appCluster: redis-cluster

Test the Service with any Redis client. With that, all steps are complete and the cluster is ready for use.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: cloud native, Kubernetes, Redis, devops, StatefulSet, Headless Service
Written by

MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
