
Step-by-Step Guide to Building a Kafka 3.0 Cluster with KRaft

This tutorial walks through planning roles, preparing the environment, configuring KRaft, formatting storage, and launching a Kafka 3.0 cluster with scripts for both startup and graceful shutdown, providing all commands and explanations needed for a production-ready setup.

Sanyou's Java Diary

1. Kafka Cluster Role Planning

Kafka 3.0 can eliminate ZooKeeper by using the KRaft protocol for controller election. In Kafka 2.0, every node is a plain broker, one of them is elected controller, and cluster metadata lives in ZooKeeper; in Kafka 3.0, selected brokers additionally take the controller role and metadata is stored in Kafka's own logs. For the four-node cluster built here, three nodes are designated controllers (an odd number, so the quorum can always reach a majority).

Left diagram (Kafka 2.0): all nodes are brokers; one broker is elected controller, and cluster metadata is stored in ZooKeeper.

Right diagram (Kafka 3.0): among four brokers, three are assigned the controller role; one of them becomes the active controller while the others stand by. No ZooKeeper is needed; metadata is kept in Kafka's own logs.

Host‑role mapping (host, IP, role, node.id):

zimug1 – 192.168.1.111 – broker,controller – node.id=1

zimug2 – 192.168.1.112 – broker,controller – node.id=2

zimug3 – 192.168.1.113 – broker,controller – node.id=3

zimug4 – 192.168.1.114 – broker – node.id=4

2. Preparation

Create a directory for Kafka 3 installation under a non‑root user and download the 3.1.0 package.

Install JDK 11 or 17 (Java 8 is deprecated as of Kafka 3.0 and scheduled for removal, so prefer a newer JDK).

Create a directory for persistent log data and ensure the Kafka user has read/write permissions.

Open ports 9092 and 9093 on all servers (used by brokers and controllers).

Extract the package to /home/kafka (the -C flag of tar sets the destination directory).

<code>mkdir kafka3-setup
cd kafka3-setup
wget https://dlcdn.apache.org/kafka/3.1.0/kafka_2.13-3.1.0.tgz</code>
<code>tar -xzvf ./kafka_2.13-3.1.0.tgz -C /home/kafka</code>
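
Opening ports 9092 and 9093 on each server might look like the following, assuming firewalld (an assumption on my part; adapt the commands to your distribution's firewall tool):

```shell
# open the broker (9092) and controller (9093) ports; assumes firewalld is in use
sudo firewall-cmd --permanent --add-port=9092/tcp
sudo firewall-cmd --permanent --add-port=9093/tcp
sudo firewall-cmd --reload
```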

3. Modify KRaft Configuration

Edit /home/kafka/kafka_2.13-3.1.0/config/kraft/server.properties on each host and set the following parameters (the values shown are for zimug1; node.id and the listener hostnames differ per host):

<code>node.id=1
process.roles=broker,controller
listeners=PLAINTEXT://zimug1:9092,CONTROLLER://zimug1:9093
advertised.listeners=PLAINTEXT://:9092
controller.quorum.voters=1@zimug1:9093,2@zimug2:9093,3@zimug3:9093
log.dirs=/home/kafka/data/kafka3</code>

node.id uniquely identifies a node (equivalent to broker.id in older versions).

process.roles defines whether the node acts as broker, controller, or both.

listeners assigns port 9092 for broker traffic and 9093 for controller traffic.

advertised.listeners specifies the address clients use to reach the broker (LAN case uses PLAINTEXT://:9092 ).

controller.quorum.voters lists all nodes that participate in controller election.

log.dirs points to the directory created in the preparation step.
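
Each host's file differs only in node.id and the listener hostnames, and zimug4, being broker-only, also drops the controller role and the CONTROLLER listener. As a sketch (the template content mirrors the sample above; the output file names are my own convention), the four files can be stamped out like this:

```shell
#!/bin/bash
# Write a template matching the zimug1 sample, then derive the other hosts' files.
cat > server-template.properties <<'EOF'
node.id=1
process.roles=broker,controller
listeners=PLAINTEXT://zimug1:9092,CONTROLLER://zimug1:9093
advertised.listeners=PLAINTEXT://:9092
controller.quorum.voters=1@zimug1:9093,2@zimug2:9093,3@zimug3:9093
log.dirs=/home/kafka/data/kafka3
EOF

# combined broker+controller hosts: only node.id and the listener hostnames change
for i in 1 2 3; do
  sed -e "s/^node.id=.*/node.id=$i/" \
      -e "/^listeners=/s/zimug1/zimug$i/g" \
      server-template.properties > "server-zimug$i.properties"
done

# broker-only host zimug4: no controller role, no CONTROLLER listener
sed -e "s/^node.id=.*/node.id=4/" \
    -e "s/^process.roles=.*/process.roles=broker/" \
    -e "s|^listeners=.*|listeners=PLAINTEXT://zimug4:9092|" \
    server-template.properties > server-zimug4.properties
```

Note that controller.quorum.voters stays identical on every node, including the broker-only one.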

4. Format Storage Directory

Generate a cluster ID once (the command can run on any node), then use that same ID to format the log directory on every node.

<code>/home/kafka/kafka_2.13-3.1.0/bin/kafka-storage.sh random-uuid</code>

Assume the generated ID is SzIhECn-QbCLzIuNxk1A2A .

<code>/home/kafka/kafka_2.13-3.1.0/bin/kafka-storage.sh format \
  -t SzIhECn-QbCLzIuNxk1A2A \
  -c /home/kafka/kafka_2.13-3.1.0/config/kraft/server.properties</code>

After formatting, a meta.properties file appears in log.dirs, containing node.id, cluster.id, and the version.

<code>#
#Tue Apr 12 07:39:07 CST 2022
node.id=1
version=1
cluster.id=SzIhECn-QbCLzIuNxk1A2A</code>
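
Because kafka-storage.sh writes meta.properties locally, the format command has to be executed on every node with the same cluster ID. A sketch in the same ssh loop style as the scripts in this guide (assumes password-less SSH to the zimug hosts):

```shell
#!/bin/bash
# format the KRaft storage directory on every node with the shared cluster ID
CLUSTER_ID=SzIhECn-QbCLzIuNxk1A2A
for host in zimug1 zimug2 zimug3 zimug4; do
  ssh -T $host /home/kafka/kafka_2.13-3.1.0/bin/kafka-storage.sh format \
    -t $CLUSTER_ID \
    -c /home/kafka/kafka_2.13-3.1.0/config/kraft/server.properties
done
```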

5. Start Cluster and Basic Test

Save the following script, give it execute permission, and run it after configuring password‑less SSH to all four hosts (zimug1 zimug2 zimug3 zimug4).

<code>#!/bin/bash
kafkaServers='zimug1 zimug2 zimug3 zimug4'
for kafka in $kafkaServers; do
  ssh -T $kafka <<EOF
  nohup /home/kafka/kafka_2.13-3.1.0/bin/kafka-server-start.sh /home/kafka/kafka_2.13-3.1.0/config/kraft/server.properties 1>/dev/null 2>&1 &
EOF
  echo "Started kafka on $kafka..."
done
sleep 5
</code>

Adjust the /home/kafka/kafka_2.13-3.1.0 path if your installation directory differs.
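
As a basic sanity check (the topic name below is an arbitrary choice of mine), create a replicated topic, inspect its placement, and exchange a message through it:

```shell
cd /home/kafka/kafka_2.13-3.1.0
# create a topic replicated across three brokers and check partition placement
bin/kafka-topics.sh --create --topic smoke-test --bootstrap-server zimug1:9092 \
  --partitions 3 --replication-factor 3
bin/kafka-topics.sh --describe --topic smoke-test --bootstrap-server zimug1:9092
# type a message into the producer, then read it back via another broker (Ctrl+C to exit)
bin/kafka-console-producer.sh --bootstrap-server zimug1:9092 --topic smoke-test
bin/kafka-console-consumer.sh --bootstrap-server zimug2:9092 --topic smoke-test --from-beginning
```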

6. One‑Click Stop Script

Use the following script to stop all nodes in the same manner.

<code>#!/bin/bash
kafkaServers='zimug1 zimug2 zimug3 zimug4'
for kafka in $kafkaServers; do
  ssh -T $kafka <<EOF
  cd /home/kafka/kafka_2.13-3.1.0
  bin/kafka-server-stop.sh
EOF
  echo "Stopped kafka on $kafka..."
done
sleep 5
</code>
Tags: Big Data, Streaming, Kafka, Cluster Setup, KRaft