Getting Started with Kafka’s New KRaft Mode: A Step‑by‑Step Guide
This article introduces Apache Kafka’s KRaft (Kafka Raft) mode, explains its architectural differences from ZooKeeper‑based deployments, details essential configuration parameters, and provides a complete step‑by‑step procedure—including commands and utility tools—to set up and operate a KRaft cluster.
KRaft Overview
Apache Kafka 3.0 introduces KRaft (Kafka Raft metadata mode), which removes the dependency on Apache ZooKeeper. Cluster metadata is stored in an internal, Raft-replicated metadata log managed by a quorum of controller nodes.
Architecture Changes
In KRaft mode the ZooKeeper ensemble is replaced by a set of controller nodes forming a Raft quorum. Brokers retrieve metadata from the controller quorum instead of ZooKeeper. Controllers can run as separate processes or be co-located with brokers (combined nodes). Unlike the ZooKeeper-era controller, which pushed metadata updates out to brokers, in KRaft the brokers pull metadata updates from the active (leader) controller.
Key Configuration Parameters
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
process.roles
broker – the server acts only as a broker.
controller – the server acts only as a controller.
broker,controller – the server acts as both (a combined node).
If omitted, the cluster runs in legacy ZooKeeper mode.
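As a concrete example, a dedicated broker node's configuration might look like the following minimal sketch (the node.id, listener address, and single-voter quorum are illustrative):
process.roles=broker
node.id=2
listeners=PLAINTEXT://localhost:9092
controller.listener.names=CONTROLLER
controller.quorum.voters=1@localhost:9093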
controller.quorum.voters
Specifies the set of controller nodes that participate in the Raft quorum. The format is id@host:port and must be present in every broker and controller configuration. Example for three controllers:
process.roles=controller
node.id=1
listeners=CONTROLLER://controller1.example.com:9093
[email protected]:9093,[email protected]:9093,[email protected]:9093Running a KRaft Cluster
Running a KRaft Cluster
Generate a unique cluster ID:
$ ./bin/kafka-storage.sh random-uuid
xtzWWN4bTjitpL3kfd9s5g
Format the storage directories on each node using the same cluster ID:
$ ./bin/kafka-storage.sh format -t xtzWWN4bTjitpL3kfd9s5g -c ./config/kraft/server.properties
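Formatting writes a meta.properties file into each configured log directory; you can confirm the cluster ID was applied (assuming the default log.dirs of /tmp/kraft-combined-logs) with something like:
$ cat /tmp/kraft-combined-logs/meta.properties
cluster.id=xtzWWN4bTjitpL3kfd9s5g
version=1
node.id=1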
Start the Kafka server with the KRaft configuration:
$ ./bin/kafka-server-start.sh ./config/kraft/server.properties
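To run the server in the background instead, kafka-server-start.sh accepts the usual -daemon flag:
$ ./bin/kafka-server-start.sh -daemon ./config/kraft/server.properties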
Typical Client Operations
$ ./bin/kafka-topics.sh --create --topic foo --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
$ ./bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic foo
$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic foo --group foo_group
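To check where the topic's partition landed (leader, replicas, ISR), kafka-topics.sh can also describe it:
$ ./bin/kafka-topics.sh --describe --topic foo --bootstrap-server localhost:9092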
Utility Tools
kafka-dump-log.sh can decode Raft metadata logs:
$ ./bin/kafka-dump-log.sh --cluster-metadata-decoder --skip-record-metadata --files /tmp/kraft-combined-logs/@metadata-0/*.log
kafka-metadata-shell.sh provides a ZooKeeper-like shell for the internal @metadata topic:
$ ./bin/kafka-metadata-shell.sh --snapshot /tmp/kraft-combined-logs/@metadata-0/00000000000000000000.log
>> ls /
brokers local metadataQuorum topicIds topics
>> ls /topics
foo
>> cat /topics/foo/0/data
{
"partitionId" : 0,
"topicId" : "5zoAlv-xEh9xRANKXt1Lbg",
"replicas" : [ 1 ],
"isr" : [ 1 ],
"leader" : 1,
"leaderEpoch" : 0,
"partitionEpoch" : 0
}
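Newer Kafka releases (3.3 and later, via KIP-836) additionally ship kafka-metadata-quorum.sh, which reports quorum health directly; a quick sketch of its use:
$ ./bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status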
Considerations
Switching between ZooKeeper and KRaft modes requires reformatting the storage directories; it cannot be done in place.
Controller nodes should be deployed in odd numbers (e.g., 3 or 5) because Raft requires a majority of voters to stay up; an example voter list appears below.
Combined broker‑controller nodes simplify deployment but share JVM resources, which may affect isolation and fault tolerance.
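As a sketch of the odd-number rule above, a five-controller quorum (hypothetical host names) remains available with up to two controllers down:
[email protected]:9093,[email protected]:9093,[email protected]:9093,[email protected]:9093,[email protected]:9093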