
Understanding Kafka Producer: Architecture, Data Structures, Serialization, Partitioning, and Buffering

This article provides a comprehensive overview of Kafka's Producer side, covering its architecture, the ProducerRecord data structure, serialization mechanisms, partitioning logic, and the accumulator buffer, while comparing old and new Producer clients and illustrating key configurations with code examples.


We first briefly recap the core parameters of the Kafka broker and then focus on the Producer side, aiming to build a complete understanding of how it works.

1. Producer Architecture

The basic flow of message sending from the Producer is illustrated in the diagram below.

On its way out, a record passes through a Serializer, a Partitioner, and an Accumulator (the message buffer); an optional Interceptor may also run (not covered here).
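As a concrete starting point, the sketch below builds the kind of configuration such a producer is constructed with. The broker address and serializer choices are placeholder assumptions; creating the actual KafkaProducer (shown commented out) would additionally require kafka-clients on the classpath and a running broker.

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Builds the configuration a KafkaProducer would be constructed with.
    // "localhost:9092" is a placeholder broker address, not a real endpoint.
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // brokers, not Zookeeper
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("buffer.memory", "33554432"); // 32 MB accumulator (default)
        props.put("batch.size", "16384");       // 16 KB per batch (default)
        props.put("linger.ms", "0");            // send as soon as possible
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConfig();
        System.out.println(props.getProperty("bootstrap.servers"));
        // With kafka-clients available, the producer would then be created as:
        // KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    }
}
```

The sections below walk through what each of these settings controls.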

2. Client and Data Structures

2.1 New and Old Producer

Kafka 0.8.2 introduced a new Producer client (org.apache.kafka.clients.producer.KafkaProducer), which is the recommended client from version 0.9.0 onward. The old client (kafka.javaapi.producer.Producer) obtains metadata via Zookeeper, while the new client connects directly to the brokers and sends messages asynchronously.

org.apache.kafka.clients.producer.KafkaProducer<K,V>
kafka.javaapi.producer.Producer<K,V>

2.2 Message Data Structure

Kafka abstracts a message to be sent as a ProducerRecord object with the following fields:

public class ProducerRecord&lt;K, V&gt; {
    private final String topic;      // target topic
    private final Integer partition; // target partition
    private final Headers headers;   // message headers
    private final K key;             // message key
    private final V value;           // message value
    private final Long timestamp;    // message timestamp
    // constructors and methods omitted
}

The six core attributes are topic, partition, headers, key, value, and timestamp; headers were added in Kafka 0.11.x.
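Of these six attributes, only topic and value are mandatory. The simplified stand-in below (a sketch, not the real ProducerRecord class) illustrates the two most common constructor shapes and how unset fields stay null until later stages fill them in.

```java
public class ProducerRecordSketch {
    // Simplified stand-in for org.apache.kafka.clients.producer.ProducerRecord.
    // A null partition is later chosen by the Partitioner; a null timestamp is
    // filled in by the client or broker depending on the topic's configuration.
    static class Record<K, V> {
        final String topic;
        final Integer partition;
        final K key;
        final V value;
        final Long timestamp;

        Record(String topic, V value) {                 // topic + value only
            this(topic, null, null, value, null);
        }
        Record(String topic, K key, V value) {          // topic + key + value
            this(topic, null, key, value, null);
        }
        Record(String topic, Integer partition, K key, V value, Long timestamp) {
            this.topic = topic;
            this.partition = partition;
            this.key = key;
            this.value = value;
            this.timestamp = timestamp;
        }
    }
}
```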

3. Serialization Mechanism

3.1 Serialization and Deserialization

Producers use a Serializer to convert keys and values to byte arrays for network transmission, while consumers use a Deserializer to reconstruct objects from those bytes. On the producer side, configuration is as simple as setting key.serializer and value.serializer.

props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

3.2 Default Serializers

Kafka ships with many built-in serializers, such as ByteArraySerializer, StringSerializer, and LongSerializer. Custom serializers can be written by implementing the org.apache.kafka.common.serialization.Serializer interface.
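To make the contract concrete without pulling in kafka-clients, the sketch below defines a minimal stand-in for the Serializer interface (the real one also has configure() and close() default methods) and shows the conversions the built-in StringSerializer and LongSerializer perform: UTF-8 encoding and 8 big-endian bytes, respectively.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SerializerSketch {
    // Minimal stand-in for org.apache.kafka.common.serialization.Serializer<T>.
    interface Serializer<T> {
        byte[] serialize(String topic, T data);
    }

    // What StringSerializer does by default: UTF-8 encode the string.
    static final Serializer<String> STRING = (topic, s) ->
            s == null ? null : s.getBytes(StandardCharsets.UTF_8);

    // What LongSerializer does: a Long becomes 8 big-endian bytes.
    static final Serializer<Long> LONG = (topic, l) ->
            l == null ? null : ByteBuffer.allocate(8).putLong(l).array();

    public static void main(String[] args) {
        System.out.println(STRING.serialize("demo-topic", "hello").length); // byte count
    }
}
```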

4. Message Partitioning Mechanism

4.1 Topic Partitions

Partitions provide load balancing and high throughput; each message is sent to a specific partition where ordering is guaranteed.

4.2 Partitioner

If a partition is not explicitly set in the ProducerRecord, the Partitioner determines the target partition. The default implementation is org.apache.kafka.clients.producer.internals.DefaultPartitioner.

4.3 Partitioning Strategy

When the key is null, the classic default partitioner spreads messages round-robin across available partitions (since Kafka 2.4, the default for keyless records is a "sticky" strategy that fills one batch per partition at a time); when a key is present, the default partitioner hashes the key with Murmur2 and takes the result modulo the number of partitions, so messages with the same key always land in the same partition.
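The keyed path can be sketched as below. The partitionForKey method shows the hash-and-mod step; the murmur2 implementation mirrors the shape of Kafka's internal Utils.murmur2, but treat exact hash values as an assumption of this sketch rather than a guarantee of matching the real client byte-for-byte.

```java
import java.nio.charset.StandardCharsets;

public class PartitionerSketch {
    // Key-based partitioning as the default partitioner performs it:
    // partition = toPositive(murmur2(keyBytes)) % numPartitions.
    public static int partitionForKey(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return toPositive(murmur2(keyBytes)) % numPartitions;
    }

    // Clears the sign bit so the modulo result is never negative.
    static int toPositive(int n) {
        return n & 0x7fffffff;
    }

    // Murmur2 hash over a byte array (shape follows Kafka's Utils.murmur2).
    static int murmur2(byte[] data) {
        int length = data.length;
        int seed = 0x9747b28c;
        final int m = 0x5bd1e995;
        final int r = 24;
        int h = seed ^ length;
        int length4 = length / 4;
        for (int i = 0; i < length4; i++) {
            int i4 = i * 4;
            int k = (data[i4] & 0xff) + ((data[i4 + 1] & 0xff) << 8)
                  + ((data[i4 + 2] & 0xff) << 16) + ((data[i4 + 3] & 0xff) << 24);
            k *= m; k ^= k >>> r; k *= m;
            h *= m; h ^= k;
        }
        switch (length % 4) {          // mix in the trailing 1-3 bytes
            case 3: h ^= (data[(length & ~3) + 2] & 0xff) << 16;
            case 2: h ^= (data[(length & ~3) + 1] & 0xff) << 8;
            case 1: h ^= data[length & ~3] & 0xff; h *= m;
        }
        h ^= h >>> 13; h *= m; h ^= h >>> 15;
        return h;
    }

    public static void main(String[] args) {
        // Same key, same partition, every time.
        System.out.println(partitionForKey("order-42", 6));
    }
}
```

Because the mapping depends on the current number of partitions, adding partitions to a topic later changes where existing keys land, which is why key-based ordering guarantees only hold while the partition count is stable.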

5. Message Buffer (Accumulator)

5.1 Buffer Introduction

After serialization and partitioning, messages are placed into the client‑side buffer (Accumulator) before being sent by a Sender thread.

The buffer size is controlled by buffer.memory (default 32 MB). If the buffer fills, send() blocks for up to max.block.ms (default 60 seconds) and then throws a TimeoutException.

5.2 Batch Sending

Messages are grouped into per-partition batches; the batch size is controlled by batch.size (default 16 KB). A batch is sent either when it fills up or when the linger time linger.ms (default 0, i.e. send as soon as the Sender thread is available) expires.
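The size-based trigger can be modeled with a short stdlib-only sketch. This toy accumulator seals the open batch once the next record would push it past batchSize; the real accumulator additionally seals batches when linger.ms expires and manages per-partition queues, which this sketch deliberately omits.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    // Toy model of the accumulator's size-based trigger: records append to an
    // open batch; once the byte total would exceed batchSize, the batch is
    // sealed (conceptually handed to the Sender) and a new one starts.
    private final int batchSize;
    private final List<List<byte[]>> sealed = new ArrayList<>();
    private List<byte[]> open = new ArrayList<>();
    private int openBytes = 0;

    public BatchSketch(int batchSize) {
        this.batchSize = batchSize;
    }

    public void append(byte[] record) {
        if (openBytes + record.length > batchSize && !open.isEmpty()) {
            sealed.add(open);          // batch full: ready for the Sender
            open = new ArrayList<>();
            openBytes = 0;
        }
        open.add(record);
        openBytes += record.length;
    }

    public int sealedBatches() {
        return sealed.size();
    }
}
```

Batching is the central throughput lever: larger batches amortize per-request overhead and compress better, at the cost of latency when traffic is light, which is exactly the trade-off linger.ms exists to tune.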

6. Summary

This article introduced the Producer client and the ProducerRecord structure, then detailed the roles of serializers, partitioners, and the accumulator buffer. Some advanced topics such as interceptors and the Sender thread were omitted and will be covered in future posts.

Tags: backend development, serialization, Kafka, producer, partitioning, accumulator
Written by Big Data Technology Architecture

Exploring Open Source Big Data and AI Technologies
