
Integrating Apache Kafka with Spring Boot: Concepts, Architecture, and Code Example

This article introduces Apache Kafka, explains its publish‑subscribe and point‑to‑point models, outlines why it is used for decoupling and high‑availability, and provides a step‑by‑step Spring Boot integration guide with configuration, producer and consumer code samples.


Spring Boot is a mainstream microservice framework with a mature ecosystem. This article is a quick-start guide to integrating common middleware with it, focusing on Apache Kafka.

Message Communication Models

There are two basic models: publish-subscribe (pub-sub), in which every subscriber receives a copy of each message (one-to-many), and point-to-point, in which each message is delivered to exactly one consumer.
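Kafka realizes both models through consumer groups: listeners in the same group split a topic's partitions among themselves (point-to-point), while listeners in different groups each receive every message (pub-sub). A minimal sketch of the idea, assuming a hypothetical topic named `orders` and the spring-kafka dependency introduced later in this article:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class ModelDemo {

    // Point-to-point: both listeners share group "billing", so each
    // message on "orders" is handled by only one of them.
    @KafkaListener(topics = "orders", groupId = "billing")
    public void billingA(String msg) { System.out.println("billing-A: " + msg); }

    @KafkaListener(topics = "orders", groupId = "billing")
    public void billingB(String msg) { System.out.println("billing-B: " + msg); }

    // Pub-sub: a different group gets its own copy of every message.
    @KafkaListener(topics = "orders", groupId = "audit")
    public void audit(String msg) { System.out.println("audit: " + msg); }
}
```

Whether a deployment behaves as a queue or as a broadcast is therefore a property of how group IDs are assigned, not of the topic itself.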

Introduction to Kafka

Kafka, originally developed at LinkedIn and now maintained by the Apache Software Foundation, is an open-source stream-processing platform written in Scala and Java. It provides a unified, high-throughput, low-latency solution for real-time data; its persistence layer is essentially a distributed, transaction-log-based publish/subscribe queue.

Kafka efficiently handles real-time streams and integrates with Storm, HBase, Spark, and similar systems. It runs on a cluster of one or more servers and offers four core APIs: Producer, Consumer, Streams, and Connect.

Why Use Kafka?

Peak shaving: buffers burst traffic to protect downstream services from overload.

System decoupling: loose coupling reduces direct dependencies and development cost.

Asynchronous communication: messages can be queued without immediate processing.

Recoverability: queued messages survive process failures and are processed after recovery.

Business Scenarios

Non‑core synchronous logic that can be executed asynchronously.

System log collection and forwarding to ELK stacks.

Data transfer between big‑data platforms.

Basic Architecture

Kafka runs on a cluster of one or more servers; partitions are distributed across broker nodes.

1. The producer sends messages to a broker.
2. The partition leader on that broker writes the messages to the appropriate topic and stores them with offsets.
3. The leader replicates the data to follower brokers.
4. Consumers subscribe to partitions and consume the messages.
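The four steps above can be sketched with the plain Kafka client API (a sketch assuming a broker on localhost:9092, a pre-existing topic `demo-topic`, and the `kafka-clients` library on the classpath; the Spring Boot integration shown later hides most of this boilerplate):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RawClientDemo {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Steps 1-3: the producer sends to the partition leader, which
        // persists the record with an offset and replicates it to followers.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>("demo-topic", "user-1", "hello"));
        }

        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", "demo-group");
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Step 4: a consumer subscribed to the topic polls its assigned partitions.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(List.of("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        r.partition(), r.offset(), r.value());
            }
        }
    }
}
```

Running this requires a reachable broker; it is included only to make the replication and offset mechanics concrete.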

Common Terminology

Broker: handles client requests and persists messages; typically deployed on separate machines.

Topic: logical container for messages, used to separate business domains.

Partition: ordered, immutable sequence of messages; a topic can have multiple partitions.

Offset: monotonically increasing position of a message within a partition.

Replica: copies of a message for redundancy; includes leader and follower replicas.

Leader: the replica that handles reads and writes for a partition.

Follower: replicates data from the leader; can become leader on failure.

Producer: application that publishes messages to a topic.

Consumer: application that subscribes to topics to receive messages.

Consumer Offset: tracks each consumer’s progress; stored in the internal __consumer_offsets topic on the brokers.

Consumer Group: a set of consumer instances that jointly consume partitions for high throughput.

Rebalance: automatic redistribution of partitions among consumers when members join, leave, or fail.

Code Demonstration

External Dependency:

Add the Kafka dependency in pom.xml :

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>

Configuration File:

Configure Kafka parameters in application.yaml :

spring:
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      retries: 3 # retry count on failure
      batch-size: 16384
      buffer-memory: 33554432
      key-serializer: org.apache.kafka.common.serialization.StringSerializer # key serializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: tomge-consumer-group # default consumer group id
      auto-offset-reset: earliest
      enable-auto-commit: true
      auto-commit-interval: 100
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

Spring Boot binds these settings to org.springframework.boot.autoconfigure.kafka.KafkaProperties, and auto-configuration uses them to initialize the necessary beans.
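If you prefer explicit beans over the YAML properties, the equivalent producer setup can be declared manually. A sketch mirroring the configuration above (not needed in the common case, since auto-configuration already provides a KafkaTemplate):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.RETRIES_CONFIG, 3);               // mirrors producer.retries
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);        // mirrors producer.batch-size
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L); // mirrors producer.buffer-memory
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
```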

Sending Messages:

Spring Boot provides KafkaTemplate for message production. A typical send method signature is:

public ListenableFuture<SendResult<K, V>> send(String topic, @Nullable V data) { ... }

A REST endpoint demonstrates sending a new user message:

@GetMapping("/add_user")
public Object add() {
    try {
        Long id = Long.valueOf(new Random().nextInt(1000));
        User user = User.builder().id(id).userName("TomGE").age(29).address("Shanghai").build();
        // Send asynchronously; the future completes when the broker acknowledges the record.
        ListenableFuture<SendResult<String, String>> future =
                kafkaTemplate.send(addUserTopic, JSON.toJSONString(user));
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onFailure(Throwable throwable) {
                System.out.println("Failed to send message: " + throwable.getMessage());
            }
            @Override
            public void onSuccess(SendResult<String, String> sendResult) {
                String topic = sendResult.getRecordMetadata().topic();
                int partition = sendResult.getRecordMetadata().partition();
                long offset = sendResult.getRecordMetadata().offset();
                System.out.println(String.format(
                        "Message sent successfully, topic: %s, partition: %s, offset: %s",
                        topic, partition, offset));
            }
        });
        return "Message sent";
    } catch (Exception e) {
        e.printStackTrace();
        return "Failed to send message";
    }
}

Note: By default Kafka auto‑creates topics with one partition; this can be changed via num.partitions in server.properties . In production, auto‑creation is usually disabled.
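With auto-creation disabled, a topic can instead be declared as a bean; spring-kafka's KafkaAdmin then creates it on application startup if it does not exist. A sketch for the add_user topic above, assuming spring-kafka 2.3+ (which provides TopicBuilder):

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfig {

    // Declares "add_user" with 3 partitions and 1 replica per partition;
    // KafkaAdmin creates the topic on startup if it is absent.
    @Bean
    public NewTopic addUserTopic() {
        return TopicBuilder.name("add_user")
                .partitions(3)
                .replicas(1)
                .build();
    }
}
```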

Consuming Messages:

Define a consumer class with @KafkaListener to listen to a topic:

@Component
public class UserConsumer {
    @KafkaListener(topics = "add_user")
    public void receiveMessage(String content) {
        System.out.println("Consumed message: " + content);
    }
}
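When the consumer needs message metadata rather than just the payload, the listener method can accept a ConsumerRecord. A sketch (the field accessors come from the kafka-clients API):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class UserRecordConsumer {

    @KafkaListener(topics = "add_user", groupId = "tomge-consumer-group")
    public void receive(ConsumerRecord<String, String> record) {
        // Topic, partition, and offset pinpoint the message's position in the log,
        // which is useful for logging, deduplication, or manual offset management.
        System.out.printf("topic=%s partition=%d offset=%d value=%s%n",
                record.topic(), record.partition(), record.offset(), record.value());
    }
}
```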

Together, the spring-kafka dependency, KafkaTemplate, and @KafkaListener provide end-to-end message production and consumption in Spring Boot.

Demo Project

GitHub repository: https://github.com/aalansehaiyang/spring-boot-bulking (module: spring-boot-bulking-kafka)

Recommended Reading

Why Does MySQL Choose RR as the Default Isolation Level?

Story: Evolution of Database Architecture

35 Pictures of MySQL Tuning

Written by Full-Stack Internet Architecture, introducing full-stack Internet architecture technologies centered on Java.