
Spring Cloud Stream with Apache Kafka – Overview, Programming Model, and Advanced Features (Part 2)

This article explains how Spring Cloud Stream integrates with Apache Kafka, covering its programming model, configuration, code examples, topic provisioning, consumer groups, partitioning, monitoring, error handling, schema evolution, and Kafka Streams support for building robust streaming microservices.

Architects Research Society

Following the first part of the series on Spring and Kafka, this second part focuses on Spring Cloud Stream, a framework that simplifies building message‑driven microservices on top of Apache Kafka.

Spring Cloud Stream provides a type‑safe programming model built on Spring Boot, Spring Integration, Spring Cloud Function, and Project Reactor, allowing developers to define sources, sinks, and processors that map to Kafka topics.

The framework uses a binder abstraction; the Kafka binder connects input and output bindings to Kafka topics, handling publish/subscribe semantics, consumer groups, and partitioning automatically.

To create a new application, use Spring Initializr with the "Cloud Stream" and "Kafka" dependencies, then add the @EnableBinding annotation and appropriate interfaces (Source, Sink, Processor) to bind to topics.
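If you manage the build by hand instead, the same setup amounts to adding the Kafka binder starter. As a sketch for a Maven project (this is the artifact published for the annotation-based generation of Spring Cloud Stream used throughout this article):

```xml
<!-- Spring Cloud Stream together with the Apache Kafka binder -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
```

The binder on the classpath is what tells Spring Cloud Stream to wire the `Source`, `Sink`, or `Processor` bindings to Kafka rather than to another broker.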

Example of a simple processor:

@SpringBootApplication
@EnableBinding(Processor.class)
public class UppercaseProcessor {
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String process(String s) {
        return s.toUpperCase();
    }
}

Configuration of input and output destinations is done via an application.yml (or properties) file:

spring.cloud.stream.bindings:
  input:
    destination: topic1
  output:
    destination: topic2

Spring Cloud Stream also supports native encoding/decoding, auto‑provisioning of topics, consumer groups, partitioning, actuator endpoints for binding control, Micrometer metrics, and health checks.
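Consumer groups and producer-side partitioning, for instance, are enabled purely through configuration. A minimal sketch, extending the earlier bindings (the group name, partition count, and key expression below are illustrative values, not defaults):

```yaml
spring.cloud.stream.bindings:
  input:
    destination: topic1
    group: uppercase-group          # consumers in the same group share the topic's partitions
  output:
    destination: topic2
    producer:
      partition-key-expression: payload   # SpEL expression that selects the partition key
      partition-count: 3
```

With a `group` set, scaling out simply means starting more instances; the binder relies on Kafka's consumer-group rebalancing to divide partitions among them.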

For Kafka Streams, the framework provides a dedicated binder that lets developers write KStream, KTable, or GlobalKTable logic without dealing with low‑level stream topology code.

@SpringBootApplication
@EnableBinding(StreamTableProcessor.class)
public class KafkaStreamsTableJoin {

    @StreamListener
    @SendTo("output")
    public KStream<String, Long> process(@Input("input1") KStream<String, Long> clicks,
                                         @Input("input2") KTable<String, String> regions) {
        return clicks.leftJoin(regions,
                (c, r) -> new RegionWithClicks(r == null ? "UNKNOWN" : r, c),
                Joined.with(Serdes.String(), Serdes.Long(), null))
                .map((k, v) -> new KeyValue<>(v.getRegion(), v.getClicks()))
                .groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
                .reduce(Long::sum)
                .toStream();
    }
}

interface StreamTableProcessor {

    @Input("input1")
    KStream<String, Long> inputStream();

    @Output("output")
    KStream<String, Long> outputStream();

    @Input("input2")
    KTable<String, String> inputTable();
}

The binder also enables interactive queries of state stores, error handling with dead‑letter queues, and schema evolution via Confluent Schema Registry.
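Dead-lettering, for example, is a binder-level consumer setting rather than application code. A minimal sketch (the binding and topic names below are illustrative):

```yaml
spring.cloud.stream.kafka.bindings:
  input:
    consumer:
      enable-dlq: true
      dlq-name: topic1-dlq   # if omitted, defaults to error.<destination>.<group>
```

When a message exhausts its retries, the binder publishes it to the dead-letter topic instead of blocking the partition, so poison messages can be inspected and replayed later.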

Overall, Spring Cloud Stream abstracts away much of the operational boilerplate, allowing developers to focus on business logic while handling topics, serialization, monitoring, and fault tolerance automatically.

Tags: Java, Big Data, Microservices, Streaming, Apache Kafka, Spring Cloud Stream
Written by Architects Research Society

A daily treasure trove for architects, expanding your view and depth. We share enterprise, business, application, data, technology, and security architecture, discuss frameworks, planning, governance, standards, and implementation, and explore emerging styles such as microservices, event‑driven, micro‑frontend, big data, data warehousing, IoT, and AI architecture.
