Integrating Spring Cloud Sleuth with ELK for Distributed Logging in Microservices
This article demonstrates how to set up distributed logging for Spring Cloud microservices using Sleuth for tracing and the ELK stack (Elasticsearch, Logstash, Kibana), covering dependency configuration, service registration, Logback setup, Logstash pipelines, and log querying in Kibana.
In microservice architectures, services are often spread across multiple servers, requiring a distributed logging solution. Spring Cloud provides the Sleuth component for tracing services via logs, and the ELK stack (Elasticsearch, Logstash, Kibana) can be used to collect and visualize those logs.
1. Sleuth Management Service
Add the following Maven dependencies to a dedicated project:
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-ui</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-server</artifactId>
</dependency>

Configure the Eureka server address:
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:1111/eureka/

Add the Zipkin and discovery annotations to the Spring Boot entry class:
package com.wlf.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import zipkin.server.EnableZipkinServer;

@EnableDiscoveryClient
@EnableZipkinServer
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

After starting this service, the Zipkin UI is accessible at http://localhost:9411.
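For the tracing service to register with Eureka and listen on the conventional Zipkin port, its own application.yml also needs a name and port. A minimal sketch (the service name trace-server is an illustrative assumption; 9411 is Zipkin's conventional port, matching the base-url the traced services use below):

```yaml
server:
  port: 9411          # conventional Zipkin port
spring:
  application:
    name: trace-server  # illustrative service name
```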
2. Microservice Side (Being Traced)
Add Sleuth and Zipkin starter dependencies:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>

Configure Sleuth and Zipkin in application.yml:
spring:
  sleuth:
    sampler:
      percentage: 1
  zipkin:
    base-url: http://localhost:9411

The property spring.sleuth.sampler.percentage controls the proportion of requests traced (1 means 100%).
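Note that this property name applies to Spring Cloud Sleuth 1.x, which this article targets. If you are on Sleuth 2.x or later, the property was renamed to spring.sleuth.sampler.probability:

```yaml
spring:
  sleuth:
    sampler:
      probability: 1.0  # Sleuth 2.x name for the 1.x 'percentage' setting
```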
3. ELK Stack Setup
Install Elasticsearch, Kibana, and Logstash. Create a Logstash configuration file (e.g., logstash.conf) with the following content:

input {
  tcp {
    port => 4560
    codec => json_lines
  }
}

output {
  elasticsearch {
    hosts => ["192.168.160.66:9200", "192.168.160.88:9200", "192.168.160.166:9200"]
    index => "applog"
  }
}

Start Logstash with bin/logstash -f logstash.conf, then launch Elasticsearch, Kibana, and the microservices.
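With the json_lines codec, each event arriving on port 4560 must be a single JSON object terminated by a newline. A sketch of one such event, with illustrative field values matching the Logback encoder configured in the next section:

```json
{"@timestamp":"2020-01-01T10:00:00.000Z","severity":"INFO","service":"trace-demo","trace":"4d2a6f3b8c1e9a70","span":"9a70b4d2a6f3b8c1","exportable":"true","pid":"12345","thread":"nio-8080-exec-1","class":"c.w.demo.DemoController","rest":"handling request"}
```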
In Kibana, create an index pattern named applog to view the collected logs.
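Once the index exists, all log entries belonging to a single request can also be fetched directly by trace ID, for example from the Kibana Dev Tools console (the trace value here is a placeholder):

```json
GET /applog/_search
{
  "query": {
    "match": { "trace": "4d2a6f3b8c1e9a70" }
  }
}
```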
4. Logback Configuration for Logstash
Configure logback-spring.xml to send logs to Logstash and to a rolling file. A simplified example:
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="10 seconds">
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <property name="CONSOLE_LOG_PATTERN" value="%date [%thread] %-5level %logger{36} - %msg%n"/>

    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <withJansi>true</withJansi>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.160.66:4560</destination>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                    {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                    }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <appender name="dailyRollingFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>main.log</File>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>main.%d{yyyy-MM-dd}.log</FileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <Pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg %n</Pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>DEBUG</level>
        </filter>
    </appender>

    <springProfile name="!production">
        <logger name="com.myfee" level="DEBUG"/>
        <logger name="org.springframework.web" level="INFO"/>
        <root level="info">
            <appender-ref ref="stdout"/>
            <appender-ref ref="dailyRollingFileAppender"/>
            <appender-ref ref="logstash"/>
        </root>
    </springProfile>

    <springProfile name="production">
        <logger name="com.myfee" level="DEBUG"/>
        <logger name="org.springframework.web" level="INFO"/>
        <root level="info">
            <appender-ref ref="stdout"/>
            <appender-ref ref="dailyRollingFileAppender"/>
            <appender-ref ref="logstash"/>
        </root>
    </springProfile>
</configuration>

The rest field in the JSON payload stores the original log message, while trace and span enable end-to-end request tracing.
5. Querying Logs
After all services and the ELK stack are running, invoke any microservice endpoint. Logs appear in the console, are sent to Logstash, stored in Elasticsearch, and can be searched in Kibana using the applog index. The trace and span IDs allow you to reconstruct the full call chain across services.
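Sleuth also prints its context into every console log line as a bracketed prefix of the form [service,traceId,spanId,exportable]. As a small illustration (not part of the article's setup, and the class and format details here are assumptions about that console layout), the trace and span IDs can be pulled out of such a line with a regular expression:

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SleuthLogParser {

    // Sleuth's console prefix: [appName,traceId,spanId,exportable],
    // where trace and span IDs are lower-case hex strings.
    private static final Pattern CONTEXT =
            Pattern.compile("\\[([^,\\[\\]]+),([0-9a-f]+),([0-9a-f]+),(true|false)\\]");

    /** Returns {service, traceId, spanId}, or empty if no context is present. */
    public static Optional<String[]> parse(String line) {
        Matcher m = CONTEXT.matcher(line);
        if (m.find()) {
            return Optional.of(new String[] { m.group(1), m.group(2), m.group(3) });
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Illustrative log line; IDs and names are made up.
        String line = "2020-01-01 10:00:00.000  INFO "
                + "[trace-demo,4d2a6f3b8c1e9a70,9a70b4d2a6f3b8c1,true] 12345 --- "
                + "[nio-8080-exec-1] com.wlf.demo.DemoController : handling request";
        parse(line).ifPresent(ctx ->
                System.out.println("service=" + ctx[0]
                        + " trace=" + ctx[1] + " span=" + ctx[2]));
    }
}
```

Grouping lines by the extracted trace ID is exactly what the trace field in the Logstash pipeline does at scale.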
With this setup, distributed logs are centrally collected, searchable, and visualizable, facilitating debugging and monitoring of microservice systems in production.
Architecture Digest
Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.