
Implementing Distributed Logging with Spring Cloud Sleuth, Zipkin, and the ELK Stack

This guide explains how to set up distributed logging for microservices using Spring Cloud Sleuth for tracing, Zipkin as a trace UI, and the ELK stack (Elasticsearch, Logstash, Kibana) for log collection, storage, and visualization, including detailed configuration and code examples.

Top Architect

In microservice architectures, services are often distributed across multiple servers, requiring a distributed logging solution. Spring Cloud Sleuth provides tracing capabilities that allow service dependencies to be captured via logs, and when combined with the ELK stack (Elasticsearch, Logstash, Kibana), full log collection and visualization can be achieved.

1. Setting Up the Zipkin Server

Create a dedicated Zipkin server application and add the following Maven dependencies:

<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-ui</artifactId>
    <scope>runtime</scope>
</dependency>

<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-server</artifactId>
</dependency>

Configure the service registry address:

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:1111/eureka/

Add discovery and Zipkin annotations to the application entry class:

package com.wlf.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import zipkin.server.EnableZipkinServer;

@EnableDiscoveryClient
@EnableZipkinServer
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

After starting the service, the Zipkin UI is accessible at http://localhost:9411 (Zipkin's default port).

2. Instrumenting Microservice Clients

Add the Sleuth starter dependencies to each microservice and configure the sampling rate and Zipkin base URL in application.yml:

spring:
  sleuth:
    sampler:
      percentage: 1
  zipkin:
    base-url: http://localhost:9411

These settings control the proportion of traces collected; a value of 1 (100%) is suitable for development, while production may use a lower percentage to reduce overhead.
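For completeness, the client-side starters look like this; the artifact IDs correspond to the Spring Cloud Sleuth 1.x line implied by the percentage property above (later releases rename it to spring.sleuth.sampler.probability):

```xml
<!-- Sleuth tracing instrumentation for the client service -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<!-- Reports spans to the Zipkin server configured under spring.zipkin.base-url -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
```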

3. Setting up ELK

Install Elasticsearch, Logstash, and Kibana. Create a Logstash configuration that receives JSON logs on port 4560 and forwards them to the Elasticsearch cluster:

input {
    tcp {
        port => 4560
        codec => json_lines
    }
}
output {
    elasticsearch {
        hosts => ["192.168.160.66:9200","192.168.160.88:9200","192.168.160.166:9200"]
        index => "applog"
    }
}

Start all components, then use Kibana’s Discover view to query logs, which include the custom rest field and trace identifiers.
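As an illustration (the controller and endpoint below are hypothetical, not part of the original setup), any ordinary SLF4J log call in an instrumented service reaches Kibana with its message in the rest field and the Sleuth trace and span identifiers alongside it:

```java
package com.wlf.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical endpoint: Sleuth places X-B3-TraceId / X-B3-SpanId into the MDC,
// so the Logstash encoder pattern attaches them to every line logged here.
@RestController
public class DemoController {

    private static final Logger log = LoggerFactory.getLogger(DemoController.class);

    @GetMapping("/hello")
    public String hello() {
        // This message becomes the "rest" field in the JSON sent to Logstash
        log.info("handling /hello");
        return "hello";
    }
}
```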

4. Logback Configuration

Configure logback-spring.xml to include a Logstash TCP appender that sends logs in JSON format, along with a console appender and a rolling file appender. The configuration also exposes spring.application.name as the springAppName property so it can appear in the log fields.

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="10 seconds">
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <property name="CONSOLE_LOG_PATTERN" value="%date [%thread] %-5level %logger{36} - %msg%n"/>
    
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <withJansi>true</withJansi>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>
    
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.160.66:4560</destination>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    {"severity":"%level","service":"${springAppName:-}","trace":"%X{X-B3-TraceId:-}","span":"%X{X-B3-SpanId:-}","exportable":"%X{X-Span-Export:-}","pid":"${PID:-}","thread":"%thread","class":"%logger{40}","rest":"%message"}
                </pattern>
            </providers>
        </encoder>
    </appender>
    
    <appender name="dailyRollingFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>main.log</File>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>main.%d{yyyy-MM-dd}.log</FileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <Pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg %n</Pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>DEBUG</level>
        </filter>
    </appender>
    
    <springProfile name="!production">
        <logger name="com.myfee" level="DEBUG"/>
        <logger name="org.springframework.web" level="INFO"/>
        <root level="info">
            <appender-ref ref="stdout"/>
            <appender-ref ref="dailyRollingFileAppender"/>
            <appender-ref ref="logstash"/>
        </root>
    </springProfile>
    
    <springProfile name="production">
        <logger name="com.myfee" level="DEBUG"/>
        <logger name="org.springframework.web" level="INFO"/>
        <root level="info">
            <appender-ref ref="stdout"/>
            <appender-ref ref="dailyRollingFileAppender"/>
            <appender-ref ref="logstash"/>
        </root>
    </springProfile>
</configuration>
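The LogstashTcpSocketAppender and LoggingEventCompositeJsonEncoder used above come from the logstash-logback-encoder library, which must be on each service's classpath. A typical Maven dependency looks like the following (the version shown is an assumption; pick one compatible with your Logback release):

```xml
<!-- Provides the TCP appender and JSON encoders referenced in logback-spring.xml -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
```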

With the services, Sleuth, and ELK stack running, you can view trace relationships and log details across all microservices directly in Kibana.

Tags: Microservices, Logback, Spring Cloud, ELK, distributed logging, Zipkin, Sleuth
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large-scale distributed, and high-availability architectures, as well as architectural evolution driven by internet technologies. We welcome idea-driven, sharing-oriented architects to exchange and learn together.
