Master Java Logging: From Basics to Advanced Practices

This guide walks a junior developer through why logging matters and how to configure Logback in Spring Boot, generate loggers with Lombok's @Slf4j, choose appropriate log levels, write parameterized messages, control output volume, enable asynchronous logging, manage log files with rolling policies, and integrate a centralized ELK stack in distributed systems.


What is logging?

Logging records runtime state and events of a Java application, enabling developers to locate failures without reproducing the problem manually.

Logging in Spring Boot

Spring Boot ships with Logback as the default logging implementation, so no extra dependencies are required for basic logging.
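With this default setup, log levels can be adjusted from application.properties without touching logback.xml. A minimal sketch (the package name com.example.myservice is illustrative):

# application.properties
logging.level.root=INFO
logging.level.com.example.myservice=DEBUG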

Obtaining a Logger

Manual creation via LoggerFactory:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyService {
    private static final Logger logger = LoggerFactory.getLogger(MyService.class);
}

Using the current instance:

public class MyService {
    private final Logger logger = LoggerFactory.getLogger(this.getClass());
}

After acquiring the logger, call logger.info(), logger.debug(), logger.warn(), or logger.error() to emit messages.

Generating a logger with Lombok

The Lombok annotation @Slf4j injects a static org.slf4j.Logger field named log, eliminating boilerplate.

@Slf4j
public class MyService {
    public void doSomething() {
        log.info("Executing some operation");
    }
}
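
For @Slf4j to compile, Lombok must be on the classpath and annotation processing enabled in the IDE. A typical Maven declaration looks like this (no version shown, assuming Spring Boot's dependency management supplies one):

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <scope>provided</scope>
</dependency>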

Log levels

DEBUG – detailed troubleshooting information.

INFO – normal business flow messages.

WARN – potential issues that do not stop the main flow.

ERROR – exceptions or failures.

log.debug("User details: {}", userDto);
log.info("User {} import started", username);
log.warn("User {} email looks suspicious", username);
log.error("User {} import failed", username, e);

Parameterized logging

Use {} placeholders; the framework substitutes arguments only when the log is actually emitted, avoiding unnecessary string concatenation.

log.info("User {} import started", username); // preferred
// log.info("User " + username + " import started"); // avoid

Controlling log output volume

Log only every N records:

for (int i = 0; i < userList.size(); i++) {
    // ... process userList.get(i) ...
    if ((i + 1) % 100 == 0) {
        log.info("Batch progress: {}/{}", i + 1, userList.size());
    }
}

Accumulate messages and log once after the loop:

StringBuilder sb = new StringBuilder("Result: ");
for (UserDTO dto : userList) {
    // process dto
    sb.append(String.format("[ID=%s] ", dto.getId()));
}
log.info(sb.toString());

Filter low‑level logs in logback.xml:

<appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>logs/app.log</file>
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>INFO</level>
    </filter>
</appender>

Unified log format

Define a pattern that includes timestamp, thread, level, logger name and the message.

<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>

Asynchronous logging

Configure an AsyncAppender to off‑load I/O to a separate thread, improving throughput at the risk of losing logs on abrupt crashes.

<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>512</queueSize>
    <discardingThreshold>0</discardingThreshold>
    <neverBlock>false</neverBlock>
    <appender-ref ref="FILE" />
</appender>
<root level="INFO">
    <appender-ref ref="ASYNC" />
</root>

Log management (rolling & compression)

Use a size‑and‑time based rolling policy to split logs by date and size, keep a limited history, and optionally compress old files.

<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
    <fileNamePattern>logs/app-%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
    <maxFileSize>10MB</maxFileSize>
    <maxHistory>30</maxHistory>
</rollingPolicy>

Mapped Diagnostic Context (MDC)

MDC adds request‑scoped metadata (e.g., requestId, userId) to each log line, which is useful for tracing in distributed systems.

@PostMapping("/user/import")
public Result importUsers(@RequestBody UserImportRequest request) {
    MDC.put("requestId", generateRequestId());
    MDC.put("userId", String.valueOf(request.getUserId()));
    try {
        log.info("User import request received");
        userService.batchImport(request.getUserList());
        return Result.success();
    } finally {
        MDC.clear();
    }
}

Corresponding pattern in logback.xml:

<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - [%X{requestId}] [%X{userId}] %msg%n</pattern>

Centralized log collection (ELK)

For micro‑service architectures, forward logs to Logstash, store them in Elasticsearch, and visualize with Kibana.

# Logstash input (example)
input {
  file {
    path => "/var/log/app/*.log"
    start_position => "beginning"
  }
}

output {
  elasticsearch { hosts => ["http://es-host:9200"] }
  stdout { codec => rubydebug }
}
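
Instead of tailing log files, the application can also ship logs to Logstash directly over TCP using the separate logstash-logback-encoder dependency. A hedged sketch (host, port, and appender name are assumptions; the Logstash side would then use a tcp input with a json codec rather than the file input above):

<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>logstash-host:5044</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
</appender>
<root level="INFO">
    <appender-ref ref="LOGSTASH" />
</root>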

After deployment, logs can be queried, filtered and visualized across all services, simplifying root‑cause analysis.
