10 Essential Logging Rules Every Backend Engineer Should Follow

This article presents ten practical guidelines for writing clean, consistent, and performant logs in Java applications, covering unified formatting, stack traces, appropriate log levels, complete parameters, data masking, asynchronous logging, dynamic log level control, trace ID propagation, structured JSON storage, and intelligent monitoring with ELK.

Su San Talks Tech

Introduction

The article shares ten hard-and-fast rules for elegant logging, aiming to help developers produce logs that are clear, searchable, and performance-friendly.

Rule 1: Unified Format

Bad example: logs lack timestamps and context.

log.info("start process");
log.error("error happen");

Correct configuration (logback.xml):

<pattern>
  %d{yy-MM-dd HH:mm:ss.SSS} |%X{traceId:-NO_ID} |%thread |%-5level |%logger{36} |%msg%n
</pattern>

This ensures every log entry contains a timestamp, trace ID, thread, level, logger name, and message.
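For context, here is a minimal file appender that applies this pattern; the file path and rotation settings are illustrative additions, not from the original:

```xml
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>logs/app.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <!-- one file per day, kept for 30 days -->
    <fileNamePattern>logs/app.%d{yyyy-MM-dd}.log</fileNamePattern>
    <maxHistory>30</maxHistory>
  </rollingPolicy>
  <encoder>
    <pattern>%d{yy-MM-dd HH:mm:ss.SSS} |%X{traceId:-NO_ID} |%thread |%-5level |%logger{36} |%msg%n</pattern>
  </encoder>
</appender>
```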

Rule 2: Include Stack Trace on Exceptions

Bad example: catching an exception without logging the stack.

try {
  processOrder();
} catch (Exception e) {
  log.error("Order processing failed"); // message only, the stack trace is lost
}

Correct usage:

log.error("Order processing failed orderId={}", orderId, e); // pass e as the last argument so the full stack trace is logged

The log now records the order ID and the full exception stack.

Rule 3: Reasonable Log Levels

Bad example: using debug for a business exception and error for a simple timeout.

log.debug("Insufficient user balance userId={}", userId); // business exception, should be WARN
log.error("API response slightly slow"); // minor observation, should be INFO

Typical level mapping:

FATAL : system crash (OOM, disk full)

ERROR : core business failure (payment error, order creation failure)

WARN : recoverable issue (retry succeeded, degradation triggered)

INFO : key process milestones (order status change)

DEBUG : debugging details (parameters, intermediate results)

Rule 4: Complete Parameters

Bad example: only a bare message is logged, with no context.

log.info("User login failed");

Detective-style log (good):

log.warn("User login failed username={}, clientIP={}, failReason={}", username, clientIP, "password failure limit exceeded");

This records who, where, and why the login failed. Timestamp formatting is handled by the unified pattern.

Rule 5: Data Masking

To avoid leaking sensitive data, mask fields before logging.

public class LogMasker {
  // Keep the first 3 and last 4 digits of an 11-digit mobile number
  public static String maskMobile(String mobile) {
    return mobile.replaceAll("(\\d{3})\\d{4}(\\d{4})", "$1****$2");
  }
}
log.info("User registration mobile={}", LogMasker.maskMobile("13812345678"));
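The same pattern extends to other sensitive fields. A self-contained sketch, where the email rule is an illustrative assumption rather than part of the original:

```java
public class LogMaskerDemo {
    // Keep the first 3 and last 4 digits of an 11-digit mobile number
    public static String maskMobile(String mobile) {
        return mobile.replaceAll("(\\d{3})\\d{4}(\\d{4})", "$1****$2");
    }

    // Illustrative: keep the first character of the local part, mask the rest
    public static String maskEmail(String email) {
        return email.replaceAll("(^.)[^@]*(@.*$)", "$1***$2");
    }

    public static void main(String[] args) {
        System.out.println(maskMobile("13812345678"));      // 138****5678
        System.out.println(maskEmail("alice@example.com")); // a***@example.com
    }
}
```

Note that `replaceAll` leaves non-matching input unchanged, so malformed values pass through unmasked; validate separately if that matters.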

Rule 6: Asynchronous Logging for Performance

Synchronous logging in high‑traffic scenarios (e.g., a flash sale) blocks request threads and can account for up to 25% of request latency. Typical symptoms:

Frequent context switches due to sync writes.

Disk I/O becomes the bottleneck.

Log writing dominates response time.

Three‑step async setup:

Step 1 – Async appender in logback.xml

<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
  <discardingThreshold>0</discardingThreshold> <!-- 0 = never discard events, even TRACE/DEBUG -->
  <queueSize>4096</queueSize> <!-- in-memory buffer; size it with the capacity formula in Step 3 -->
  <appender-ref ref="FILE"/>
</appender>

Step 2 – Optimized logging code

// Parameterized logging defers message formatting until the level check passes
log.debug("Received MQ message: {}", msg.toSimpleString());
// But argument expressions are still evaluated eagerly, so guard expensive ones
// Wrong: log.debug("Details: {}", computeExpensiveLog());
// Right: if (log.isDebugEnabled()) { log.debug("Details: {}", computeExpensiveLog()); }

Step 3 – Capacity formula

maxMemory ≈ queueLength × avgLogSize
recommendedQueueDepth = peakTPS × toleratedDelaySec
// Example: 10000 TPS × 0.5s ⇒ 5000 queue size
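As a quick sanity check, the sizing above can be computed directly; the TPS, tolerated delay, and average entry size below are the example values from the formula (512 bytes is an assumed average):

```java
public class AsyncLogSizing {
    // recommendedQueueDepth = peakTPS × toleratedDelaySec
    public static int queueDepth(int peakTps, double toleratedDelaySec) {
        return (int) (peakTps * toleratedDelaySec);
    }

    // maxMemory ≈ queueLength × avgLogSize
    public static long maxMemoryBytes(int queueDepth, int avgLogSizeBytes) {
        return (long) queueDepth * avgLogSizeBytes;
    }

    public static void main(String[] args) {
        int depth = queueDepth(10_000, 0.5);   // 10000 TPS × 0.5s tolerated delay = 5000
        long mem = maxMemoryBytes(depth, 512); // assumed 512-byte average entry
        System.out.println("queueDepth = " + depth + ", maxMemory = " + mem + " bytes");
    }
}
```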

Rule 7: Trace ID for End‑to‑End Correlation

Inject a trace ID into MDC and include it in the log pattern.

// Interceptor puts a short traceId into the MDC at the start of each request (org.slf4j.MDC)
MDC.put("traceId", UUID.randomUUID().toString().substring(0, 8));
// ... and the pattern prints it on every line:
<pattern>%d{HH:mm:ss} |%X{traceId}| %msg%n</pattern>
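Under the hood, MDC is essentially a per-thread map. A pure-Java sketch of the put/clear discipline an interceptor should follow; the class and method names here are illustrative, not SLF4J's API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class TraceContext {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CTX.get().put(key, value); }
    public static String get(String key) { return CTX.get().get(key); }
    public static void clear() { CTX.remove(); } // always clear when the request completes

    public static void main(String[] args) {
        // preHandle: tag the request
        put("traceId", UUID.randomUUID().toString().substring(0, 8));
        System.out.println("traceId=" + get("traceId"));
        // afterCompletion: avoid leaking the ID to the next request on a pooled thread
        clear();
    }
}
```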

Rule 8: Dynamic Log Level Adjustment

Expose an endpoint (access-controlled in production) to change logger levels at runtime without restarting.

// Requires Logback as the SLF4J backend
// import ch.qos.logback.classic.Level;
// import ch.qos.logback.classic.Logger;
// import org.slf4j.LoggerFactory;
@GetMapping("/logLevel")
public String changeLogLevel(@RequestParam String loggerName, @RequestParam String level) {
  Logger logger = (Logger) LoggerFactory.getLogger(loggerName); // cast to Logback's Logger
  logger.setLevel(Level.valueOf(level)); // takes effect immediately, no restart
  return "OK";
}

Rule 9: Structured JSON Storage

Store logs as JSON to make fields machine‑readable.

{
  "event": "ORDER_CREATE",
  "orderId": 1001,
  "amount": 8999,
  "products": [{"name": "iPhone", "sku": "A123"}]
}
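One common way to get this shape out of Logback (assuming the logstash-logback-encoder library is on the classpath) is to swap in its JSON encoder; the appender name and file path are illustrative:

```xml
<appender name="JSON_FILE" class="ch.qos.logback.core.FileAppender">
  <file>logs/app.json</file>
  <!-- serializes each event as one JSON object per line -->
  <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
```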

Rule 10: Intelligent Monitoring (ELK)

Use ELK stack for centralized log collection and alerting.

ERROR logs: more than 100 entries within 5 minutes → phone alert
WARN logs: sustained for over 1 hour → email notification

Conclusion

The three developer maturity levels are:

Bronze : naive System.out.println("error!")

Diamond : standardized logs + ELK monitoring

King : log‑driven code optimization, anomaly‑prediction systems, root‑cause AI models

Final question: When the next production incident occurs, can your logs help a newcomer locate the issue within five minutes?

Tags: monitoring, best practices, logging, logback
Written by

Su San Talks Tech

Su San, former staff at several leading tech companies, is a top creator on Juejin and a premium creator on CSDN, and runs the free coding practice site www.susan.net.cn.
