How to Build a High‑Performance, Non‑Blocking Go Logger: Lessons from easyms.golang

This article examines common pitfalls of asynchronous Go logging—such as channel blocking, GC pressure, and poor API design—and presents concrete solutions using non‑blocking writes, sync.Pool object reuse, and context‑aware variadic APIs, all demonstrated with real code from the easyms.golang project.


Logging is often overlooked until a production incident reveals it as either a lifesaver or a hidden killer; a poorly designed logger can quickly become a bottleneck under high concurrency.

🟢 Starting Point: The Classic Asynchronous Logging Trap

Many tutorials suggest a simple async logger that decouples logging via a buffered chan, processes entries in a background goroutine, and batches writes. The naive implementation looks like this:

// ⛔️ A seemingly perfect but hazardous design
var logChan = make(chan LogEntry, 1000)

func Info(msg string) {
    // Business goroutine just pushes into the channel
    logChan <- LogEntry{Level: "info", Message: msg}
}

func logProcessor() {
    // Background goroutine slowly consumes
    for entry := range logChan {
        flush(entry)
    }
}

While syntactically correct, this design blocks the entire business flow when the channel fills up—common in high‑throughput or disk‑I/O‑jitter scenarios.

🛡️ Advanced 1: Reject Blocking – “Lossy but Life‑Saving” Mechanism

🔴 Pain Point: What Happens When the Channel Is Full?

When logChan <- entry is executed on a full channel, the call blocks, causing every Info() caller to stall. The solution is to make the write non‑blocking and drop logs when the channel is saturated, while recording metrics.

✅ easyms.golang Solution: Non‑Blocking Write + Degradation Strategy

In internal/shared/logger/logger.go we use a select‑default pattern:

// Note: logChan here is a chan *LogEntry, since entries come from the pool.
func submitLog(entry *LogEntry) {
    select {
    case logChan <- entry:
        // 🚀 Happy path: successfully sent
    default:
        // ⚠️ Survival path: channel full, drop the entry
        logDroppedTotal.WithLabelValues(entry.Service).Inc()
        // Return the object to the pool to avoid memory leaks
        logPool.Put(entry)
        // Optionally print to stderr so operators know a log was lost
    }
}

The philosophy is “keep the business alive > keep the logs alive”. Metrics such as logDroppedTotal expose dropped events via Prometheus, making the system safer than a blocking logger.
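
The core of this degradation strategy needs nothing beyond the standard library. The self-contained sketch below substitutes an atomic counter for the article's Prometheus `logDroppedTotal` metric (the function name `trySubmit` is illustrative):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type LogEntry struct {
	Level   string
	Message string
}

var droppedTotal atomic.Int64 // stand-in for the Prometheus drop counter

// trySubmit enqueues without ever blocking; a full channel means the
// entry is dropped and counted instead of stalling the caller.
func trySubmit(ch chan *LogEntry, entry *LogEntry) bool {
	select {
	case ch <- entry:
		return true // 🚀 happy path
	default:
		droppedTotal.Add(1) // ⚠️ survival path
		return false
	}
}

func main() {
	ch := make(chan *LogEntry, 2) // tiny buffer to force drops
	for i := 0; i < 5; i++ {
		trySubmit(ch, &LogEntry{Level: "info", Message: fmt.Sprintf("msg %d", i)})
	}
	fmt.Println(len(ch), droppedTotal.Load()) // 2 queued, 3 dropped
}
```

Five submissions against a capacity-2 channel leave two entries queued and three counted as dropped, and at no point does any caller wait.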

⚡️ Advanced 2: Crush GC Pressure with sync.Pool

🔴 Pain Point: Hundreds of Thousands of Log Objects per Second

Each call to logger.Info allocates a new &LogEntry; at hundreds of thousands of calls per second, that allocation rate drives frequent GC cycles and spikes CPU usage.

✅ easyms.golang Solution: Object Reuse Pool

We define a sync.Pool for LogEntry objects:

var logPool = sync.Pool{
    New: func() interface{} {
        // Pre‑allocate map capacity to reduce growth overhead
        return &LogEntry{Fields: make(map[string]interface{}, 8)}
    },
}

Usage pattern:

// Sender
func Info(msg string) {
    entry := logPool.Get().(*LogEntry) // 1️⃣ Borrow from pool
    entry.Level = "info"               // Set every field: the entry may hold stale data
    entry.Message = msg
    submitLog(entry)
}

// Consumer
func flushLogs(logs []*LogEntry) {
    // Write to backend I/O …
    for _, e := range logs {
        e.Reset()      // 🧹 Clear data
        logPool.Put(e) // 2️⃣ Return to pool
    }
}

After this change, allocation rates drop dramatically and GC impact becomes negligible.
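
The consumer above calls e.Reset() before returning the entry to the pool. The project's exact field set isn't shown here, so the Reset sketch below assumes the Level/Message/Fields layout used throughout this article; the key trick is clearing the map's keys rather than reallocating it, so the pre-sized buckets survive the round trip through the pool:

```go
package main

import "fmt"

// LogEntry mirrors the pooled object used in this article; the exact
// field set is an assumption, not the project's full definition.
type LogEntry struct {
	Level   string
	Message string
	Fields  map[string]interface{}
}

// Reset clears the entry for reuse while keeping the map's allocated
// buckets, so the next borrower pays no map-growth cost.
func (e *LogEntry) Reset() {
	e.Level = ""
	e.Message = ""
	for k := range e.Fields {
		delete(e.Fields, k) // the compiler turns this loop into a map clear
	}
}

func main() {
	e := &LogEntry{
		Level:   "info",
		Message: "hi",
		Fields:  map[string]interface{}{"user": "admin"},
	}
	e.Reset()
	fmt.Println(e.Message == "", len(e.Fields)) // true 0
}
```

Without a thorough Reset, a borrower can observe another request's leftover fields, which is a far nastier bug than the GC pressure the pool was meant to fix.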

🎨 Advanced 3: Human‑Friendly API – Logging Like Poetry

🔴 Pain Point: Context Loss and Parameter Hell

Traditional APIs require manual passing of fields such as request_id, leading to verbose and error‑prone code.

✅ easyms.golang Solution: Context Logger & Variadic API

Inspired by Zap and Slog, we introduce a log context that automatically carries fields:

func handleRequest(r *http.Request) {
    // Derive a logger with request_id at the entry point
    reqLogger := logger.With("request_id", r.Header.Get("X-Request-ID"))
    processUser(reqLogger)
}

func processUser(l *Logger) {
    // The request_id is automatically included
    l.Info("Processing user", "user_id", 1001, "status", "active")
}
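
A minimal sketch of how such a With can work, assuming the logger stores its bound fields as a flattened key/value slice (the project's internal layout may differ): the parent's pairs are copied into the child, so derived loggers never share a backing array and siblings can't clobber each other.

```go
package main

import "fmt"

type Logger struct {
	fields []interface{} // flattened key/value pairs bound to this logger
}

// With returns a child logger carrying the extra key/value pairs.
// Copying (rather than appending in place) keeps the parent immutable.
func (l *Logger) With(kv ...interface{}) *Logger {
	child := make([]interface{}, 0, len(l.fields)+len(kv))
	child = append(child, l.fields...)
	child = append(child, kv...)
	return &Logger{fields: child}
}

func (l *Logger) Info(msg string, kv ...interface{}) {
	all := append(append([]interface{}{}, l.fields...), kv...)
	fmt.Println(msg, all)
}

func main() {
	base := &Logger{}
	req := base.With("request_id", "abc-123")
	req.Info("Processing user", "user_id", 1001)
	// the request_id pair rides along without being passed explicitly
}
```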

We also flatten field passing into a variadic key-value style, allowing calls like:

// Before
logger.Info("Login", map[string]interface{}{ "ip": "1.1.1.1", "user": "admin" })

// After
logger.Info("Login", "ip", "1.1.1.1", "user", "admin")
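
Parsing that flat variadic list back into structured fields is straightforward. This is a sketch, not the project's code (the helper name `kvToFields` and the `!BADKEY` fallback, borrowed from the standard library's slog, are assumptions):

```go
package main

import "fmt"

// kvToFields converts a flat key-value list into a field map.
// Malformed input is tolerated so a bad call site still logs something.
func kvToFields(kv []interface{}) map[string]interface{} {
	fields := make(map[string]interface{}, len(kv)/2)
	for i := 0; i+1 < len(kv); i += 2 {
		key, ok := kv[i].(string)
		if !ok {
			key = fmt.Sprint(kv[i]) // non-string key: stringify rather than drop
		}
		fields[key] = kv[i+1]
	}
	if len(kv)%2 != 0 {
		fields["!BADKEY"] = kv[len(kv)-1] // dangling value, the convention slog uses
	}
	return fields
}

func main() {
	f := kvToFields([]interface{}{"ip", "1.1.1.1", "user", "admin"})
	fmt.Println(f["ip"], f["user"]) // 1.1.1.1 admin
}
```

Tolerating odd-length input matters in a logger: crashing or silently dropping data inside the logging path would defeat its purpose as a diagnostic tool.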

🎯 Summary and Outlook

Reliability: Use non-blocking channels with a drop policy so logging never stalls business logic.

Performance: Reuse log entries via sync.Pool to minimize allocations and GC overhead.

Experience: Provide a context-aware, variadic API that keeps logs concise and automatically propagates tracing information.

Although a logging module may seem small, it reflects the overall engineering maturity of a project; a well‑designed logger becomes a critical observability tool during incident analysis.

Source code repositories:

GitHub: https://github.com/louis-xie-programmer/easyms.golang

Gitee: https://gitee.com/louis_xie/easyms.golang

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: logging, api-design, gc
Written by Code Wrench

Focuses on code debugging, performance optimization, and real-world engineering, sharing efficient development tips and pitfall guides. We break down technical challenges in a down-to-earth style, helping you craft handy tools so every line of code becomes a problem-solving weapon. 🔧💻