Why Printing Logs Is a Mistake: Deep Dive into Python’s Three Major Logging Solutions

After a chaotic production alert, the author, a decade‑long backend developer, compares Python’s built‑in logging, Loguru, and Logfire, showing their configurations, strengths, pitfalls, and best‑fit scenarios—from simple cron jobs to high‑throughput API gateways—so you can choose the right tool for reliable, observable logging.

Introduction

At 2:13 a.m. a production alarm reveals a mess of log formats, missing timestamps, and fragmented request IDs. The author, with ten years of backend experience, explains why relying on print statements is dangerous and introduces three Python logging solutions: the standard logging module, Loguru, and the newer Logfire.

1. Standard Library logging

It is described as a “Swiss‑army knife” that can do everything but requires a lot of manual wiring. A typical configuration looks like:

import logging

# Create logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# File handler
handler = logging.FileHandler("app.log")
formatter = logging.Formatter("%(asctime)s | %(levelname)s | %(name)s | %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)

def process_order(order_id):
    logger.info("Processing order %s", order_id)
    try:
        do_work(order_id)
    except Exception:
        logger.exception("Order failed")

Four major pain points are highlighted:

Verbose configuration (10‑line templates per project, choice between dictConfig and basicConfig).

Context propagation requires LoggerAdapter or custom filters, often inflating code.

Exception logging must be inside except blocks with exc_info=True, otherwise stack traces are lost.

Structured JSON output requires manual dictionary assembly.
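The context-propagation pain point can be made concrete with a short LoggerAdapter sketch; the logger name, format string, and request ID below are illustrative, not from the article:

```python
import logging

# A format that expects a request_id attribute on every record
logging.basicConfig(format="%(asctime)s | %(levelname)s | %(request_id)s | %(message)s")

base_logger = logging.getLogger("api")
base_logger.setLevel(logging.INFO)

# LoggerAdapter injects its extra dict into every record it emits,
# so request_id does not have to be repeated at each call site
adapter = logging.LoggerAdapter(base_logger, extra={"request_id": "req_123"})
adapter.info("Incoming request")
```

Every call through the adapter carries the bound context, but each new context (say, each request) needs its own adapter instance, which is the code inflation the article refers to.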

A warning notes that %-style formatting defers string interpolation until a record is actually emitted, so suppressed messages cost almost nothing, while f-strings are formatted eagerly before the call even checks the log level and can hurt performance in hot paths.
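The difference is easy to demonstrate with a toy object that counts how often it is actually converted to a string (the class and logger name here are illustrative):

```python
import logging

logger = logging.getLogger("perf_demo")
logger.setLevel(logging.WARNING)  # INFO messages are suppressed

class Expensive:
    """Tracks how often its string conversion actually runs."""
    calls = 0

    def __str__(self):
        Expensive.calls += 1
        return "expensive-value"

obj = Expensive()

# %-style: interpolation is deferred; a suppressed record never formats obj
logger.info("value: %s", obj)
assert Expensive.calls == 0

# f-string: formatting happens before logging even looks at the level
logger.info(f"value: {obj}")
assert Expensive.calls == 1
```

When the level filters a message out, the %-style call never touches its arguments, which is exactly why the standard library recommends it over eager formatting.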

Positioning: Suitable for large enterprises with strict compliance requirements or environments that cannot add third-party dependencies.

2. Loguru

Marketed as “developer‑first”, Loguru works out of the box with minimal code.

Zero‑configuration start

from loguru import logger
logger.info("Service started")

The output automatically includes timestamp, level, file path, line number, and colored level indicators.

File rotation in one line

logger.add("logs/app.log",
    rotation="10 MB",      # rotate when the file reaches 10 MB
    retention="7 days",    # delete rotated files older than 7 days
    level="INFO",
    format="{time} | {level} | {message}")

Compared with the standard library’s RotatingFileHandler and TimedRotatingFileHandler, Loguru’s parameters read like natural language.
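For contrast, here is a rough standard-library equivalent of that one-liner; the logger name and format are illustrative, and note that size-based rotation and time-based retention cannot be combined in a single stdlib handler:

```python
import logging
import os
from logging.handlers import RotatingFileHandler

os.makedirs("logs", exist_ok=True)

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Rotate at ~10 MB and keep 7 old files; time-based retention would need
# TimedRotatingFileHandler or a separate cleanup job instead
handler = RotatingFileHandler("logs/app.log",
                              maxBytes=10 * 1024 * 1024,
                              backupCount=7)
handler.setFormatter(logging.Formatter("%(asctime)s | %(levelname)s | %(message)s"))
logger.addHandler(handler)
```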

Elegant exception handling

try:
    risky_call()
except Exception:
    logger.exception("Risky call failed")  # full stack captured automatically

No need to pass exc_info or format the traceback manually.

Context binding

# Bind request_id
request_logger = logger.bind(request_id="req_123")
request_logger.info("Incoming request")

In a FastAPI middleware the author shows how to attach a request ID to every log entry, enabling fast greps for a whole request trace.

Async‑friendly logging

logger.add("logs/app.json",
    serialize=True,
    enqueue=True,          # non‑blocking background writer
    rotation="50 MB",
    level="INFO")

Benchmarks indicate that enqueue=True reduces latency spikes by over 60 % in high‑QPS services.

Production‑grade setup

import sys

from loguru import logger

def setup_logging():
    logger.remove()  # drop the default stderr sink
    # Console (dev)
    logger.add(sys.stdout, level="INFO", format="{time} | {level} | {message}")
    # File (prod)
    logger.add("logs/app.log",
        level="INFO",
        rotation="100 MB",
        retention="10 days",
        enqueue=True,
        compression="zip")

This configuration is roughly ten times shorter than an equivalent dictConfig setup.

Positioning: Ideal for microservices, CLI tools, data pipelines, or any internal system where logging should not be a burden.

3. Logfire

Logfire is presented as a paradigm shift that bridges logging with observability platforms. It automatically captures structured data and streams it to OpenTelemetry, Grafana, Datadog, etc., linking logs with metrics and traces.

In an error‑spike scenario, Logfire shows a dashboard with error curves; clicking a point reveals the corresponding log line, call stack, DB query latency, and even CPU/memory snapshots.

Target audience: distributed systems, microservice architectures, SRE teams, especially those already on Kubernetes with Prometheus and Grafana.

4. Practical Comparison – Three Scenarios, Three Choices

Scenario 1 – Simple cron script

Requirement: daily data sync with start/end/failure logs.

Choice: Loguru – one‑line config, colored output, automatic stack traces; performance is not critical.

from loguru import logger
logger.add("sync.log", rotation="30 days")

def main():
    logger.info("Sync started")
    try:
        do_sync()
    except Exception:
        logger.exception("Sync failed")
    logger.info("Sync completed")

if __name__ == "__main__":
    main()

Scenario 2 – High‑throughput API gateway

Requirement: 5 000+ RPS, request‑ID propagation, JSON output for ELK, zero latency impact.

Choice: Loguru + enqueue=True – non‑blocking queue, structured JSON, request‑ID binding.

import uuid

from fastapi import FastAPI, Request
from loguru import logger

app = FastAPI()

logger.add("api.log",
    enqueue=True,
    serialize=True,
    rotation="500 MB",
    retention="7 days")

@app.middleware("http")
async def log_request(request: Request, call_next):
    request_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))
    with logger.contextualize(request_id=request_id):
        response = await call_next(request)
    return response

Scenario 3 – Financial‑grade transaction system

Requirement: immutable audit logs, encrypted storage, minimal external dependencies.

Choice: Standard logging + custom Handler – stable, fully controllable, despite verbose setup.
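The article does not show the custom Handler itself; the sketch below is one illustrative possibility, where the `AuditHandler` name and the hash-chaining scheme are this example's own assumptions, not the article's design:

```python
import hashlib
import json
import logging

class AuditHandler(logging.Handler):
    """Append-only, tamper-evident handler: each JSON line records the hash
    of the previous line, so editing any earlier entry breaks verification."""

    def __init__(self, path):
        super().__init__()
        self.path = path
        self.prev_hash = "0" * 64  # genesis value for the hash chain

    def emit(self, record):
        entry = {
            "ts": record.created,
            "level": record.levelname,
            "message": record.getMessage(),
            "prev": self.prev_hash,
        }
        line = json.dumps(entry, sort_keys=True)
        self.prev_hash = hashlib.sha256(line.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(line + "\n")

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(AuditHandler("audit.log"))
audit.info("transfer accepted: txn_001")
```

Encryption at rest and write-once storage would sit below this layer; the handler's job is only to produce verifiable, structured entries with no third-party dependency.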

5. Migration Guide

1. Replace calls

# Before
import logging
logger = logging.getLogger(__name__)
logger.info("Processing %s", order_id)

# After
from loguru import logger
logger.info("Processing {}", order_id)  # note format change

2. Bridge third‑party libraries

For libraries that use the standard logger (e.g., requests, urllib3), an intercept handler forwards records to Loguru:

import logging
from loguru import logger

class InterceptHandler(logging.Handler):
    def emit(self, record):
        # Map the stdlib level name onto a Loguru level, falling back to the
        # numeric level for custom levels Loguru does not know about
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno
        # Preserve attached exception info so tracebacks survive the bridge
        logger.opt(exception=record.exc_info).log(level, record.getMessage())

logging.basicConfig(handlers=[InterceptHandler()], level=0)

3. Gradual rollout

Start with edge services, verify stability, then expand. Loguru and logging can coexist during transition.

6. Pitfall Checklist

Pitfall 1: Forgetting to remove the default stderr output leads to duplicate logs.

logger.remove()  # must run first

Pitfall 2: Using enqueue=False in async code can block the event loop; enable enqueue=True for async workloads.

Pitfall 3: Structured logs may inadvertently capture secrets. Implement a filter to mask passwords or tokens.

def mask_secrets(record):
    # Only mask when a password was actually bound: replacing an empty
    # string would corrupt the entire message
    password = record["extra"].get("password")
    if password and password in record["message"]:
        record["message"] = record["message"].replace(password, "***")
    return True

logger.add("app.log", filter=mask_secrets)

Conclusion

The author emphasizes that logs are the system’s “last will and testament”. Good logging tools change developer habits, turning logging from a pain point into a cultural asset that lets teams sleep soundly even when a 2 a.m. incident occurs.

Key Takeaways

Standard logging: stable, zero-dependency, best for large-scale, compliance-heavy systems.

Loguru: developer-friendly, concise, fits most Python projects.

Logfire: observability-first, ideal for distributed, cloud-native architectures.

Choosing the right solution depends on whether you need simplicity, performance, or deep observability.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Backend, Python, observability, logging, Loguru, Logfire
Written by

Data STUDIO

Data STUDIO focuses on original data science articles, centered on Python, covering machine learning, data analysis, visualization, MySQL, and other practical knowledge and project case studies.