Stop Using print for Logs: In‑Depth Comparison of Python’s Three Major Logging Solutions

After a chaotic production incident, this article compares Python’s built‑in logging, Loguru, and Logfire, detailing their configurations, strengths, weaknesses, and real‑world use cases—from simple scripts to high‑throughput APIs—while offering migration steps and common pitfalls to help you choose the right solution.

Data Party THU

Introduction

A middle-of-the-night production alert surfaces a jumble of logs: some have timestamps, some don't; some use % formatting, others concatenate strings; stack traces are fragmented and request IDs are missing. The author, a backend developer with ten years of experience, explains why logging should not be this painful.

1. Standard Library logging

The "Swiss-army knife" that can do everything but does none of it well.

Typical configuration code:

import logging
# create logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
# create file handler
handler = logging.FileHandler("app.log")
formatter = logging.Formatter("%(asctime)s | %(levelname)s | %(name)s | %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)

def process_order(order_id):
    logger.info("Processing order %s", order_id)
    try:
        do_work(order_id)
    except Exception:
        logger.exception("Order failed")

Problems:

Verbose configuration: Every new project copies a 10-line template and must decide between dictConfig and basicConfig, leading to inconsistent styles.

Context propagation pain: Adding a request_id requires a LoggerAdapter or a custom Filter, often inflating code size (one project needed 50 lines just for a single context field).

Exception logging errors: logger.exception must be called inside an except block; newcomers who call logger.error elsewhere forget exc_info=True and lose the stack trace.

Manual structured logging: To output JSON for ELK you must assemble dictionaries and handle nesting yourself.
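To illustrate the context-propagation pain concretely, here is a minimal stdlib sketch (the logger name and field are illustrative) where a LoggerAdapter must wrap the logger just to carry one request_id field:

```python
import io
import logging

# capture output in a buffer so the injected field is visible
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(request_id)s %(message)s"))

base = logging.getLogger("api")
base.setLevel(logging.INFO)
base.addHandler(handler)

# the stdlib way: wrap the logger in a LoggerAdapter just to carry one context field
log = logging.LoggerAdapter(base, {"request_id": "req_123"})
log.info("Incoming request")

print(buf.getvalue().strip())  # → INFO req_123 Incoming request
```

Every module that wants the field must either receive the adapter or rebuild it, which is exactly the 50-lines-per-field overhead described above.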

⚠️ Note: 90% of users fall into the trap of passing f-strings to logger.info, which evaluates the string even when the log level is disabled, causing unnecessary performance loss.
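A small sketch makes the cost visible (the Expensive class is purely illustrative): the f-string formats its argument even though DEBUG is disabled, while %-style deferral skips the work entirely:

```python
import logging

class Expensive:
    """Tracks whether its string form was ever computed."""
    calls = 0
    def __str__(self):
        Expensive.calls += 1
        return "expensive"

logging.basicConfig(level=logging.WARNING)  # DEBUG is disabled
log = logging.getLogger("demo")

obj = Expensive()
log.debug(f"value={obj}")   # f-string: __str__ runs before the call, cost is paid
log.debug("value=%s", obj)  # %-style: level check happens first, formatting skipped

print(Expensive.calls)  # → 1 (only the f-string paid the cost)
```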

Positioning: logging is the built-in "Swiss-army knife", suitable for large enterprises, strict compliance requirements, or environments where adding third-party dependencies is impossible.

2. Loguru

Extreme developer experience.

The author first used it in a data-cleaning script that has been running unchanged in production ever since.

1. Zero‑configuration start

from loguru import logger
logger.info("Service started")

Output automatically includes timestamp, level, file path, line number, and message, with colored levels for quick visual parsing.

2. File rotation in one line

logger.add(
    "logs/app.log",
    rotation="10 MB",      # rotate after 10 MB
    retention="7 days",    # keep 7 days
    level="INFO",
    format="{time} | {level} | {message}"
)

Compared with the standard library’s RotatingFileHandler and TimedRotatingFileHandler, Loguru’s declarative syntax reads like natural language.
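For comparison, here is roughly what size-based rotation looks like with the standard library's RotatingFileHandler (the path and limits are illustrative); note that age-based retention has no direct equivalent and can only be approximated with backupCount:

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

log_dir = tempfile.mkdtemp()
path = os.path.join(log_dir, "app.log")

# stdlib counterpart to rotation="10 MB": the limit is raw bytes, and
# retention="7 days" must be approximated by the number of backups kept
handler = RotatingFileHandler(path, maxBytes=10 * 1024 * 1024, backupCount=7)
handler.setFormatter(logging.Formatter("%(asctime)s | %(levelname)s | %(message)s"))

log = logging.getLogger("rotating_demo")
log.setLevel(logging.INFO)
log.addHandler(handler)
log.info("service started")
handler.close()

with open(path) as f:
    print(f.read())  # one formatted line ending in "service started"
```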

3. Elegant exception handling

try:
    risky_call()
except Exception:
    logger.exception("Risky call failed")  # automatically captures full stack

No need to pass exc_info or format manually; the stack trace is complete and nicely colored.

4. Context binding, no parameter passing

# bind request_id
request_logger = logger.bind(request_id="req_123")
request_logger.info("Incoming request")  # automatically includes request_id

In FastAPI the author uses a middleware to generate a UUID per request and bind it to the logger, enabling a simple grep request_id to retrieve the entire request’s logs.

from fastapi import FastAPI, Request
from loguru import logger
import uuid
app = FastAPI()

@app.middleware("http")
async def add_request_id(request: Request, call_next):
    request_id = str(uuid.uuid4())
    request.state.logger = logger.bind(request_id=request_id)
    response = await call_next(request)
    return response

@app.get("/health")
async def health(request: Request):
    request.state.logger.info("Health check")  # automatically carries request_id
    return {"status": "ok"}

5. Async‑friendly, painless switch

import asyncio
from loguru import logger

async def worker(name):
    logger.info("Worker started {}", name)
    await asyncio.sleep(1)
    logger.info("Worker finished {}", name)

async def main():
    await asyncio.gather(worker("A"), worker("B"))

asyncio.run(main())

The output order remains correct; no interleaved log lines appear.

6. Non‑blocking logging (performance critical)

logger.add(
    "logs/app.log",
    enqueue=True,          # queue logs, write in background thread
    rotation="50 MB",
    level="INFO"
)

Benchmarks show that enqueue=True reduces latency spikes by over 60% under load, making it essential for high‑QPS services.

7. Structured logging with one‑line switch

logger.add(
    "logs/app.json",
    serialize=True,        # output JSON format
    level="INFO"
)
logger.info("User login", user_id=42, source="web")
# with serialize=True, extra kwargs land in the serialized record's "extra" object:
# ... "record": {..., "message": "User login", "extra": {"user_id": 42, "source": "web"}} ...

No manual dictionary construction; extra keyword arguments become fields in the record's extra object, ready for ELK, Loki, etc.

8. Production best practice

from loguru import logger
import sys

def setup_logging():
    # remove default stderr output
    logger.remove()
    # console output (development)
    logger.add(sys.stdout, level="INFO", format="{time} | {level} | {message}")
    # file output (production)
    logger.add(
        "logs/app.log",
        level="INFO",
        rotation="100 MB",
        retention="10 days",
        enqueue=True,
        compression="zip"  # compress old logs
    )

The function is under 15 lines, far clearer than an equivalent logging dictConfig block.

Positioning: Loguru is a "developer-first" logger that does the most with the least code, ideal for microservices, CLI tools, data pipelines, and internal systems where logging should not be a burden.

3. Logfire

When logging meets observability.

Logfire is not just a logger; it bridges code to an observability platform, automatically capturing structured data and sending it to OpenTelemetry, Grafana, Datadog, etc., while linking logs to metrics and traces.

Scenario: an API suddenly spikes in error rate.

Standard logging: grep ERROR on the server and manually correlate timestamps.

Loguru: open the log file and search for keywords.

Logfire: view the error‑rate curve on a dashboard, click a point, and instantly see the corresponding log, request trace, DB query latency, and even a CPU/memory snapshot.

Logfire fits distributed systems, microservice architectures, and SRE teams, especially when Kubernetes, Prometheus, and Grafana are already in use.

4. Practical Comparison: Three Scenarios, Three Choices

Scenario 1 – Simple cron script

Requirement: run a daily data sync, log start, end, and failure reasons.

Choice: Loguru

Reason: one‑line config, colored output, automatic stack traces; performance is not a concern.

from loguru import logger
logger.add("sync.log", rotation="30 days")

def main():
    logger.info("Sync started")
    try:
        do_sync()
    except Exception:
        logger.exception("Sync failed")
        raise
    logger.info("Sync completed")

if __name__ == "__main__":
    main()

Scenario 2 – High‑concurrency API gateway

Requirement: handle >5 000 req/s, include request_id, output JSON for ELK, and keep latency low.

Choice: Loguru with enqueue=True

Reason: the non-blocking queue preserves performance; structured output simplifies search; context binding provides full traceability.

logger.add(
    "api.log",
    enqueue=True,
    serialize=True,
    rotation="500 MB",
    retention="7 days"
)

@app.middleware("http")
async def log_request(request, call_next):
    request_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))
    with logger.contextualize(request_id=request_id):
        response = await call_next(request)
        return response

Scenario 3 – Financial‑grade transaction system

Requirement: compliance‑driven immutable logs, separate encrypted audit logs, minimal external dependencies.

Choice: Standard logging + custom handler

Reason: stable, zero dependencies, fine‑grained control; the configuration cost is justified by audit requirements.
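A hedged sketch of what such a custom handler might look like: a JSON-lines audit handler (the class name and fields are assumptions, not a standard API) that emits one parseable object per record so the audit trail can be verified line by line:

```python
import io
import json
import logging

class JsonAuditHandler(logging.StreamHandler):
    """Hypothetical audit handler: one JSON object per record."""
    def emit(self, record):
        entry = {
            "ts": record.created,
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        self.stream.write(json.dumps(entry) + "\n")

# a StringIO stands in for the real (append-only, encrypted) audit sink
buf = io.StringIO()
audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(JsonAuditHandler(buf))

audit.info("transfer approved: order %s", "ord_42")
line = json.loads(buf.getvalue())
print(line["message"])  # → transfer approved: order ord_42
```

In a real deployment the stream would be an append-only file or an encrypted sink, which is precisely the fine-grained control the standard library makes possible.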

5. Migration Guide

From logging to Loguru

1. Replace calls

# before
import logging
logger = logging.getLogger(__name__)
logger.info("Processing %s", order_id)

# after
from loguru import logger
logger.info("Processing {}", order_id)  # note format change

2. Bridge third‑party libraries

Some libraries (e.g., requests, urllib3) use the standard logging module. Intercept and forward their records to Loguru:

import logging
from loguru import logger

class InterceptHandler(logging.Handler):
    def emit(self, record):
        # map the stdlib level name onto a Loguru level, falling back to the number
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno
        # forward exception info and point Loguru at the original call site
        logger.opt(depth=6, exception=record.exc_info).log(level, record.getMessage())

logging.basicConfig(handlers=[InterceptHandler()], level=0)

3. Gradual replacement

Start with edge services, verify stability, then progressively replace core services. Loguru and logging can coexist.

6. Pitfall Guide

Pitfall 1: Forget to remove default stderr output

logger.remove()  # call before adding sinks, otherwise output is duplicated

Pitfall 2: Using enqueue=False in async code

When many async tasks run, logging can block the event loop. Always enable enqueue=True for async workloads.
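The same non-blocking pattern exists in the standard library as QueueHandler plus QueueListener, shown here as a minimal sketch for readers who cannot adopt Loguru:

```python
import io
import logging
import logging.handlers
import queue

buf = io.StringIO()
q = queue.Queue()

# the caller only enqueues the record, so it never blocks on I/O
log = logging.getLogger("async_demo")
log.setLevel(logging.INFO)
log.addHandler(logging.handlers.QueueHandler(q))

# a background thread drains the queue and performs the actual writes
listener = logging.handlers.QueueListener(q, logging.StreamHandler(buf))
listener.start()
log.info("handled off the event loop thread")
listener.stop()  # flushes remaining records before returning

print(buf.getvalue().strip())  # → handled off the event loop thread
```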

Pitfall 3: Sensitive information leakage

Structured logging automatically records extra fields; ensure passwords or tokens are never passed. Implement a filter to mask secrets:

def mask_secrets(record):
    # mask secret-looking fields in the bound context before the record is written
    for key in ("password", "token", "secret"):
        if key in record["extra"]:
            record["extra"][key] = "***"
    return True

logger.add("app.log", filter=mask_secrets)

Core Recap

1. Standard logging: stable, controllable, zero dependencies; best for large enterprise systems with strict compliance.
2. Loguru: developer-experience focused, concise configuration; fits most Python projects.
3. Logfire: next-generation observability, ideal for distributed, cloud-native architectures.

The key to choosing is not which tool is the most powerful, but which one lets you and your team write good logs without pain.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Python, backend development, observability, logging, structured logging, Loguru, Logfire
Written by Data Party THU

Official platform of Tsinghua Big Data Research Center, sharing the team's latest research, teaching updates, and big data news.