Redesigning an Operation Log System: Architecture, Implementation Options, and Historical Data Migration

This article describes the challenges of scaling a multi‑system operation log platform, proposes a new unified log schema, compares non‑intrusive and intrusive collection approaches using Canal and AOP/annotations, and outlines a rule‑engine‑driven migration strategy for legacy log data.


1. Scenario

In an online procurement ecosystem, numerous subsystems generate both operational flow logs and system logs; the former are user‑oriented and readable, while the latter serve developers for debugging and tracing.

Log Classification

System logs target developers with low readability and require specialized queries; operational logs target users and auditors, providing clear traceability of approval processes and outcomes.

Problems

Each subsystem maintains its own log table alongside a single shared public log table; as business scenarios grew and field usage diverged, the public table became unmanageable.

Main Refactoring Directions

To unify logging capabilities, the team plans a log‑system upgrade aligned with a departmental service‑splitting effort.

1) Table Structure Refactor

Add a set of redundant, generic fields so the new schema can absorb roughly three years of anticipated data growth without further structural changes. A rough illustration follows.
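
As a sketch only (the field names here are illustrative, not the team's actual schema), the unified table might pair strongly typed core columns with a handful of reserved generic columns:

// Hypothetical unified log record: core fields plus reserved generic columns
// (ext1..ext3 / extJson) that individual scenarios can repurpose without DDL changes.
public class OperationLogRecord {
    private Long id;
    private String bizId;          // primary business identifier
    private String childBizId;     // secondary identifier, optional
    private String sceneCode;      // which business scenario produced the log
    private String operator;       // who performed the operation
    private String content;        // rendered, human-readable log text
    private java.util.Date gmtCreate;
    // redundant generic fields reserved for future scenarios
    private String ext1;
    private String ext2;
    private String ext3;
    private String extJson;        // free-form JSON for highly customized data
    // getters and setters omitted for brevity
}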

2) Record and Retrieval Service Refactor

Introduce a starter package for consuming business services to integrate, supporting the transformation of legacy logging code while keeping entries readable and key information concisely expressed.

2. Implementation Approach

Two collection schemes were evaluated based on the amount of code changes required from business teams.

2.1 Non‑intrusive Scheme

Leverage the open‑source Canal component to subscribe to MySQL binlogs, infer business changes, and generate logs without any code modifications in the business services.
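
A minimal sketch of what such a binlog subscriber might look like with Canal's Java client; the server address, destination name, and table filter below are placeholders:

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.Message;

import java.net.InetSocketAddress;

public class BinlogLogCollector {
    public static void main(String[] args) throws Exception {
        // connect to a Canal server that is already replicating MySQL binlogs
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111), "example", "", "");
        connector.connect();
        connector.subscribe("biz_db\\..*");              // only the tables we care about
        while (true) {
            Message message = connector.getWithoutAck(100);   // fetch a batch of binlog entries
            long batchId = message.getId();
            if (batchId == -1 || message.getEntries().isEmpty()) {
                Thread.sleep(1000);                           // nothing new, back off briefly
            } else {
                // translate row-change entries into operation-log records here
                message.getEntries().forEach(entry ->
                        System.out.println(entry.getHeader().getTableName()));
            }
            connector.ack(batchId);                           // confirm the batch was processed
        }
    }
}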

Limitations: Canal only captures table-level data changes; it cannot cover RPC calls or highly customized log content, so this scheme was rejected for the current requirements.

2.2 Intrusive Scheme

Provide a starter that offers two collection methods: AOP + Annotation interception and hard‑coded component injection.

The starter includes a log‑template mechanism supporting Spring Expression Language (SpEL) placeholders that resolve method arguments, return values, and custom functions.

Example annotation:

@LogRecord(
        success = "Performed check-in; changed tag to: #{#model.getTag()}; ids: #{T(java.lang.String).join(',',#model.getIds())}",
        bizId = "#{#model.getBizId()}",
        childBizId = "#{#_resource.getChildBizId()}",
        context = "#{@demoService.getContext(#_resource.getContextIndex())}",
        identityType = "#{#model.getIdentity()}"
)
@PostMapping("full")
public ResponseModel fullDemo(@RequestBody RequestModel model) {
    // execute business logic...
    ResponseModel responseModel = new ResponseModel();
    responseModel.setCode("200");
    responseModel.setRequestModel(model);
    responseModel.setChildBizId("biz_001");
    responseModel.setContextIndex("stash");
    return responseModel;
}

The AOP interception flow is illustrated with a diagram in the original article (not reproduced here).
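
As a rough substitute for that diagram, the following sketch shows what such an aspect could look like with Spring AOP and SpEL. Everything beyond the @LogRecord annotation shown above is illustrative; the real starter likely differs, for example by persisting asynchronously and resolving @beanName references through a BeanFactoryResolver:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.common.TemplateParserContext;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LogRecordAspect {

    private final ExpressionParser parser = new SpelExpressionParser();

    // binds the @LogRecord annotation on the intercepted method to the parameter
    @Around("@annotation(logRecord)")
    public Object around(ProceedingJoinPoint pjp, LogRecord logRecord) throws Throwable {
        Object result = pjp.proceed();                      // run the business method first

        // expose method arguments and the return value to the SpEL template
        MethodSignature signature = (MethodSignature) pjp.getSignature();
        StandardEvaluationContext ctx = new StandardEvaluationContext();
        String[] paramNames = signature.getParameterNames();
        Object[] args = pjp.getArgs();
        for (int i = 0; i < paramNames.length; i++) {
            ctx.setVariable(paramNames[i], args[i]);        // e.g. #model
        }
        ctx.setVariable("_resource", result);               // return value, as #_resource

        // render the "success" template; #{...} placeholders are resolved against ctx
        String content = parser.parseExpression(
                logRecord.success(), new TemplateParserContext()).getValue(ctx, String.class);

        // hand 'content' off to the log store; persistence is omitted in this sketch
        return result;
    }
}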

Custom annotation interception is simple to integrate but becomes complex when handling advanced templates such as hyperlinks, keyword masking, or snapshots, which may require hard‑coded logic inside the method.

3. Historical Data Migration

Because the new log schema differs substantially from the old one, a direct one-to-one migration is not possible; fields must be aggregated, remapped, and combined with attached files or rich text.

A lightweight rule engine (QLExpress) is used to define mapping and macro rules, allowing business owners to clean and transform their own data.
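
A minimal sketch of evaluating such a mapping rule with QLExpress; the rule text and field names are made up for illustration:

import com.ql.util.express.DefaultContext;
import com.ql.util.express.ExpressRunner;

public class MigrationRuleDemo {
    public static void main(String[] args) throws Exception {
        ExpressRunner runner = new ExpressRunner();

        // values pulled from one row of a legacy log table (illustrative)
        DefaultContext<String, Object> context = new DefaultContext<>();
        context.put("oldOperator", "Zhang San");
        context.put("oldAction", "approved");
        context.put("oldTarget", "order 1001");

        // a per-scenario mapping rule maintained by the business owner
        String rule = "oldOperator + \" \" + oldAction + \": \" + oldTarget";

        // evaluate the rule; the result becomes the new log's content field
        Object newContent = runner.execute(rule, context, null, true, false);
        System.out.println(newContent);
    }
}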

Migration proceeds per‑scenario ID, with each scenario’s field mapping expressed as an expression; complex file/link rendering uses a pipeline‑style responsibility chain.

Sample chain implementation:

public class ChainContext<T, V> {
    /** Handler stored at this node */
    private AbstractHandler<T, V> handler;
    /** Next node in the chain */
    private ChainContext<T, V> next;
    /** Execute the pipeline from this node onward */
    public void fireChainRun(DataVector<T, V> arg) {
        handler.invoke(arg);
        if (next != null) {
            next.fireChainRun(arg);
        }
    }
    // getters and setters omitted for brevity
}
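
Assembly is then a matter of linking nodes per scenario. A hypothetical wiring sketch follows; FileRenderHandler, LinkRenderHandler, and the batch loader are illustrative names, not part of the actual codebase:

public class MigrationPipelineDemo {
    public void runScenario(DataVector<String, Object> batch) {
        // node that renders attached files (illustrative AbstractHandler subclass)
        ChainContext<String, Object> fileNode = new ChainContext<>();
        fileNode.setHandler(new FileRenderHandler());

        // node that renders hyperlinks (illustrative)
        ChainContext<String, Object> linkNode = new ChainContext<>();
        linkNode.setHandler(new LinkRenderHandler());

        fileNode.setNext(linkNode);     // file rendering -> link rendering
        fileNode.fireChainRun(batch);   // run the whole pipeline on this batch
    }
}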

4. Summary

The article presents a comprehensive redesign of a departmental logging component, covering scenario analysis, log‑collection strategies, schema evolution, and a rule‑engine‑driven migration path, highlighting the increasing complexity of seemingly simple logging requirements.
