
Design and Implementation of a Business Operation Log Management System Using Canal and Elasticsearch

The article presents a decoupled business operation log management architecture that uses Alibaba’s Canal to capture MySQL binlog changes, streams them through Kafka, and stores structured before‑and‑after records in Elasticsearch with nested mappings, enabling multi‑table correlation via transaction IDs, visual querying, and reliable rollback without modifying application code.

Sohu Tech Products

In daily business systems, operation logs are essential for tracking user actions, recording data changes, and supporting rollback when needed. A client requested a comprehensive business operation log management feature that records user behavior, provides a visual query page, and supports error‑operation rollback.

The article distinguishes two main log types: system logs (debug, info, warn, error) used by developers and operators, and operation logs that capture actual business actions (e.g., which user clicked which menu) and are stored in a database for administrators.

Traditional implementations—embedding log statements directly in business code or using AOP—have significant drawbacks: they require extensive code changes, cannot easily capture before/after values for batch or multi‑table operations, and struggle with complex scenarios.

To overcome these issues, the solution adopts Canal, an open-source component from Alibaba that subscribes to MySQL binary logs (binlog). Canal captures real-time data change events (INSERT, UPDATE, DELETE) at the database layer, decoupling logging from business code and supporting batch and multi-table operations.

Advantages of Canal include: decoupling business code, handling bulk and multi‑table changes, and being language‑agnostic. Its disadvantages are the need for consistent table design, potential compatibility issues with complex cascades, and the necessity to filter non‑business changes.
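The last disadvantage, filtering out non-business changes, can be handled with a whitelist check on the (database, table) pair before an event enters the pipeline. A minimal sketch in Python; the whitelist contents are illustrative assumptions, not part of the original design:

```python
# Hypothetical whitelist of (database, table) pairs whose binlog events
# count as business operations; events from any other source are dropped.
BUSINESS_TABLES = {
    ("db_business", "goods"),
}

def is_business_change(event: dict) -> bool:
    """Return True if a parsed Canal event belongs to a whitelisted business table."""
    return (event.get("database"), event.get("table")) in BUSINESS_TABLES

# A maintenance change on a system schema is filtered out:
print(is_business_change({"database": "db_business", "table": "goods"}))  # True
print(is_business_change({"database": "mysql", "table": "user"}))         # False
```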

Data parsing and conversion: Canal reads the binlog from the business database, parses it, and pushes the structured change events to Kafka. An example of the parsed JSON payload is:

{
  "data": [{
    "id": "122158992930664499",
    "goodsName": "new goods name",
    "update_time": "2020-08-26 13:45:46"
  }],
  "old": [{
    "goodsName": "old goods name",
    "update_time": "2020-08-26 09:15:13"
  }],
  "database": "db_business",
  "table": "goods",
  "type": "UPDATE",
  "ts": 1587879945698
}
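The Kafka consumer's job is then a pure transformation from the Canal payload above into the operation-log record shown next. A sketch in Python; the ACTION_MAP values follow the article's action codes, while the uuid-based txID fallback is an illustrative assumption:

```python
import uuid

# Maps Canal event types to the article's action codes (0 = add, 1 = delete, 2 = update).
ACTION_MAP = {"INSERT": 0, "DELETE": 1, "UPDATE": 2}

def canal_to_oplog(canal_msg, operator, tx_id=None):
    """Convert one parsed Canal message into operation-log records, one per changed row."""
    records = []
    # For INSERTs Canal has no "old" section, so pair each row with None.
    old_rows = canal_msg.get("old") or [None] * len(canal_msg["data"])
    for new_row, old_row in zip(canal_msg["data"], old_rows):
        records.append({
            "newObject": new_row,
            "oldObject": old_row or {},
            "action": ACTION_MAP[canal_msg["type"]],
            "operator": operator,
            "databaseName": canal_msg["database"],
            "tableName": canal_msg["table"],
            "txID": tx_id or uuid.uuid4().hex,  # groups rows changed in one transaction
            "timestamp": canal_msg["ts"],
        })
    return records

msg = {
    "data": [{"id": "122158992930664499", "goodsName": "new goods name"}],
    "old": [{"goodsName": "old goods name"}],
    "database": "db_business", "table": "goods", "type": "UPDATE", "ts": 1587879945698,
}
logs = canal_to_oplog(msg, operator="user001")
print(logs[0]["action"])  # 2
```

Note that Canal's "old" section lists only the columns that actually changed, which is why oldObject may be a subset of newObject.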

A generic interface is defined to carry both the new and old data along with field descriptions. An example request/response structure is:

{
  "id": "10001",
  "groupID": 1700,
  "system": "01",
  "newObject": {
    "goodsName": "goods name 001",
    "goodsCode": "goods code 001"
  },
  "oldObject": {
    "goodsName": "goods name",
    "goodsCode": "goods code"
  },
  "fieldsDescription": {
    "goodsID": "goods ID",
    "goodsName": "goods name",
    "goodsCode": "goods code"
  },
  "action": 2,
  "description": "update goods info",
  "operator": "user001",
  "databaseName": "db_business",
  "tableName": "goods",
  "module": "goods management",
  "txID": "36aef98585db4e7a98f9694c8ef28b8c",
  "timestamp": 1587879945698
}

The payload fields include identifiers such as groupID, databaseName, tableName, oldObject, newObject, fieldsDescription, operator, module, action (0 = add, 1 = delete, 2 = update), description, and txID, which is used to aggregate changes belonging to the same transaction.

For storage, a NoSQL solution—Elasticsearch—is used, with monthly indices per business line. The index mapping (simplified) is:

PUT goods-nested
{
  "mappings": {
    "properties": {
      "id": {"type": "integer"},
      "groupID": {"type": "integer"},
      "bizSource": {"type": "keyword"},
      "action": {"type": "integer"},
      "description": {"type": "keyword"},
      "operator": {"type": "keyword"},
      "databaseName": {"type": "keyword"},
      "tableName": {"type": "keyword"},
      "bizmodule": {"type": "keyword"},
      "txId": {"type": "keyword"},
      "newObject": {"type": "nested", "properties": {"goodsID": {"type": "integer"}, "goodsName": {"type": "keyword"}, "goodsCode": {"type": "keyword"}}},
      "oldObject": {"type": "nested", "properties": {"goodsID": {"type": "integer"}, "goodsName": {"type": "keyword"}, "goodsCode": {"type": "keyword"}}},
      "fieldsDescription": {"type": "nested", "properties": {"goodsID": {"type": "integer"}, "goodsName": {"type": "keyword"}, "goodsCode": {"type": "keyword"}}}
    }
  }
}
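The monthly-index-per-business-line layout mentioned above reduces to a naming convention applied at write time. A sketch in Python; the `<bizSource>-nested-YYYY.MM` pattern is an assumed naming scheme, since the article does not spell one out:

```python
from datetime import datetime, timezone

def monthly_index(biz_source: str, ts_millis: int) -> str:
    """Derive a monthly index name from the business line and the event timestamp."""
    dt = datetime.fromtimestamp(ts_millis / 1000, tz=timezone.utc)
    return f"{biz_source.lower()}-nested-{dt:%Y.%m}"

print(monthly_index("Scm", 1587879945698))  # scm-nested-2020.04
```

Writing each document to the index for its own month keeps individual indices bounded in size and lets retention be enforced by dropping whole indices.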

Sample bulk insertion of operation logs:

POST goods-nested/_bulk
{ "index": { "_index": "goods-nested", "_id": "10001" } }
{ "id": "10001", "groupID": 1700, "bizSource": "Scm", "newObject": {"goodsID": 1001, "goodsName": "goods name 001", "goodsCode": "goods code 001"}, "oldObject": {"goodsID": 1001, "goodsName": "goods name", "goodsCode": "goods code"}, "fieldsDescription": {"goodsName": "goods name", "goodsCode": "goods code"}, "action": 2, "description": "update group product category", "operator": "001", "databaseName": "db_supply_chain_basic", "tableName": "tbl_chain_distribution", "bizmodule": "group product category", "txId": "36aef98585db4e7a98f9694c8ef28b8c" }
... (additional documents omitted for brevity)

Querying logs by product ID using a nested query:

GET goods-nested/_search
{
  "query": {
    "nested": {
      "path": "newObject",
      "query": {
        "bool": {
          "must": [
            { "match": { "newObject.goodsID": "1001" } }
          ]
        }
      }
    }
  }
}
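When the visual query page issues this search from application code rather than from Dev Tools, the same nested query body can be built programmatically. A sketch in Python that only constructs the request body (the helper name is an assumption):

```python
def nested_goods_query(goods_id: int) -> dict:
    """Build the nested query body for looking up operation logs by product ID."""
    return {
        "query": {
            "nested": {
                "path": "newObject",
                "query": {
                    "bool": {
                        "must": [
                            {"match": {"newObject.goodsID": goods_id}}
                        ]
                    }
                },
            }
        }
    }

body = nested_goods_query(1001)
print(body["query"]["nested"]["path"])  # newObject
```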

To handle multi-table operations, the transaction ID (txID) is used to aggregate related changes, enabling consistent rollback. Non-business changes (e.g., database maintenance) are filtered out to keep the log focused on actual user actions.
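Rollback of an erroneous UPDATE can then be derived from the stored oldObject: fetch every log document sharing the txID and replay the old values. A minimal sketch in Python; building SQL strings this way is purely illustrative (a real implementation would use parameterized statements), and the caller-supplied key field is an assumption:

```python
def rollback_sql(log_doc: dict, key_field: str) -> str:
    """Build an UPDATE statement restoring the pre-change values in oldObject,
    keyed on the given field from newObject."""
    table = f'{log_doc["databaseName"]}.{log_doc["tableName"]}'
    sets = ", ".join(f"{k} = '{v}'" for k, v in log_doc["oldObject"].items())
    key_val = log_doc["newObject"][key_field]
    return f"UPDATE {table} SET {sets} WHERE {key_field} = '{key_val}'"

doc = {
    "databaseName": "db_business", "tableName": "goods",
    "newObject": {"goodsID": 1001, "goodsName": "goods name 001"},
    "oldObject": {"goodsName": "goods name"},
}
print(rollback_sql(doc, "goodsID"))
# UPDATE db_business.goods SET goodsName = 'goods name' WHERE goodsID = '1001'
```

Because oldObject contains only the columns that changed, the generated statement touches exactly those columns and leaves the rest of the row alone.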

In conclusion, the proposed architecture—Canal for change capture, Kafka for transport, and Elasticsearch for storage and search—provides a decoupled, scalable, and language‑agnostic solution for business operation log management, while acknowledging challenges such as multi‑table correlation and accurate operator recording.

Written by Sohu Tech Products

A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.
