Adapting goInception for OceanBase MySQL: Full Guide with Python Integration
This article walks through the challenges of using goInception with OceanBase‑MySQL tenants, details the code modifications required to parse OB's Query Plan, shows how to expose the enhanced engine via a Python wrapper, and presents validation results demonstrating accurate, low‑latency SQL auditing.
Business Pain Point
In large‑scale OceanBase (OB) deployments, the database work‑order platform relies on goInception for SQL review, but the open‑source tool cannot extract the estimated row count (est_rows) from OB‑MySQL's distributed execution plan, causing high‑risk DML operations to slip through and increasing operational risk.
Core Idea
The solution involves three steps: problem diagnosis, technical refactoring of goInception, and end‑to‑end validation, enabling automatic SQL audit for OB‑MySQL tenants.
Technical Refactoring
The modifications target goInception version 1.3.0 and consist of three main parts:
Extend the ExplainInfo struct to store OB‑specific execution plan text.
```go
type ExplainInfo struct {
	SelectType string `gorm:"Column:select_type"`
	Table      string `gorm:"Column:table"`
	// ... other fields ...
	EstRows string         `gorm:"Column:estRows"`
	ObPlan  sql.NullString `gorm:"Column:Query Plan"`
}
```

Enhance getExplainInfo to handle multiple parsing branches (the existing MySQL/TiDB logic plus the new OB logic). The added checks insert conditional blocks of the form:
```go
if row.ObPlan.Valid {
	row.Rows = ObRowAffect(row.ObPlan)
}
```

Implement ObRowAffect to parse the textual Query Plan returned by OB, extract the maximum estimated row count, and return it as an int64.
```go
import (
	"bufio"
	"database/sql"
	"strconv"
	"strings"
)

// ObRowAffect scans the textual Query Plan returned by OB-MySQL and
// returns the largest EST.ROWS value found in it.
func ObRowAffect(plan sql.NullString) int64 {
	if !plan.Valid {
		return 0
	}
	br := bufio.NewReader(strings.NewReader(plan.String))
	estrows := []string{}
	for {
		l, e := br.ReadString('\n')
		if e != nil && len(l) == 0 {
			break
		}
		// Plan rows are rendered as "|ID|OPERATOR|NAME|EST.ROWS|COST|",
		// so column index 4 holds the estimated row count.
		if strings.HasPrefix(l, "|") {
			parts := strings.Split(l, "|")
			if len(parts) > 4 {
				estrows = append(estrows, strings.TrimSpace(parts[4]))
			}
		}
	}
	var maxRows int
	// Start at index 1 to skip the header row (the "EST.ROWS" label).
	for i := 1; i < len(estrows); i++ {
		if v, err := strconv.Atoi(estrows[i]); err == nil && v > maxRows {
			maxRows = v
		}
	}
	return int64(maxRows)
}
```

Python Integration
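With the Go parser in place, the same extraction can be prototyped from Python against captured plan text, which is handy for validating the column-splitting logic without recompiling goInception. This is a sketch of the same technique, not part of the tool itself; the sample plan text is illustrative, and real OB output varies by version and operator tree.

```python
def ob_row_affect(plan: str) -> int:
    """Return the largest EST.ROWS value found in an OB textual Query Plan."""
    cells = []
    for line in plan.splitlines():
        # Plan rows look like "|ID|OPERATOR|NAME|EST.ROWS|COST|".
        if line.startswith("|"):
            parts = line.split("|")
            if len(parts) > 4:
                cells.append(parts[4].strip())
    best = 0
    # Skip index 0: the first "|" row is the column header ("EST.ROWS").
    for cell in cells[1:]:
        try:
            best = max(best, int(cell))
        except ValueError:
            pass
    return best


# Illustrative plan text in the shape the parser expects.
sample_plan = """\
=================================================
|ID|OPERATOR           |NAME|EST.ROWS|COST   |
-------------------------------------------------
|0 |DISTRIBUTED UPDATE |    |6000    |118522 |
|1 | TABLE SCAN        |t1  |6000    |41912  |
=================================================
"""

print(ob_row_affect(sample_plan))  # → 6000
```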
After recompiling goInception with the above changes, a Python wrapper class is provided to invoke the tool from a Python‑based work‑order system.
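The request the wrapper sends is simply the target statements wrapped in goInception's magic-comment protocol, as the check_sql method below assembles. A standalone sketch of that assembly (the function name build_check_sql is illustrative, not part of the class):

```python
def build_check_sql(host: str, port: int, user: str, password: str,
                    database: str, sqls: str, max_insert_rows: int = 10) -> str:
    """Wrap target-DB options and statements in goInception's audit protocol."""
    # --check=1 audits the batch without executing it.
    options = (f"/*--host='{host}';--port={port};--user='{user}';"
               f"--password='{password}';--check=1;"
               f"max_insert_rows={max_insert_rows};*/")
    return (f"{options}\n"
            "inception_magic_start;\n"
            f"use `{database}`;\n"
            f"{sqls};\n"
            "inception_magic_commit;")


batch = build_check_sql("10.0.0.8", 2883, "app", "secret", "orders",
                        "update t1 set c1 = 0 where id < 100")
print(batch.splitlines()[1])  # → inception_magic_start;
```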
```python
from app.common.utils.db_conn.mysql_conn import OpenMysqlDb


class GoInception:
    def __init__(self):
        self.go_inception_host = "localhost"
        self.go_inception_user = "root"
        self.go_inception_password = ""
        self.go_inception_port = 4000
        self.go_inception_db_name = ""
        self.commit = False

    def check_sql(self, host, user, password, port, database, sqls):
        # --check=1 audits the batch without executing it.
        sql = f"""/*--host='{host}';--port={port};--user='{user}';--password='{password}';--check=1;max_insert_rows=10;*/
inception_magic_start;
use `{database}`;
{sqls};
inception_magic_commit;"""
        with OpenMysqlDb(host=self.go_inception_host, user=self.go_inception_user,
                         port=self.go_inception_port, password=self.go_inception_password,
                         db_name=self.go_inception_db_name, commit=self.commit) as conn:
            conn.ping()
            return conn.db_query(sql=sql)

    def execute_sql(self, host, user, password, port, database, sqls,
                    backup=0, ignore_warnings=0, fingerprint=0):
        # --execute=1 runs the statements after they pass the audit.
        sql = f"""/*--host='{host}';--port={port};--user='{user}';--password='{password}';--execute=1;backup={backup};ignore_warnings={ignore_warnings};fingerprint={fingerprint};*/
inception_magic_start;
use `{database}`;
{sqls};
inception_magic_commit;"""
        with OpenMysqlDb(host=self.go_inception_host, user=self.go_inception_user,
                         port=self.go_inception_port, password=self.go_inception_password,
                         db_name=self.go_inception_db_name, commit=self.commit) as conn:
            conn.ping()
            return conn.db_query(sql=sql)
```

Validation and Results
The end‑to‑end workflow is:
Work order → Identify OB‑MySQL tenant → Call Python interface → Pass parameters → goInception parses Query Plan → Extract est_rows → Threshold check → Return audit result → Display or reject in work‑order system

Typical scenario tests show:
For an OB tenant update of 6000 rows, the extracted est_rows exceeds the 5000‑row threshold, and the order is rejected as “exceeds large‑table limit”.
For a native MySQL update of 2000 rows, the original logic applies, the threshold is not exceeded, and the order passes with “no risk”.
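The pass/reject decision in these scenarios reduces to a simple comparison against the configured limit. A minimal sketch, assuming a 5000-row limit and an illustrative audit_rows helper:

```python
# The 5000-row limit matches the scenarios above; the helper name is ours.
LARGE_TABLE_LIMIT = 5000


def audit_rows(est_rows: int, limit: int = LARGE_TABLE_LIMIT):
    """Return (passed, verdict) for a single statement's row estimate."""
    if est_rows > limit:
        return False, "exceeds large-table limit"
    return True, "no risk"


print(audit_rows(6000))  # → (False, 'exceeds large-table limit')
print(audit_rows(2000))  # → (True, 'no risk')
```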
Key metrics:
Accuracy: 100% extraction rate for OB‑MySQL est_rows.
Performance: Each audit takes ≤20 ms, supporting >100 concurrent audits per second.
Conclusion and Extensions
The implemented “low‑level magic‑mod + high‑level wrapper” delivers a non‑intrusive, zero‑perception integration that preserves existing workflows while adding robust support for OB‑MySQL’s distributed execution plans. This approach can be extended to other OB‑compatible tenants or similar distributed databases.
dbaplus Community
Enterprise-level professional community for Database, BigData, and AIOps. Daily original articles, weekly online tech talks, monthly offline salons, and quarterly XCOPS&DAMS conferences—delivered by industry experts.