Challenges and Solutions for Automated Testing of a Transaction Middleware Platform
The article analyzes the difficulties of testing a large‑scale transaction middleware platform (data diversity, massive volume, consistency guarantees, and a lack of integration standards) and presents a rule‑driven, layered automation framework built on Drools, jOOQ, and data objectification that dramatically improved testing efficiency and reliability.
In early 2020 the e‑commerce R&D team built a transaction middleware platform that unified dozens of business lines through a master‑data‑centric approach, but testing it was difficult because of the variety of data, its massive scale, the consistency guarantees required, and the absence of standardized integration.
Traditional automated testing struggled in this scenario because test case code was highly coupled, validation logic was scattered across many tests, and there was no capability to automatically compare full tables across multiple data sources such as business, sync, and master databases.
To address these issues the team upgraded the automation framework with three core ideas: (1) data‑driven automation, where the data itself carries rules (e.g., an order‑amount formula); (2) data‑behavior automation, where actions such as create, query, and cancel are treated as testable behaviors; (3) context‑aware validation rules that verify outcomes, such as a successful cancellation, only when the context makes them applicable.
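The first idea, data that carries its own rules, can be sketched in plain Java. This is a minimal illustration, not the team's actual code: the `Order` and `Rule` types and the rule name are hypothetical, standing in for the article's order‑amount formula.

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of "data carrying its own rules": the Order type,
// Rule type, and rule names are illustrative, not the team's real classes.
public class RuleCarryingData {

    record Order(BigDecimal unitPrice, int quantity, BigDecimal amount) {}

    // A named validation rule that travels with the data model.
    record Rule(String name, Predicate<Order> check) {}

    // e.g. the order-amount formula mentioned in the article:
    // amount must equal unitPrice * quantity.
    static final List<Rule> ORDER_RULES = List.of(
        new Rule("amount = unitPrice * quantity",
            o -> o.amount().compareTo(
                o.unitPrice().multiply(BigDecimal.valueOf(o.quantity()))) == 0)
    );

    // Generic validator reused by every test scenario.
    static List<String> violations(Order o) {
        return ORDER_RULES.stream()
            .filter(r -> !r.check().test(o))
            .map(Rule::name)
            .toList();
    }

    public static void main(String[] args) {
        Order good = new Order(new BigDecimal("9.90"), 3, new BigDecimal("29.70"));
        Order bad  = new Order(new BigDecimal("9.90"), 3, new BigDecimal("30.00"));
        System.out.println(violations(good)); // prints []
        System.out.println(violations(bad));  // prints [amount = unitPrice * quantity]
    }
}
```

Because the rule is defined once on the data model rather than inside each test, every scenario that constructs an `Order` gets the same check for free, which is the reuse benefit described next.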
These designs bring two main benefits: a single rule definition can be reused across many test scenarios, reducing test‑case writing effort, and maintaining cohesive rule sets is easier than maintaining dispersed validation logic.
Because order data involves numerous rules, the team introduced a rule‑layering strategy (a basic rule layer plus differential and custom rule layers) and adopted the Drools rule engine: core rules are stored in .drl files, grouped into basic, custom, and differential sets, and triggered flexibly during testing.
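A basic‑layer rule in a .drl file might look like the following hedged sketch. The package, fact type, agenda‑group names, and the `addViolation` helper are all assumptions for illustration; only the layering idea (basic vs. custom vs. differential groups) comes from the article.

```
package com.example.order.rules   // hypothetical package

import java.math.BigDecimal;

rule "basic: order amount equals unit price times quantity"
    agenda-group "basic"   // basic layer; "custom" and "differential"
                           // groups would be fired separately
when
    $o : Order( amount != unitPrice.multiply(new BigDecimal(quantity)) )
then
    $o.addViolation(drools.getRule().getName());  // hypothetical helper on the fact
end
```

Grouping rules by agenda group is what lets the framework fire only the basic set for a smoke test but add the differential set when a specific business line is under test.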
To simplify data handling, the framework uses the jOOQ library to auto‑generate entity classes for database access and incorporates interface‑specific JARs to deserialize API response objects, turning raw data into reusable objects for validation.
The overall technical stack and architecture are illustrated in the diagrams accompanying the original article.
The tool is applied in three main areas:
Data comparison testing: automatically query both sides of the data pipeline, compare fields, and report results, replacing manual Excel‑based checks that were time‑consuming.
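The field‑by‑field comparison step can be sketched as follows. This assumes rows from the two sides of the pipeline arrive as column‑to‑value maps (in practice they would come from jOOQ‑generated records); the class and record names are illustrative.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Hedged sketch of comparing one row from each side of the data pipeline.
public class TableDiff {

    record FieldDiff(String column, Object source, Object target) {}

    // Compare every column of the source row against the target row.
    static List<FieldDiff> compareRow(Map<String, Object> sourceRow,
                                      Map<String, Object> targetRow) {
        List<FieldDiff> diffs = new ArrayList<>();
        for (Map.Entry<String, Object> e : sourceRow.entrySet()) {
            Object other = targetRow.get(e.getKey());
            if (!Objects.equals(e.getValue(), other)) {
                diffs.add(new FieldDiff(e.getKey(), e.getValue(), other));
            }
        }
        return diffs;
    }

    public static void main(String[] args) {
        Map<String, Object> business = new LinkedHashMap<>();
        business.put("order_id", 1001L);
        business.put("status", "PAID");
        Map<String, Object> master = new LinkedHashMap<>();
        master.put("order_id", 1001L);
        master.put("status", "CREATED");   // out-of-sync field
        System.out.println(compareRow(business, master));
    }
}
```

Running the comparison over every row of both tables, rather than spot‑checking in Excel, is what makes full‑table verification feasible.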
Automated regression and rule validation: encapsulate core order logic (price, status, etc.) into templates that the tool validates across scenarios, enabling even inexperienced testers to run comprehensive regression tests.
Automated error‑data detection: periodically run rule‑based checks on recent orders to surface synchronization errors that previously relied on manual discovery.
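The periodic error‑data scan reduces to filtering recent orders through the same rules. In this hedged sketch, "recent" is a time‑window filter and the sync rule is a predicate over a hypothetical `OrderRow`; in production this would run on a scheduler against the real order tables.

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.List;
import java.util.function.Predicate;

// Hedged sketch of the periodic rule-based scan for faulty synced data.
public class ErrorDataScan {

    record OrderRow(long id, Instant createdAt,
                    String businessStatus, String masterStatus) {}

    // Rule: both sides of the sync must agree on status.
    static final Predicate<OrderRow> IN_SYNC =
        o -> o.businessStatus().equals(o.masterStatus());

    static List<OrderRow> findFaulty(List<OrderRow> orders, Instant since) {
        return orders.stream()
            .filter(o -> o.createdAt().isAfter(since))   // only recent orders
            .filter(IN_SYNC.negate())                    // keep rule violations
            .toList();
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        List<OrderRow> orders = List.of(
            new OrderRow(1, now.minus(1, ChronoUnit.HOURS), "PAID", "PAID"),
            new OrderRow(2, now.minus(2, ChronoUnit.HOURS), "PAID", "CREATED"),
            new OrderRow(3, now.minus(3, ChronoUnit.DAYS),  "PAID", "CREATED"));
        // scan the last 24 hours: only order 2 is both recent and out of sync
        System.out.println(findFaulty(orders, now.minus(24, ChronoUnit.HOURS)));
    }
}
```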
Project outcomes include a dramatic increase in testing efficiency—regression tests that once took an hour now finish in minutes, and cross‑business data sync tests reduced from a full day to about 30 minutes—as well as faster error correction by proactively identifying faulty data.
Future improvements aim to integrate the framework with a front‑end automation platform for better management and to combine it with a precise diff engine that continuously refines test coverage based on automation feedback.
HomeTech tech sharing