How Huolala Accelerated Risk‑Control Testing with Automated Tools
This article details Huolala's challenges in risk‑control testing amid rapid business growth, outlines the inefficiencies of manual configuration verification, and explains how a suite of automated tools and a full‑scope interception strategy dramatically improved testing efficiency, data quality assurance, and cross‑team collaboration.
1. The "Hard March" of Risk‑Control Demand Deployment
1.1 Responsibilities of Risk‑Control Testing
At Huolala, the testing team participates in the entire lifecycle of a requirement to ensure product quality: it safeguards code changes against introducing new issues and confirms that those changes meet product requirements.
Beyond code changes, risk‑control systems rely heavily on configuration and upstream data quality; any errors lead to misjudgments, customer complaints, financial loss, and public‑opinion risk. Consequently, even when no code changes occur, the risk‑control testing team must invest heavily to verify configuration correctness and upstream data accuracy.
Because risk control is a specialized domain, development and testing in other teams often hit obstacles, and the risk‑control testing team frequently steps in to help resolve them.
1.2 Quality and Efficiency Problems in Risk‑Control Demand Launch
When business systems integrate risk‑control capabilities, business testing must verify both hit and miss scenarios, while the risk‑control system must correctly parse messages and ensure data quality. However, due to the system's characteristics and domain expertise, the launch process resembles a "hard march": even simple integration demands consume substantial resources.
Business testing cannot independently construct hit scenarios and needs risk‑control testing support. Feature data originates from upstream cleaning and enrichment, which business testers cannot fully understand.
The risk‑control system cannot guarantee upstream data accuracy; testing must verify it.
Risk‑control products cannot ensure configuration correctness without testing validation.
Online testing can be mistakenly blocked by the risk‑control system, requiring testing assistance.
1.3 Thoughts and Exploration for Quality‑Efficiency Improvement
To balance quality assurance with cost reduction and delivery speed, the following problems need solving:
Difficulty constructing risk scenarios: Provide a scenario‑generation tool and testing strategy to create hit and miss cases with a single identifier.
High data‑verification cost: Embed data‑verification rules in test strategies to reduce manual checks.
Online verification blockage: Use tracing capabilities, white‑lists, and testing support to avoid interruptions.
High testing investment: Optimize the demand‑launch workflow, shift testing left, and replace manual effort with tools.
2. Shifting Testing Left Significantly Increases Efficiency
2.1 Pain Points of Risk‑Control Scenario Testing
After a request reaches the risk‑control system, fields are parsed, cleaned, and enriched into features, which are then evaluated against rules to produce a decision.
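The evaluation flow above (parse, clean and enrich into features, evaluate rules, decide) can be sketched as a small pipeline. This is a minimal illustration; all field names, the enrichment logic, and the rule format are hypothetical, not Huolala's actual implementation.

```python
# Hypothetical sketch of the risk-control evaluation flow:
# parse -> clean/enrich into features -> evaluate rules -> decision.

def parse_fields(message: dict) -> dict:
    """Extract the raw fields the risk system cares about (assumed set)."""
    return {k: message.get(k) for k in ("driver_id", "order_amount", "city")}

def enrich_features(fields: dict) -> dict:
    """Clean raw fields and derive features (enrichment greatly simplified)."""
    features = dict(fields)
    features["high_amount"] = (fields.get("order_amount") or 0) > 1000
    return features

def evaluate(features: dict, rules) -> str:
    """Return the name of the first rule that hits, else 'pass'."""
    for name, predicate in rules:
        if predicate(features):
            return name
    return "pass"

# A single illustrative rule: block unusually large orders.
rules = [("block_high_amount", lambda f: f["high_amount"])]
decision = evaluate(
    enrich_features(parse_fields({"driver_id": "d1", "order_amount": 2000})),
    rules,
)
```

Here `decision` carries the final verdict for one message; a real system would evaluate many rules and attach interception codes rather than a single string.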
Business testing often faces three issues:
Uncertainty about which risk interfaces will be invoked and which strategies will be hit.
Lack of knowledge on how to construct data that triggers risk.
Lack of clarity on how to release a flagged request from risk control after testing.
To reduce communication cost and improve efficiency, we developed two tools:
Risk‑Hit Retrieval Tool: Quickly locate invoked risk interfaces and hit strategies.
Risk‑Scenario Construction Tool: Accelerate data creation for risk scenarios.
2.2 Risk‑Hit Retrieval Reduces Communication Overhead
Because the risk system only stores messages that hit a strategy, testers must search logs to locate triggers, which is inefficient. By deploying a constant‑hit policy in the test environment, all traffic is recorded, enabling fast retrieval of hit information via identifiers and time ranges.
Retrieval results list, for the given identifier and time range, each invoked risk interface and the strategy it hit.
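The recording and lookup mechanism can be sketched in a few lines. The in-memory store and the record fields below are assumptions for illustration; the real tool would query a log or database backend.

```python
# Hypothetical in-memory version of the risk-hit retrieval tool: with a
# constant-hit policy in the test environment, every evaluated request is
# recorded, so hits can be looked up by identifier and time range.
from dataclasses import dataclass

@dataclass
class HitRecord:
    identifier: str   # e.g. driver ID
    interface: str    # which risk interface was invoked
    strategy: str     # which strategy was hit
    ts: int           # epoch seconds

RECORDS: list[HitRecord] = []

def record_hit(identifier: str, interface: str, strategy: str, ts: int) -> None:
    """Store one evaluated request (called for ALL traffic under constant-hit)."""
    RECORDS.append(HitRecord(identifier, interface, strategy, ts))

def query_hits(identifier: str, start: int, end: int) -> list[HitRecord]:
    """Return every recorded hit for an identifier inside [start, end]."""
    return [r for r in RECORDS
            if r.identifier == identifier and start <= r.ts <= end]
```

A tester can then answer "which interfaces and strategies did my account trigger?" with one query instead of searching logs.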
2.3 Scenario‑Generation Tool Boosts Data Creation Efficiency
The tool simplifies scenario testing and data‑quality verification through three steps:
Data compliance check: Users select a desired hit scenario and provide a unique identifier (e.g., driver ID); the tool validates compliance.
Feature injection: Once compliant, the identifier is injected into the risk blacklist for automatic hit.
Data‑quality verification: The tool combines generated, business, and parameter‑validation features into a test strategy, reducing manual checks.
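The three steps above can be sketched as follows. The compliance rule (a test-prefixed identifier), the blacklist representation, and the expected-feature format are all assumptions made for this illustration.

```python
# Hypothetical sketch of the scenario-generation tool's three steps:
# 1) compliance check, 2) blacklist injection for an automatic hit,
# 3) data-quality verification against expected feature values.

BLACKLIST: set[str] = set()

def check_compliance(identifier: str) -> bool:
    """Step 1: accept only test identifiers (assumed rule: 'test_' prefix)."""
    return identifier.startswith("test_")

def inject(identifier: str) -> None:
    """Step 2: add the identifier to the risk blacklist so it hits automatically."""
    BLACKLIST.add(identifier)

def verify(identifier: str, features: dict, expected: dict) -> list[str]:
    """Step 3: report every feature that deviates from its expected value."""
    errors = [f"{k}: {features.get(k)!r} != {v!r}"
              for k, v in expected.items() if features.get(k) != v]
    if identifier not in BLACKLIST:
        errors.append("identifier not blacklisted")
    return errors
```

An empty error list means the scenario both hit as intended and produced features matching expectations, replacing a round of manual checks.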
2.4 Risk‑Control Demand Testing Leads the Way
By adopting a left‑shift testing approach, the risk‑control team builds tools during development, allowing business testing to operate independently during the testing phase, thus reducing bottlenecks and cutting testing investment.
2.5 Limitations of Quality Assurance
The scenario‑generation tool requires proactive identification of risk‑related changes by development and testing teams; otherwise, data‑quality issues may slip into production.
To address this, a proactive data‑quality interception capability is needed in the test environment.
3. Global Risk‑Control: End‑to‑End Data‑Quality Assurance
Previous scenario‑based verification struggled with undetected upstream changes, leading to data‑quality problems in risk‑control. Testing only covered partial traffic and lacked a full‑scale interception method.
By combining full‑traffic constant‑hit recording with scenario verification, we extend it into a global interception strategy that checks all traffic.
3.1 Global Interception Strategy
In the test environment, a full‑traffic control policy validates each field against interface requirements, returning interception codes and notifying testers of errors, turning passive blocking into active discovery.
The process consists of three steps:
Define global risk policy: Classify features by importance and configure interception rules on the risk platform.
Implement interception: The risk system blocks non‑compliant traffic and notifies the feature‑testing platform.
Track and resolve issues: Testers create follow‑up tickets, assess data‑quality problems, and address them promptly.
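The field-level validation at the heart of this strategy can be sketched like so. The schema, field names, and interception-code format are hypothetical; the real policy would be configured on the risk platform, not hard-coded.

```python
# Hypothetical field-level interception: every field of incoming test traffic
# is validated against the interface's declared requirements; non-compliant
# traffic is intercepted with a code naming the first failing field.

# Assumed interface schema: required field -> expected type.
REQUIRED = {"driver_id": str, "order_amount": (int, float)}

def intercept(payload: dict) -> tuple:
    """Return (intercepted?, code); code identifies the failing field."""
    for field, expected_type in REQUIRED.items():
        if field not in payload:
            return True, f"MISSING_{field.upper()}"
        if not isinstance(payload[field], expected_type):
            return True, f"BAD_TYPE_{field.upper()}"
    return False, "OK"
```

The interception code is what turns passive blocking into active discovery: it is pushed to the feature-testing platform so a tester can open a ticket against the exact field that failed.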
3.2 Early‑Stage Data Governance
Excessive noise in test traffic, historical field loss, and inconsistent stakeholder awareness hinder effective interception. Early data governance reduces noise, resolves missing fields, and aligns expectations.
3.3 Discovering Data Issues
After governance, thousands of flows are intercepted daily. By clustering high‑similarity traffic (e.g., from automated tests) using unique identifiers, we lower manual review costs.
We also add noisy flow signatures to Elasticsearch for automatic detection and labeling, further reducing discovery effort.
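One way to realize this clustering is to hash each flow's stable fields into a signature, so near-identical flows collapse into one group and known-noise signatures are auto-labeled. The field names and the choice of excluded keys below are assumptions; the article's actual labeling runs through Elasticsearch.

```python
# Hypothetical noise-reduction sketch: group intercepted flows by a signature
# over their stable fields, so thousands of near-identical flows (e.g. from
# automated tests) collapse into a few groups for manual review.
import hashlib
from collections import defaultdict

def signature(flow: dict) -> str:
    """Hash the stable fields of a flow, ignoring volatile ones (assumed keys)."""
    stable = sorted((k, str(v)) for k, v in flow.items()
                    if k not in {"ts", "trace_id"})
    return hashlib.sha1(repr(stable).encode()).hexdigest()[:12]

def cluster(flows: list) -> dict:
    """Group flows sharing a signature; each group needs only one review."""
    groups = defaultdict(list)
    for f in flows:
        groups[signature(f)].append(f)
    return groups

# Signatures already confirmed as noise (e.g. automated-test traffic).
KNOWN_NOISE: set[str] = set()

def is_noise(flow: dict) -> bool:
    """Auto-label a flow as noise if its signature was seen before."""
    return signature(flow) in KNOWN_NOISE
```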
4. Risk‑Control No Longer Blocks Online Verification
Testing accounts are often flagged as fraudulent by the risk system, limiting their use. Adding accounts to a risk‑control whitelist mitigates this, but discovery often occurs late, incurring extra coordination cost and risk of stale whitelists.
4.1 Tool‑Usage Tracking
When business testers use the scenario‑generation tools, usage records and user info are sent back to the risk‑control platform, which aggregates them and pushes tickets to tool users. Users fill in demand details and receive reminders to whitelist accounts before launch.
4.2 Online Whitelist Management
Strict periodic inspections and controls prevent non‑test accounts or departed personnel from remaining on the whitelist, reducing unexpected risk.
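A periodic audit of this kind can be sketched as a simple filter over whitelist entries. The entry fields and removal criteria below are illustrative assumptions, not the production policy.

```python
# Hypothetical periodic whitelist audit: flag entries that belong to departed
# personnel or are not test accounts, so stale entries are removed promptly.
from dataclasses import dataclass

@dataclass
class WhitelistEntry:
    account: str          # whitelisted test account
    owner: str            # employee responsible for the entry
    is_test_account: bool # non-test accounts should never stay whitelisted

def audit(entries: list, active_employees: set) -> list:
    """Return the entries that should be removed from the whitelist."""
    return [e for e in entries
            if e.owner not in active_employees or not e.is_test_account]
```

Running such a check on a schedule keeps the whitelist aligned with the current test roster instead of relying on manual cleanup.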
5. Looking Ahead
Automation and cross‑team collaboration have cut the human effort required for risk‑control testing by a third, allowing the team to focus on high‑value work. Future directions include AI‑driven scenario identification, enhanced tokenization for noise reduction, and smarter cross‑team tools.
Intelligent: Apply AI for automatic link analysis and scenario recommendation.
Efficient: Continuously improve token detection to better spot noisy traffic.
Service‑Oriented: Build more intelligent tools and collaboration mechanisms to serve business testing and development.