Design and Practice of a Risk Control Experiment Platform at Du Xiaoman
This article explains the background, architecture, challenges, and step‑by‑step design of a big‑data‑driven risk control experiment platform used for online and offline strategy testing in internet finance.
Introduction: Big‑data risk control in internet finance has matured, making fully online financial services possible and creating demand for data‑driven rule and analysis systems. The risk control experiment platform provides an environment that covers the entire strategy lifecycle, from development through validation to rollout.
Business background: The platform supports two experiment types: offline experiments, which validate new or changed strategies against comprehensive test scenarios before launch, and small‑traffic online experiments, which evaluate strategy changes on a limited slice of live traffic to bound their impact.
Architecture design: The system consists of three layers—business layer (pre‑loan, in‑loan, post‑loan units), platform layer (variable processing and decision modules), and data layer (integration of internal and external data for analysis and model training). The experiment platform sits within the decision module, linking online traffic to various experiment branches.
Overall architecture: Traffic from the decision platform flows to the online experiment layer, where small‑traffic tests run and results feed back to an OLAP system. Offline experiments use mirrored traffic and historical replay, with results also stored in OLAP for analysis.
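The historical‑replay path described above can be sketched as a simple offline loop: stored decision requests are re‑run through a candidate strategy and the baseline/candidate decision pairs are collected for downstream OLAP analysis. This is a minimal illustration with hypothetical field names (`id`, `decision`, `score`), not the platform's actual schema.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class ReplayResult:
    """One diff row: how the candidate strategy would have decided
    versus what the baseline actually decided."""
    request_id: str
    baseline_decision: str
    candidate_decision: str


def replay(requests: Iterable[dict],
           candidate: Callable[[dict], str]) -> List[ReplayResult]:
    """Re-run historical decision requests through a candidate strategy
    offline; the resulting rows would be loaded into OLAP for analysis."""
    return [
        ReplayResult(r["id"], r["decision"], candidate(r))
        for r in requests
    ]


# Hypothetical candidate rule: approve when a credit score clears a threshold.
candidate_rule = lambda r: "approve" if r["score"] >= 600 else "reject"
```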
Key challenges: (1) Determining the statistical significance of small‑traffic results while consuming as little live traffic as possible; (2) Handling variable/feature iteration without "data leakage" when recalculating features offline; (3) Minimizing the performance overhead that experiment tagging adds to online decision latency; (4) Reducing the long execution times of large‑scale offline experiments through elastic offline computation.
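For challenge (3), a common way to keep experiment tagging cheap is deterministic hash bucketing: each request is assigned to a branch by hashing its user ID with the experiment ID, so tagging is a constant‑time computation with no extra storage lookup on the decision path. The sketch below assumes this approach; the function name and the 10,000‑bucket granularity are illustrative, not the platform's actual implementation.

```python
import hashlib


def in_experiment(user_id: str, experiment_id: str, traffic_pct: float) -> bool:
    """Deterministically assign a user to a small-traffic experiment.

    Hashing user_id with experiment_id keeps assignment stable across
    requests and independent across experiments, and the O(1) cost adds
    negligible latency to the online decision flow.
    """
    digest = hashlib.md5(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000  # bucket in 0..9999
    return bucket < traffic_pct * 10000
```

Because the assignment is a pure function of the IDs, the same user always lands in the same branch, which keeps the experiment's sample consistent over time.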
Design process for a risk experiment: (1) Sample preparation via variable back‑tracking; (2) Rule editing to implement new variables; (3) Offline experiments (historical, mirror, constructed) to validate the rules; (4) Small‑traffic online experiments, with a t‑test to confirm statistical significance; (5) Full‑traffic rollout once a significant improvement is confirmed.
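The significance check in step (4) can be illustrated with Welch's t‑test, which compares a metric (e.g. a risk rate) between the control and experimental branches without assuming equal variances. This is a minimal pure‑Python sketch of the statistic, assuming the platform's metric samples are available as plain numeric lists; in practice the resulting t and degrees of freedom would be compared against a t‑distribution to obtain a p‑value.

```python
import math
from statistics import mean, variance


def welch_t(sample_a, sample_b):
    """Welch's t-statistic and degrees of freedom for two independent
    samples, e.g. a risk metric under the control vs. experimental branch."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

If |t| exceeds the critical value for the chosen significance level at df degrees of freedom, the difference between branches is treated as significant and the strategy can proceed to full‑traffic rollout.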
The session was presented by senior technical expert Tan Linghang from Du Xiaoman and edited by Guo Zenghuang, with the content published on the DataFunTalk platform.