
Next‑Generation AB Experiment Analysis Engine for Multi‑Sided Scenarios

To overcome the limitations of traditional A/B engines in multi‑sided, small‑sample and spill‑over contexts, the article proposes a next‑generation analysis engine that standardizes adaptive workflows, automates method selection, integrates variance‑reduction and meta‑analysis techniques, and offers a modular, self‑service platform for robust, scalable experimentation.

Meituan Technology Team

Traditional experiment engines assume a single experimental unit, ordinary random grouping, large samples, and independent individuals. These assumptions hold in single-sided scenarios but break down in multi-sided contexts, where spill-over effects and small samples are common. This article proposes a next-generation A/B experiment analysis engine and offers guidance for platform builders facing these challenges.

A/B testing has evolved from purely single-sided experiments to a coexistence of single- and multi-sided scenarios, driven by the rise of online-to-offline (O2O) platforms whose strong location-based-service (LBS) characteristics increase experimental complexity.

Conventional engines struggle with multi-sided experiments because they cannot handle small samples or spill-over effects, leading to statistical errors, variance miscalculation, and reduced robustness.
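
One concrete way variance gets miscalculated: spill-over couples individuals within the same randomization cluster (e.g. riders in one delivery region), so treating them as independent understates uncertainty. The simulation below is our illustration, not from the article; the cluster sizes and effect magnitudes are made-up assumptions.

```python
# Illustrative sketch: individual-level variance estimates shrink far below
# the truth when randomization happens at the cluster level.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 20 clusters (e.g. delivery regions) of 50 riders each, with a
# shared per-cluster shock that induces within-cluster correlation.
n_clusters, per_cluster = 20, 50
cluster_effect = rng.normal(0.0, 1.0, n_clusters)
y = (cluster_effect[:, None] + rng.normal(0.0, 1.0, (n_clusters, per_cluster))).ravel()

# Naive variance of the mean treats all 1000 observations as independent.
naive_var = y.var(ddof=1) / y.size

# Cluster-level variance first aggregates to one mean per cluster,
# respecting the actual unit of randomization.
cluster_means = y.reshape(n_clusters, per_cluster).mean(axis=1)
cluster_var = cluster_means.var(ddof=1) / n_clusters

print(naive_var, cluster_var)  # cluster-level variance is several times larger
```

Under these assumptions the naive estimate is many times too small, which is exactly the kind of robustness failure the article attributes to conventional engines.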

The need for a trustworthy and efficient solution motivates the design of a next‑generation engine that standardizes analysis, automates pipelines, and provides a central method library for internal use and a practical guide for external practitioners.

Experimenters face three dilemmas: building reliable A/B platforms for logistics-style multi-sided scenarios, mismatched team skill sets (engineers often lack statistical expertise), and engines that support only a narrow set of experimental designs, which hurts robustness and scalability.

Our approach introduces a standardized, adaptive analysis workflow that automatically selects appropriate methods based on experimental context, ensuring consistency, robustness, and sensitivity. The pipeline includes data reliability validation, preprocessing, effect estimation, variance estimation, p‑value calculation, and report generation.
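
The staged pipeline above can be sketched as a chain of pluggable steps. This is a minimal illustration of the structure, not Meituan's implementation; the step names and the simple two-sample z-test inside them are our assumptions.

```python
# Minimal sketch of a staged analysis pipeline: validation -> effect
# estimation -> variance estimation -> p-value, each a swappable step.
from dataclasses import dataclass, field
from statistics import NormalDist
import math

@dataclass
class Analysis:
    treatment: list
    control: list
    report: dict = field(default_factory=dict)

def validate(a):
    # Data reliability check: both arms must be non-empty.
    assert a.treatment and a.control, "empty experiment arm"
    return a

def estimate_effect(a):
    a.report["effect"] = (sum(a.treatment) / len(a.treatment)
                          - sum(a.control) / len(a.control))
    return a

def estimate_variance(a):
    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    a.report["se"] = math.sqrt(var(a.treatment) / len(a.treatment)
                               + var(a.control) / len(a.control))
    return a

def p_value(a):
    z = a.report["effect"] / a.report["se"]
    a.report["p"] = 2 * (1 - NormalDist().cdf(abs(z)))
    return a

def run(a, steps=(validate, estimate_effect, estimate_variance, p_value)):
    for step in steps:
        a = step(a)
    return a.report
```

Calling `run(Analysis(treatment=[...], control=[...]))` returns the report dict; because each stage is just a function, an adaptive engine can swap in a different variance estimator or test without touching the rest of the chain.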

The central method library integrates best practices such as variance‑reduction techniques, covariate‑adaptive grouping for small samples, meta‑analysis for leveraging historical results, and solutions for spill‑over effects, covering both single‑ and multi‑sided use cases.
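
As one example of what such a library can hold, here is a hedged sketch of CUPED-style variance reduction using a pre-experiment covariate. The covariate choice and simulated data are our assumptions; the article does not specify which variance-reduction technique the engine uses.

```python
# CUPED-style adjustment: remove the part of the in-experiment metric that
# is explained by a pre-experiment covariate, shrinking variance without
# biasing the mean.
import numpy as np

def cuped_adjust(metric, covariate):
    """Subtract the covariate-explained component of the metric."""
    theta = np.cov(metric, covariate, ddof=1)[0, 1] / np.var(covariate, ddof=1)
    return metric - theta * (covariate - covariate.mean())

rng = np.random.default_rng(1)
pre = rng.normal(10.0, 2.0, 5000)        # pre-experiment metric
post = pre + rng.normal(0.0, 1.0, 5000)  # in-experiment metric, correlated with pre

adjusted = cuped_adjust(post, pre)
print(post.var(), adjusted.var())  # adjusted variance is markedly smaller
```

The adjustment leaves the sample mean unchanged while cutting variance roughly in proportion to the squared correlation between the metric and its pre-period value, which is why such techniques boost sensitivity for small-sample arms.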

Decoupling the analysis engine from underlying infrastructure allows specialized teams (engineers, data scientists, data‑warehouse experts) to focus on their domains, accelerating iteration and improving platform stability.

Key engine features include multi‑method support for diverse experimental designs, zero‑cost integration via a modular pipeline, and a self‑service UI that automates outlier handling and experiment‑period adjustments.
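
Automated outlier handling could be as simple as quantile clipping. The sketch below is our illustration; the article does not state which rule the platform's UI applies, and the 2% threshold is an assumed parameter.

```python
# Winsorization sketch: clip values to empirical quantiles so a handful of
# extreme observations cannot dominate the effect estimate.
def winsorize(values, pct=0.01):
    """Clip values to the [pct, 1 - pct] empirical quantiles."""
    ordered = sorted(values)
    lo = ordered[int(pct * (len(ordered) - 1))]
    hi = ordered[int((1 - pct) * (len(ordered) - 1))]
    return [min(max(v, lo), hi) for v in values]

data = list(range(100)) + [10_000]   # one extreme outlier
clipped = winsorize(data, pct=0.02)
print(max(clipped))  # the outlier is pulled back to the upper quantile
```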

Advanced techniques such as covariate‑adaptive grouping enable reliable grouping with as few as ten samples, while meta‑analysis combines historical experiments to boost statistical power. The modular design facilitates easy integration into any experiment platform.
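
A standard way to combine historical experiments, and plausibly what a meta-analysis module would do, is inverse-variance (fixed-effect) pooling; the effect sizes below are made-up numbers for illustration.

```python
# Fixed-effect meta-analysis: weight each experiment's effect estimate by
# the inverse of its variance, so precise runs count more.
import math

def pool(effects, ses):
    """Combine independent effect estimates by inverse-variance weighting."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three historical runs of the same treatment, each underpowered alone.
effect, se = pool([0.8, 1.1, 0.9], [0.6, 0.5, 0.7])
print(effect, se)  # pooled standard error is smaller than any single run's
```

The pooled standard error is always below the smallest individual one, which is the mechanism by which meta-analysis "boosts statistical power" from historical results.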

In conclusion, the new engine standardizes experiment analysis, shares best practices across teams, and will continue to incorporate emerging methods for spill‑over mitigation and observational studies, further enhancing experiment reliability and efficiency.

Tags: A/B testing, platform engineering, statistical methods, experiment analysis, multi-sided experiments
Written by Meituan Technology Team

Over 10,000 engineers powering China’s leading lifestyle services e‑commerce platform. Supporting hundreds of millions of consumers, millions of merchants across 2,000+ industries. This is the public channel for the tech teams behind Meituan, Dianping, Meituan Waimai, Meituan Select, and related services.
