
How Meituan’s Trusted Experiment Engine Enables Zero‑Barrier A/B Testing

The article introduces Meituan’s trusted experiment analysis engine, detailing its rich methodological library, system architecture, integration options, and a step‑by‑step offline analysis case that together empower teams to conduct reliable, efficient A/B tests without deep statistical expertise.

Meituan Technology Team

Introduction

This final chapter of the "Trusted Experiment Whitepaper" series shares Meituan’s practice of building an A/B experiment analysis library and provides a PDF collection of the full series for practitioners.

Product Features

Rich Experiment Methods : Covers randomised controlled trials, randomised rotation experiments, quasi‑experiments, and observational studies, offering more than 11 experiment methods, 7 grouping methods, and 10 hypothesis‑testing methods, including small‑sample solutions such as covariate‑adaptive grouping, rotation experiments, difference‑in‑differences, and synthetic control.

Ease of Use : Standardised request parameters allow the engine to automatically select the most appropriate test based on method, metric type, and sample distribution, handling data preprocessing, effect estimation, variance and p‑value calculation.

High Performance : Utilises vectorised and parallel computation; randomised controlled trials support distributed processing, enabling analysis of billions of records within minutes.

Multiple‑Comparison Correction : Automatically adjusts for multiple comparisons to control Type‑I error across many groups and metrics.
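The source does not name which correction the engine applies. As one common choice, a Benjamini–Hochberg adjustment (which controls the false discovery rate across many metrics and groups) can be sketched in a few lines:

```python
# Benjamini-Hochberg adjustment: an illustrative stand-in for the
# engine's unnamed multiple-comparison correction.

def benjamini_hochberg(pvalues):
    """Return BH-adjusted p-values in the original input order."""
    m = len(pvalues)
    # Indices sorted by raw p-value, ascending.
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvalues[i] * m / rank)
        adjusted[i] = prev
    return adjusted

raw = [0.001, 0.012, 0.030, 0.040, 0.200]
adj = benjamini_hochberg(raw)   # [0.005, 0.03, 0.05, 0.05, 0.2]
```

Bonferroni (multiplying each raw p‑value by the number of tests) is the more conservative alternative when strict family‑wise error control is required.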

Power Enhancement : Supports CUPED variance reduction (single‑coefficient, double‑coefficient, new CUPED) to increase test sensitivity.
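The single‑coefficient variant subtracts the part of the metric explained by a pre‑experiment covariate, leaving the treatment effect unbiased while shrinking variance. A minimal sketch with toy data (the double‑coefficient and new‑CUPED variants are not shown):

```python
# Single-coefficient CUPED sketch: adjust metric y with a correlated
# pre-experiment covariate x to reduce variance.
from statistics import fmean, pvariance

def cuped_adjust(y, x):
    """Return CUPED-adjusted y using pre-period covariate x."""
    mx, my = fmean(x), fmean(y)
    cov_xy = fmean((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    theta = cov_xy / pvariance(x, mx)       # theta = cov(x, y) / var(x)
    return [yi - theta * (xi - mx) for xi, yi in zip(x, y)]

# Toy data: y is strongly predicted by the pre-period covariate x.
x = [10, 12, 9, 14, 11, 13, 8, 15]
y = [21, 25, 19, 29, 23, 26, 17, 31]
y_adj = cuped_adjust(y, x)
assert pvariance(y_adj) < pvariance(y)      # variance is reduced
```

Because the adjustment only uses pre‑experiment data, the mean difference between arms is preserved; only the noise shrinks.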

Integrated Analysis : Allows combined analysis of independent experiments to boost statistical power, using sample weighting and inverse‑variance weighting.
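The inverse‑variance weighting mentioned above can be sketched as follows: each experiment’s effect estimate is weighted by its precision (1/variance), which is the minimum‑variance way to pool independent unbiased estimates. Illustrative only, not the engine’s API:

```python
# Inverse-variance pooling of effect estimates from independent experiments.

def pool_inverse_variance(effects, variances):
    """Return (pooled effect, pooled variance) across experiments."""
    weights = [1.0 / v for v in variances]   # precision of each estimate
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total
    return pooled, 1.0 / total

# Two experiments measuring the same lift; the second is more precise
# and therefore dominates the pooled estimate.
effect, var = pool_inverse_variance([0.03, 0.05], [0.0004, 0.0001])
# effect ≈ 0.046, var ≈ 8e-05
```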

Power Calculation : Provides minimum sample size, MDE calculations, and post‑experiment diagnostics to determine whether failures are due to insufficient data or ineffective strategies.
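For the minimum‑sample‑size part, the standard two‑sample power formula gives a quick sketch; the engine’s exact diagnostics are not documented here, and α = 0.05 with 80% power are assumed defaults:

```python
# Per-group sample size for a two-sample comparison of means, from
# n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2.
from math import ceil
from statistics import NormalDist

def min_sample_size(sigma, delta, alpha=0.05, power=0.8):
    """Per-group n to detect effect delta with given alpha and power."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(2 * (za + zb) ** 2 * sigma ** 2 / delta ** 2)

n = min_sample_size(sigma=1.0, delta=0.1)   # 1570 per group
```

Inverting the same formula for a fixed n yields the MDE, which is what the post‑experiment diagnostic compares against the observed effect to separate "not enough data" from "strategy had no effect".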

System Design

The engine follows a modular, layered architecture:

Application Layer : Entry points such as the Turing experiment platform and Python SDK for offline analysis.

Interface Layer : Standardised APIs abstract experiment design and evaluation parameters, enhancing extensibility.

Routing Layer : Routes requests to appropriate analysis templates; for large‑scale randomised controlled trials, key aggregation operators (covariance, variance, mean) are executed on a PySpark‑based distributed engine.
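The key property that makes this distributable is that mean, variance, and covariance all reduce to sums that merge associatively across partitions. A pure‑Python stand‑in for the PySpark aggregation (illustrative, not Meituan’s implementation):

```python
# Combinable sufficient statistics: each partition emits (count, sum,
# sum of squares); partials merge associatively, as in a distributed reduce.

def partial_stats(chunk):
    """Sufficient statistics for one partition."""
    return (len(chunk), sum(chunk), sum(v * v for v in chunk))

def merge(a, b):
    """Associative combine of two partials."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def finalize(stats):
    """Recover mean and population variance from merged partials."""
    n, s, ss = stats
    mean = s / n
    return mean, ss / n - mean * mean

partitions = [[1.0, 2.0], [3.0, 4.0, 5.0], [6.0]]
total = (0, 0.0, 0.0)                   # identity element of merge
for part in partitions:
    total = merge(total, partial_stats(part))
mean, var = finalize(total)             # matches a single pass over all values
```

Covariance works the same way with an extra cross‑product sum, which is why only these few operators need to run on the distributed engine.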

Data Preparation Layer : Handles data loading (single‑node via pandas, distributed via HDFS/Hive), preprocessing (null filling, type conversion, outlier removal, metric completion), and secondary metric computation.

Analysis Method Layer : Core library managed by data scientists, encompassing experiment grouping, hypothesis testing, power techniques, and sample‑size estimation; all significance tests undergo AA simulation validation.
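AA simulation validation can be sketched as follows: run a test many times on data with no true effect and check that the rejection rate stays near the nominal α. A toy z‑test version, not the engine’s validation harness:

```python
# AA simulation: both arms come from the same distribution, so a valid
# test should reject at roughly the nominal 5% rate.
import random
from math import sqrt
from statistics import NormalDist, fmean, pvariance

def z_test_p(a, b):
    """Two-sided large-sample z-test p-value for a difference in means."""
    se = sqrt(pvariance(a) / len(a) + pvariance(b) / len(b))
    return 2 * (1 - NormalDist().cdf(abs((fmean(a) - fmean(b)) / se)))

rng = random.Random(7)
trials, rejections = 400, 0
for _ in range(trials):
    # No treatment effect exists: any observed difference is noise.
    a = [rng.gauss(0.0, 1.0) for _ in range(200)]
    b = [rng.gauss(0.0, 1.0) for _ in range(200)]
    if z_test_p(a, b) < 0.05:
        rejections += 1
rate = rejections / trials   # should land near the nominal 0.05
```

A rejection rate well above α would mean the test is anticonservative and its p‑values cannot be trusted; well below α, and it is wasting power.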

System Integration

The analysis engine is open to all Meituan internal teams, offering various integration methods such as API calls, third‑party platform connectors, and the Python SDK for offline analysis.

Offline Analysis Case Study

A randomised controlled experiment for a fulfillment algorithm compares strategies across selected cities, using order volume, completed orders, and per‑user order rate as metrics. The experiment design includes three groups with a 2:3:5 traffic split and applies CUPED for variance reduction.
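A 2:3:5 split of this kind is typically implemented by hashing each unit into buckets; the platform’s actual bucketing scheme is not documented, so the salt, hash, and bucket count below are assumptions for illustration:

```python
# Deterministic 2:3:5 traffic split via hash bucketing (assumed scheme).
import hashlib

def assign_group(user_id, salt="exp_fulfillment_v1"):
    """Stable assignment: same user always lands in the same group."""
    digest = hashlib.md5(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10
    if bucket < 2:
        return "control"        # buckets 0-1 → 20% of traffic
    if bucket < 5:
        return "treatment_a"    # buckets 2-4 → 30%
    return "treatment_b"        # buckets 5-9 → 50%

counts = {"control": 0, "treatment_a": 0, "treatment_b": 0}
for uid in range(100_000):
    counts[assign_group(uid)] += 1
# counts land close to 20k / 30k / 50k
```

Salting the hash per experiment keeps assignments independent across experiments while remaining stable for each user within one.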

Step 01 : Install the offline analysis SDK and import AbAnalyzeClient and related classes.

Step 02 : Define analysis parameters (dataset, grouping, metrics) and set extArgs to specify delta‑method variance estimation and double‑coefficient CUPED.

Step 03 : Submit the analysis request; the system automatically retries if grouping heterogeneity is detected.

Step 04 : Generate the design report via show_report, reviewing homogeneity test results and deciding on experiment launch based on MDE and p‑value.
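The delta‑method variance estimation selected in Step 02 is the standard way to obtain a variance for a ratio metric such as per‑user order rate, where both numerator and denominator are random. A minimal sketch (the per‑unit pairing of numerator and denominator is assumed):

```python
# Delta-method variance for a ratio metric R = mean(x) / mean(y),
# e.g. x = orders per unit, y = users per unit.
from statistics import fmean, pvariance

def delta_ratio_variance(x, y):
    """Approximate variance of mean(x)/mean(y) for paired samples."""
    n = len(x)
    mx, my = fmean(x), fmean(y)
    vx, vy = pvariance(x), pvariance(y)
    cxy = fmean((a - mx) * (b - my) for a, b in zip(x, y))
    # First-order Taylor expansion of the ratio around (mx, my).
    return (vx / my ** 2
            - 2 * mx * cxy / my ** 3
            + mx ** 2 * vy / my ** 4) / n

orders = [2, 0, 1, 3, 1, 2, 0, 4]   # numerator per unit
users  = [1, 1, 1, 2, 1, 1, 1, 2]   # denominator per unit
var_r = delta_ratio_variance(orders, users)
```

Treating the ratio as a simple mean would ignore the covariance term and typically misstate the variance, which is why the request must flag the estimator explicitly.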

Summary and Outlook

The whitepaper consolidates Meituan’s experiment practices across fulfillment and delivery, covering four major experiment categories and advanced tools, and provides a practical guide to the analysis engine. Future work will track methodological advances, expand trusted experiment routing and computation architectures, and scale experiment capabilities across the organization.

Acknowledgments

Thanks are extended to the Meituan fulfillment and delivery data‑science teams, authors, and all supporting departments for their contributions to the whitepaper.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: platform engineering, statistical methods, data science, experiment analysis
Written by

Meituan Technology Team

Over 10,000 engineers powering China’s leading lifestyle services e‑commerce platform. Supporting hundreds of millions of consumers, millions of merchants across 2,000+ industries. This is the public channel for the tech teams behind Meituan, Dianping, Meituan Waimai, Meituan Select, and related services.
