How EMER Revolutionizes Short‑Video Ranking with End‑to‑End Multi‑Objective Learning

This article details the EMER framework—a Transformer‑based, end‑to‑end multi‑objective ranking system that replaces handcrafted formulas with a learnable AI model, introduces relative‑satisfaction signals and dynamic loss weighting, and demonstrates significant offline and online performance gains in Kuaishou's short‑video recommendation pipeline.

Kuaishou Tech

Background

When you open a short‑video app, every swipe is driven by a ranking logic that decides what you will see next. Traditionally, recommendation ranking relied on manually designed formulas that assign weights to signals such as likes or watch time, but this approach faces bottlenecks in personalization and multi‑objective trade‑offs.
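To make the baseline concrete, a handcrafted ensemble formula of this kind is typically a weighted sum of per-objective predictions with hand-tuned weights. This is a minimal illustrative sketch, not Kuaishou's actual formula; the signal names and weights are hypothetical.

```python
def handcrafted_score(pred, w=None):
    """Hand-tuned weighted sum over predicted signals for one candidate video.
    The weights below are hypothetical examples, not production values."""
    w = w or {"watch_time": 1.0, "like": 0.5, "follow": 0.3}
    return sum(w[k] * pred.get(k, 0.0) for k in w)

candidates = [
    {"watch_time": 0.8, "like": 0.1, "follow": 0.0},
    {"watch_time": 0.5, "like": 0.6, "follow": 0.2},
]
# Rank candidates by the handcrafted score, highest first.
ranked = sorted(candidates, key=handcrafted_score, reverse=True)
```

The pain point is visible even at this scale: every weight applies uniformly to every user and every request, so the formula cannot express personalized trade-offs between objectives.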

EMER Framework

The Kuaishou strategy team designed an end‑to‑end multi‑objective ensemble ranking framework called EMER, which replaces the handcrafted formula with a learnable AI model. EMER treats ranking as a comparison problem and uses a Transformer‑based architecture to model relationships among all candidate items in a request.

Sample organization: pack all candidates of a request (exposed or not) into a single training sample, mitigating exposure bias.

Feature design: introduce Normalized Ranks to convey each item’s relative position within the candidate set.

Model architecture: a Transformer network captures pairwise interactions and outputs scores that reflect relative value.
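The three design points above can be sketched together: normalized-rank features are appended to each candidate's representation, and a self-attention layer lets every candidate's score depend on the others in the same request. This is a minimal numpy sketch with random weights, assuming a single attention layer; it is not the actual EMER architecture, and all function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def normalized_ranks(scores):
    """Rank candidates by a per-objective score, scaled into [0, 1]."""
    order = scores.argsort().argsort()  # rank 0 = smallest score
    return order / max(len(scores) - 1, 1)

def listwise_scores(X, Wq, Wk, Wv, w_out):
    """One self-attention layer over all candidates of a single request,
    so each item's output score reflects pairwise interactions."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)
    return (A @ V) @ w_out  # one relative-value score per candidate

n, d = 6, 8                                   # 6 candidates, 8 features each
base = rng.normal(size=(n, d - 1))            # raw per-candidate features
rank_feat = normalized_ranks(base[:, 0])      # normalized-rank feature
feats = np.hstack([base, rank_feat[:, None]])
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
scores = listwise_scores(feats, Wq, Wk, Wv, rng.normal(size=d) * 0.1)
```

Because all candidates of a request sit in one sample, unexposed items contribute context at training time, which is what distinguishes this listwise setup from scoring each item in isolation.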

Figure: EMER framework overview

User Satisfaction Modeling

Because absolute user satisfaction is hard to define, EMER instead constructs a hierarchical relative‑satisfaction signal (multiple positive feedback > single positive feedback > no feedback) and trains with a pairwise logistic loss. It also introduces multi‑dimensional proxy metrics to mitigate exposure bias and signal sparsity.
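A pairwise logistic loss over such a hierarchy can be sketched as follows: for every pair of candidates where one sits in a higher satisfaction tier than the other, penalize the model when the higher-tier item is not scored above the lower-tier one. This is a minimal sketch of the standard pairwise logistic (RankNet-style) objective, not Kuaishou's exact loss; the tier encoding is an assumption.

```python
import numpy as np

def pairwise_logistic_loss(scores, levels):
    """scores: model score per candidate; levels: satisfaction tier, e.g.
    2 = multiple positive feedback > 1 = single > 0 = no feedback.
    Averages -log sigmoid(s_i - s_j) over pairs with level_i > level_j."""
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if levels[i] > levels[j]:
                # -log sigmoid(s_i - s_j), computed stably via log1p
                loss += np.log1p(np.exp(-(scores[i] - scores[j])))
                pairs += 1
    return loss / max(pairs, 1)
```

Only relative order within a request matters here, which matches the framing of ranking as a comparison problem: the loss never asks the model for an absolute satisfaction value.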

Figure: Relative satisfaction loss

Self‑Evolving Optimization

EMER includes an Advantage Evaluator that dynamically adjusts loss weights for each objective based on the performance gap between the current and previous model, achieving automatic balancing of retention, watch time, and interaction metrics.
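One simple way to realize such advantage-based reweighting is a softmax over per-objective regressions: objectives where the current model fell behind the previous one receive larger loss weights in the next round. This is a hypothetical sketch of the idea, not the actual Advantage Evaluator; the function name, temperature parameter, and normalization are assumptions.

```python
import math

def advantage_weights(curr_metrics, prev_metrics, temperature=1.0):
    """Hypothetical advantage-based reweighting sketch: the more an
    objective regressed versus the previous model, the larger its
    loss weight. Weights are normalized so their mean is 1.0."""
    regress = {k: prev_metrics[k] - curr_metrics[k] for k in curr_metrics}
    exp = {k: math.exp(v / temperature) for k, v in regress.items()}
    z = sum(exp.values())
    return {k: len(exp) * v / z for k, v in exp.items()}

# Watch time regressed while retention improved, so watch time
# should be emphasized in the next training round.
w = advantage_weights(
    curr_metrics={"retention": 0.51, "watch_time": 0.48},
    prev_metrics={"retention": 0.50, "watch_time": 0.50},
)
```

Keeping the mean weight at 1.0 leaves the overall loss scale unchanged, so the scheme only redistributes gradient budget between objectives rather than altering the effective learning rate.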

Figure: Self‑evolution weight adjustment

Offline and Online Results

AB tests on Kuaishou Fast Version and the main app show significant lifts: +0.302% 7‑day retention, +1.392% app stay time, and +1.044% short‑video views for the fast version; similar gains for the main app. EMER also improves consistency between offline GAUC and online metrics, and demonstrates robust performance across multiple proxy signals.
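For readers unfamiliar with the offline metric mentioned above: GAUC is AUC computed per user and then averaged, typically weighted by each user's impression count, so that a model is rewarded for ranking well within each user's own feed. This is a minimal reference sketch of the standard definition, not Kuaishou's evaluation code.

```python
def auc(scores, labels):
    """Pairwise AUC: fraction of (positive, negative) pairs ranked
    correctly, counting ties as half. Returns None if only one class."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return None
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def gauc(user_sessions):
    """GAUC: per-user AUC weighted by impression count, skipping
    users whose sessions contain only one label class."""
    num = den = 0.0
    for scores, labels in user_sessions:
        a = auc(scores, labels)
        if a is not None:
            num += a * len(labels)
            den += len(labels)
    return num / den if den else 0.0
```

Because single-class users are skipped, GAUC sidesteps the degenerate cases that make global AUC misleading on per-user feeds, which is part of why it tracks online metrics more closely.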

Figure: AB test results

Conclusion

EMER demonstrates a practical, scalable solution for personalized multi‑objective ranking, moving from static formulas to AI‑driven self‑evolving models. The framework addresses three core challenges—undefined satisfaction signals, lack of comparative modeling, and multi‑objective conflict—providing a verifiable, industry‑ready approach for large‑scale recommendation systems.

Tags: AI, ranking, multi-objective learning, recommendation systems, online experiments
Written by Kuaishou Tech

Official Kuaishou tech account, providing real-time updates on the latest Kuaishou technology practices.
