
Multi‑Objective Ranking in Kuaishou Short‑Video Recommendation: System Design and Online Results

This article details Kuaishou's multi‑objective ranking pipeline for short‑video recommendation, covering manual score fusion, GBDT ensemble, Learn‑to‑Rank, online auto‑tuning, ensemble sorting, reinforcement‑learning rerank, and on‑device rerank, and reports their impact on DAU, watch time and user interaction.

DataFunTalk

1. Kuaishou Short‑Video Recommendation Overview

Kuaishou serves over 300 million daily active users across four main feed pages (Discovery, Follow, City, and Social), where recommendation algorithms dominate traffic and directly affect user experience, DAU and overall app duration.

2. Ranking Objectives

The system optimizes multiple feedback signals—implicit positive (play time, completion), explicit positive (likes, follows, shares), implicit negative (short plays, session termination) and explicit negative (dislike, report)—to increase positive feedback, reduce negative feedback and improve user satisfaction.

3. Multi‑Objective Fine‑Ranking

Stage 1: Manual Fusion – a hand‑crafted linear combination of predicted metrics, simple but inflexible.
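A minimal sketch of such a hand-crafted fusion, assuming illustrative signal names and weights (not Kuaishou's actual values):

```python
# Stage-1 manual fusion: a fixed, hand-tuned linear combination of the
# per-objective predictions (pXtr values). Signal names and weights
# below are illustrative assumptions.
def manual_fusion(preds, weights):
    """Combine predicted metrics into one ranking score."""
    return sum(weights[k] * preds[k] for k in weights)

candidate = {"p_play_time": 0.8, "p_like": 0.05, "p_follow": 0.01}
weights = {"p_play_time": 1.0, "p_like": 2.0, "p_follow": 5.0}
score = manual_fusion(candidate, weights)  # 0.8 + 0.10 + 0.05 = 0.95
```

The inflexibility is visible here: any change to the trade-off between objectives means re-tuning the weight table by hand.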

Stage 2: Tree‑Model Ensemble – a GBDT model incorporates pXtr, user portrait and statistical features, using a weighted LogLoss to fit a combined label; this improves short‑video watch time by 4.5 % on the City page.

Stage 3: Learn‑to‑Rank (Hyper‑parameter LTR) – a dual‑tower DNN concatenates 24 predicted values from the fine‑ranking model; the network learns a weighted combination of these signals, yielding a 0.2 % app‑time lift.
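The core idea of stage 3, learning the combination weights from data rather than hand-tuning them, can be sketched with a toy linear fitter (the data, targets, and hyper-parameters are illustrative; the real system uses a dual-tower DNN over 24 signals):

```python
# Learned fusion sketch: fit linear combination weights over the
# fine-ranker's predicted signals by per-sample gradient descent on
# squared error. Toy data below; lr/epochs are illustrative.
def fit_fusion_weights(pxtr_rows, labels, lr=0.1, epochs=200):
    """Return linear fusion weights fitted by SGD."""
    w = [0.0] * len(pxtr_rows[0])
    for _ in range(epochs):
        for x, y in zip(pxtr_rows, labels):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# Toy targets follow y = 2*x0 + 1*x1, so w converges toward [2, 1].
w = fit_fusion_weights([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                       [2.0, 1.0, 3.0])
```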

4. End‑to‑End Learn‑to‑Rank

Two approaches are explored on the top‑6 candidates returned by the fine‑ranker:

Pointwise: raw features (user ID, behavior sequence, pXtr) feed a DNN with attention, improving app time by 0.6 % and interactions by 2‑4 %.

Pairwise: construct preference pairs for each objective, train a DNN with sigmoid output and weighted cross‑entropy, achieving stable app time and 2‑7 % interaction gains.
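The pairwise objective can be sketched as a weighted logistic loss on the score margin of each preference pair, matching the sigmoid-plus-weighted-cross-entropy setup described above (the per-objective weight is an illustrative parameter):

```python
import math

# Pairwise LTR loss sketch: for a preference pair (preferred,
# dispreferred) under one objective, pass the score margin through a
# sigmoid and apply weighted cross-entropy.
def pairwise_loss(score_pref, score_other, weight=1.0):
    """Weighted logistic loss on the margin of a preference pair."""
    p = 1.0 / (1.0 + math.exp(-(score_pref - score_other)))
    return -weight * math.log(p)

# A correctly ordered pair yields low loss; a tie yields -log(0.5).
```

In training, pairs are built per objective (e.g. long-play vs short-play), and the objective weight scales how strongly each pair type shapes the final ordering.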

5. Complex Multi‑Objective Strategies

Ensemble Sort – diverse scoring logics (user interaction, external utility, sharing benefit) are normalized by a rank‑based transformation to align their scales, then linearly weighted; this delivers a 0.6 % app‑time increase and a 2‑3 % interaction lift.
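A sketch of the rank-based normalization, assuming illustrative weights and score lists (the real scoring logics and scales are not specified in detail here):

```python
# Rank-based normalization: map each scoring logic's raw scores onto a
# common (0, 1] scale by rank, so logics on very different scales can
# be combined with linear weights.
def rank_normalize(scores):
    """Best score -> 1.0, worst -> 1/n, by descending rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n = len(scores)
    norm = [0.0] * n
    for rank, i in enumerate(order):
        norm[i] = (n - rank) / n
    return norm

def ensemble_sort(score_lists, weights):
    """Weighted sum of rank-normalized scores per candidate."""
    normed = [rank_normalize(s) for s in score_lists]
    n = len(score_lists[0])
    return [sum(w * s[i] for w, s in zip(weights, normed))
            for i in range(n)]

# Interaction scores (~10) and sharing-benefit scores (~0.005) still
# combine sensibly after rank normalization.
combined = ensemble_sort([[10.0, 1.0, 5.0], [0.001, 0.009, 0.005]],
                         [2.0, 1.0])
```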

Online Auto‑Tuning – parameters are optimized online using CEM/ES/Bayesian methods on a 5 % traffic bucket; rewards combine revenue items (watch time, stay time) and constraint items (likes, follows) with nonlinear decay. Iterative sampling reduces noise and converges to better multi‑objective weights, yielding a 0.5 % app‑time gain.
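The CEM loop mentioned above can be sketched as follows; the reward function here is a toy stand-in, whereas online it would be bucketed traffic metrics (watch time, stay time) with constraint penalties (likes, follows):

```python
import random

# Cross-Entropy Method (CEM) sketch for tuning fusion weights: sample
# parameter vectors from a Gaussian, keep the elite fraction by reward,
# refit the Gaussian to the elites, repeat. Hyper-parameters are
# illustrative.
def cem(reward_fn, dim, iters=30, pop=50, elite_frac=0.2, seed=0):
    rng = random.Random(seed)
    mean, std = [0.0] * dim, [1.0] * dim
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = [[rng.gauss(m, s) for m, s in zip(mean, std)]
                   for _ in range(pop)]
        samples.sort(key=reward_fn, reverse=True)
        elite = samples[:n_elite]
        mean = [sum(e[d] for e in elite) / n_elite for d in range(dim)]
        std = [max(1e-3, (sum((e[d] - mean[d]) ** 2 for e in elite)
                          / n_elite) ** 0.5) for d in range(dim)]
    return mean

# Toy reward peaked at weights (1.0, 2.0); CEM's mean converges there.
best = cem(lambda w: -((w[0] - 1.0) ** 2 + (w[1] - 2.0) ** 2), dim=2)
```

Resampling around the surviving elites is what the article calls iterative noise reduction: each round narrows the search distribution toward weight settings that scored well on live traffic.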

6. Reranking

Listwise Rerank – a transformer encoder models interactions among the top‑6 videos, improving AUC across positions and adding 0.3 % app‑time.

Reinforcement‑Learning Rerank – a policy‑gradient LSTM selects videos sequentially from the top‑50, optimizing relevance, diversity and constraints; online results show +0.4 % app‑time and +0.4 pp new‑device retention.
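The sequential-selection behavior can be illustrated with a greedy stand-in for the learned policy (the diversity weight and similarity function are illustrative assumptions, not the trained model):

```python
# Greedy sequential-rerank sketch: pick the next video by relevance
# minus a penalty for similarity to already-chosen items. A stand-in
# for the policy-gradient LSTM; div_weight is an illustrative value.
def sequential_rerank(candidates, k, sim, div_weight=0.5):
    """Select k items one at a time, trading relevance vs. diversity."""
    chosen, pool = [], list(candidates)
    while pool and len(chosen) < k:
        def gain(c):
            penalty = max((sim(c, s) for s in chosen), default=0.0)
            return c["score"] - div_weight * penalty
        best = max(pool, key=gain)
        chosen.append(best)
        pool.remove(best)
    return chosen

# Once a topic-"a" video is chosen, the second same-topic candidate is
# penalized, so slot 2 goes to the lower-scored but diverse video.
pool = [{"score": 0.9, "topic": "a"},
        {"score": 0.8, "topic": "a"},
        {"score": 0.7, "topic": "b"}]
same_topic = lambda x, y: 1.0 if x["topic"] == y["topic"] else 0.0
top2 = sequential_rerank(pool, 2, same_topic)
```

The trained policy replaces this fixed greedy rule with a learned, reward-driven one, but the step-by-step structure (choose, update context, choose again) is the same.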

On‑Device Rerank – a lightweight TF‑Lite model runs on the client, ingesting richer user, video and real‑time feedback features; it selects the next video in real time, increasing app duration by 1 %, reducing server QPS by 13 % and boosting interaction metrics.

7. Future Directions

Planned work includes addressing gradient conflicts in multi‑task learning, dynamic Pareto‑optimal weight adjustment, upgrading the MMoE (Multi‑gate Mixture‑of‑Experts) experts, improving sparse‑reward handling, exploring higher‑order optimization for online tuning, and enhancing rerank with beam search and better relevance‑diversity trade‑offs.

Tags: machine learning, recommendation, reinforcement learning, Kuaishou, multi-objective ranking, online tuning
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
