Understanding UC International Feed Recommendation: Goal Determination, Multi‑Objective Estimation, and Mixed Ranking
This article explains how UC international feed recommendation tackles goal definition, multi‑objective point estimation using models such as ESMM, DBMTL and MMoE, mixed‑ranking optimization, and cold‑start challenges by leveraging content understanding and feature generalization to improve user satisfaction.
The talk, presented by senior algorithm expert Jie Xiong from Alibaba, introduces the theme “A Brief Overview of UC International Feed Recommendation” and focuses on two main scenarios: list‑page sorting (including goal definition, multi‑objective tasks, and mixed‑ranking optimization) and content cold‑start problems.
List‑page Recommendation
The list page contains heterogeneous cards such as news aggregation, video immersion pages, standard article landing pages, and region‑specific Memes that can be consumed directly on the list.
Content consumption paths are categorized into three types: direct consumption on the list (e.g., Memes), consumption on a landing page, and consumption after navigating from an aggregation page to a landing page.
1. Goal Determination
The core of a recommendation system is to define the right objective. User behavior is modeled as an alternating chain of actions and the psychological states they induce (depicted in the talk as circles and squares). Positive paths run attraction → click → effective reading → interaction (share, like, comment); negative paths include quick backs, unsatisfied clicks, and no‑click exposures. The overall goal is to maximize cumulative satisfied behavior while minimizing negative signals.
Effective reading is not linearly related to dwell time: very short visits followed by a quick back are treated as completely unsatisfied, while longer reads follow a sigmoid‑like satisfaction curve. Satisfaction is therefore modeled as a classification problem rather than a pure regression on dwell time.
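The quick‑back cutoff and sigmoid‑shaped labeling above can be sketched as follows. The thresholds and steepness here are illustrative assumptions, not values from the talk:

```python
import math

# Hypothetical constants; the talk does not disclose production values.
QUICK_BACK_SECONDS = 3.0   # below this, treat the click as fully unsatisfied
MIDPOINT_SECONDS = 30.0    # dwell time where satisfaction probability is 0.5
STEEPNESS = 0.15           # how sharply satisfaction saturates with dwell time

def satisfaction_label(dwell_seconds: float) -> float:
    """Map dwell time to a soft satisfaction target in [0, 1].

    Quick-backs are hard negatives; longer reads follow a sigmoid-like
    curve, which is why the model is trained as a classifier rather
    than a regressor on raw dwell time.
    """
    if dwell_seconds < QUICK_BACK_SECONDS:
        return 0.0
    return 1.0 / (1.0 + math.exp(-STEEPNESS * (dwell_seconds - MIDPOINT_SECONDS)))
```

The soft label can then feed a standard cross‑entropy loss, so "effective reading" and "quick back" live on one scale instead of two disjoint targets.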
2. Multi‑Objective Point Estimation
Several internal Alibaba solutions are presented:
ESMM – trains over the entire impression (exposure) space rather than only the clicked subset, shares embedding parameters between the CTR and CVR towers, and supervises the CTR and CTCVR (click‑and‑convert) losses over all samples, which avoids the sample‑selection bias of training CVR on clicks alone.
DBMTL – extends this multi‑task setup with a Bayesian network layer that explicitly models the conditional (causal) dependencies between objectives, such as conversion conditioned on click.
MMoE – a mixture‑of‑experts architecture where each expert can have different features and structures, and a gating network assigns weights for multi‑task voting.
These models have yielded positive gains in video and push scenarios.
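The defining trick of ESMM is that CVR is never supervised directly: pCTCVR = pCTR × pCVR is derived and supervised on the full impression space. A minimal per‑impression sketch of that loss (function names are my own, not from the talk):

```python
import math

def bce(p: float, y: float) -> float:
    """Binary cross-entropy for a single prediction/label pair."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def esmm_loss(p_ctr: float, p_cvr: float, clicked: int, converted: int) -> float:
    """ESMM-style loss on one impression.

    pCTCVR = pCTR * pCVR is a derived quantity, not a separate tower
    output; supervising it (together with pCTR) over ALL impressions is
    what removes the sample-selection bias of training CVR on the
    clicked subset only.
    """
    p_ctcvr = p_ctr * p_cvr
    return bce(p_ctr, clicked) + bce(p_ctcvr, clicked * converted)
```

In a real model `p_ctr` and `p_cvr` come from two towers over shared embeddings; the loss structure is the part ESMM actually prescribes.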
3. Mixed Ranking
In list‑page scenarios, ranking becomes a combinatorial optimization problem: given N candidates, fill M slots so as to maximize a joint objective over the whole page. Exhaustive search over orderings is infeasible, so a greedy algorithm with beam search (keeping the top‑k partial rankings at each step) is used instead. Contextual information and the already‑placed items are summarized by an RNN that updates a hidden state at each step, so each candidate's score can condition on what precedes it.
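The greedy‑plus‑beam‑search procedure can be sketched as below. Here `score_fn(prefix, item)` is a stand‑in for the RNN: it scores a candidate given the items already placed (the function names and the toy scoring rule are assumptions for illustration):

```python
def beam_search_rank(candidates, score_fn, slots, beam_width):
    """Fill `slots` positions from `candidates` with beam search.

    Instead of expanding all orderings, keep only the `beam_width`
    highest-scoring partial rankings at each step. `score_fn` plays
    the role of the RNN hidden state in the talk: it sees the chosen
    prefix, so scores are context-dependent.
    """
    beams = [([], 0.0)]  # (chosen prefix, cumulative score)
    for _ in range(slots):
        expanded = []
        for prefix, total in beams:
            for item in candidates:
                if item in prefix:
                    continue
                expanded.append((prefix + [item], total + score_fn(prefix, item)))
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]
```

With a context‑aware score (e.g. penalizing two videos back to back), beam search can recover diverse page layouts that pure pointwise greedy ranking would miss.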
Content Cold‑Start
When only item IDs are used as features, new content suffers from severe cold‑start, requiring thousands of impressions to converge. Adding textual features and time‑bias (e.g., example age) improves early performance. Aligning multilingual text embeddings further mitigates cold‑start for low‑resource languages.
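One common way to implement the example‑age time bias mentioned above is to feed the model the content's age at logging time and zero the feature at serving, so the learned freshness bias favors new items. The exact transform below is an assumption, not UC's production recipe:

```python
import math

def example_age_feature(log_timestamp: float, publish_timestamp: float,
                        serving: bool = False) -> float:
    """Time-bias ("example age") feature for cold-start mitigation.

    Training: the (log-compressed) age of the content, in hours, at the
    moment the impression was logged. Serving: the feature is zeroed,
    a standard trick so the model's learned freshness bias boosts
    newly published items.
    """
    if serving:
        return 0.0
    age_hours = max(0.0, (log_timestamp - publish_timestamp) / 3600.0)
    return math.log1p(age_hours)
```

Combined with textual features, this gives new items informative inputs from their first impression instead of relying on an ID embedding that needs thousands of impressions to converge.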
Conclusion
The presentation summarized how to define objectives, perform multi‑objective point estimation, and conduct mixed‑ranking optimization, and offered practical ideas for cold‑start mitigation through feature generalization and semantic alignment.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.