
Advances in Ranking Algorithms for the "Good Goods" Recommendation Scenario

This article presents a comprehensive overview of recent advancements in ranking algorithms for the Good Goods recommendation scenario, covering long‑sequence modeling, category‑retrieval attention, multi‑objective ranking, model structure optimizations, loss functions, and LTR techniques, along with experimental results and practical insights.

DataFunTalk

The article introduces the Good Goods recommendation scenario, where ranking is a critical component for improving recommendation efficiency, and outlines a year‑long series of iterations in long‑sequence modeling, multi‑objective ranking, model structure optimization, loss optimization, and LTR.

Long Sequence Modeling: To handle ultra‑long user interest sequences, the authors propose sub‑sequence extraction combined with attention and multiple mean‑pooling strategies, as well as a category‑retrieval sequence + attention mechanism that filters relevant items before attention modeling.

Category Retrieval Sequence: By using fused category keys to retrieve relevant items from the long behavior sequence and then applying target attention, the approach improves fine-ranking metrics offline (AUC +1.1%, PV_GAUC +1.7%, CLICK_GAUC +1.7%) and online metrics such as pctr, uctr, uclick, and dpv.
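The retrieval-then-attention idea can be sketched as follows. This is a minimal illustration, not the article's implementation: it assumes a single target category id is used as the retrieval key (the article fuses several category signals) and uses plain dot-product attention.

```python
import numpy as np

def category_retrieval_attention(seq_emb, seq_cats, target_emb, target_cat):
    """Filter the long behavior sequence down to items sharing the target
    item's category, then run target attention over that sub-sequence.
    seq_emb: (L, d) item embeddings; seq_cats: (L,) category ids."""
    mask = seq_cats == target_cat                        # category retrieval
    if not mask.any():
        return np.zeros_like(target_emb)                 # no relevant history
    sub = seq_emb[mask]                                  # (K, d) sub-sequence
    scores = sub @ target_emb / np.sqrt(len(target_emb))  # scaled dot product
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                             # softmax over K items
    return weights @ sub                                 # (d,) interest vector
```

Because attention runs only over the retrieved sub-sequence, cost scales with the (small) number of category-matched items rather than the full sequence length.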

Original Long Click Sequence Modeling: Four methods—mean pooling, fully‑connected + mean pooling, sub‑sequence extraction + mean pooling, and dynamic routing + target attention—are evaluated, with the sub‑sequence extraction + mean pooling solution selected for both coarse and fine ranking, yielding notable online gains.
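The selected sub-sequence extraction + mean pooling variant is cheap enough for both coarse and fine ranking. A minimal sketch, assuming the sub-sequence is extracted by target category with a recency fallback (the article does not spell out its extraction rule):

```python
import numpy as np

def subseq_mean_pool(seq_emb, seq_cats, target_cat, fallback_n=50):
    """Sub-sequence extraction + mean pooling: pool only the historical
    clicks that share the target category; if none match, fall back to
    mean-pooling the most recent N clicks."""
    mask = seq_cats == target_cat
    sub = seq_emb[mask] if mask.any() else seq_emb[-fallback_n:]
    return sub.mean(axis=0)
```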

Multi‑Objective Ranking: The system shifts focus from click‑through to "grass‑planting" actions (add‑to‑cart, favorite). A multi‑objective model predicts several targets simultaneously, using shared embeddings, gradient blocking, and a mixed fusion formula (α=1, β=2.5, γ=15, δ=15) that balances CTR stability with grass‑planting improvements.
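The summary gives the fusion weights (α=1, β=2.5, γ=15, δ=15) but not the exact functional form, so the sketch below assumes one common "mixed" shape: multiplicative in pctr, additive in the downstream heads (here hypothetically pipv plus the grass-planting heads pcart and pfav). Treat the formula itself as an assumption.

```python
def fuse_scores(pctr, pipv, pcart, pfav,
                alpha=1.0, beta=2.5, gamma=15.0, delta=15.0):
    """Hypothetical mixed fusion of multi-objective predictions:
    keeps CTR as the multiplicative base (stability) while the additive
    terms boost grass-planting actions. Weight values are from the
    article; the functional form is an illustrative assumption."""
    return (pctr ** alpha) * (1.0 + beta * pipv + gamma * pcart + delta * pfav)
```

The large γ and δ relative to β reflect how much rarer (and more valued) add-to-cart and favorite signals are than intermediate clicks.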

Model Structure Optimizations: Various architectures such as CAN, MMoE, ESMM, and model reconstruction are explored. CAN improves offline CTR AUC by 0.3% and online clicks by 0.87%; MMoE and ESMM provide modest gains across CTR, IPV, and conversion metrics.
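Of these structures, MMoE is the easiest to show compactly: shared experts with one softmax gate per task, so each objective mixes the experts differently. A toy forward pass with made-up shapes, not the production architecture:

```python
import numpy as np

def mmoe_forward(x, expert_ws, gate_ws, task_ws):
    """Minimal MMoE forward pass.
    x: (d,) input; expert_ws: list of (d, h) expert weights;
    gate_ws: one (d, E) gate per task; task_ws: one (h,) head per task."""
    experts = np.stack([np.tanh(x @ w) for w in expert_ws])   # (E, h)
    outs = []
    for gw, tw in zip(gate_ws, task_ws):
        logits = x @ gw                                       # (E,) gate logits
        gate = np.exp(logits - logits.max())
        gate /= gate.sum()                                    # softmax gate
        mixed = gate @ experts                                # task-specific mix
        outs.append(1 / (1 + np.exp(-(mixed @ tw))))          # sigmoid head
    return outs
```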

Loss Optimizations: Experiments with Focal Loss and GHM Loss show offline AUC improvements (~0.3%) but limited or negative online impact, leading to their exclusion from production.
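For reference, the standard binary focal loss that was tested looks like this (the α and γ values the article used are not given in this summary, so defaults from the original focal loss paper are shown):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: scales cross-entropy by (1 - p_t)^gamma so
    well-classified (easy) examples contribute little, focusing training
    on hard examples. p: predicted probabilities; y: 0/1 labels."""
    p_t = np.where(y == 1, p, 1 - p)              # prob of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)  # class balancing term
    return -(alpha_t * (1 - p_t) ** gamma * np.log(p_t)).mean()
```

The offline/online gap the article reports is a common outcome for reweighted losses: the loss reshapes the score distribution, which can improve AUC while disturbing downstream calibration and fusion.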

LTR Layer: To flexibly combine multiple objectives, a Learning‑to‑Rank layer is added, with two approaches: Stacking (leveraging real‑time exposure‑click sequences and achieving offline AUC+1.7% and online pctr+1.52%) and Mixed Sampling (balancing grass‑planting and first‑click samples to mitigate click drop).
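The Stacking approach can be pictured as a lightweight ranker over upstream outputs. A minimal sketch under stated assumptions: the feature vector simply concatenates the fine-ranking model's scores with real-time exposure/click features, and the second-stage model is linear (the article's actual ranker and feature set are not detailed in this summary).

```python
import numpy as np

def ltr_stacking_score(base_scores, realtime_feats, w, b=0.0):
    """Stacking-style LTR: combine upstream model scores (pctr, pcart, ...)
    with real-time exposure/click features and rescore with a small
    second-stage model. Weights w would be learned offline."""
    x = np.concatenate([base_scores, realtime_feats])
    return 1 / (1 + np.exp(-(x @ w + b)))
```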

Normalization & Feature Engineering: Normalization aligns target score distributions around zero, facilitating weight adjustments in the fusion formula and yielding further online improvements.
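Centering each target's score distribution around zero is typically done with z-score normalization, sketched below. How the article computes the statistics (per batch, per request, or over a rolling window) is not specified here, so this is an illustrative form:

```python
import numpy as np

def normalize_scores(scores, eps=1e-8):
    """Z-score normalize one target's score distribution so every head
    is zero-centered with unit variance; fusion weights then compare
    like with like instead of compensating for scale differences."""
    return (scores - scores.mean()) / (scores.std() + eps)
```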

The article concludes with a summary of the presented work and outlines future directions, including deeper long‑sequence modeling, universal content representation learning, and continued optimization of ranking structures and LTR methods.

model optimization, recommendation, ranking, attention, multi-objective, loss, LTR
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
