Personalized Re-ranking for Recommendation (RecSys'19)
This article summarizes a personalized re-ranking model (PRM) for recommender systems. It explains the limitations of traditional point-wise ranking, describes the PRM architecture (input, encoding, and output layers built on multi-head attention and pre-trained personalization features), and reviews experimental results and future extensions.
Problem Background Traditional recommendation pipelines consist of recall, coarse ranking, and fine ranking stages, but these stages score each item independently (point-wise), ignoring the mutual influence among items displayed together. This leads to a mismatch between offline predictions and actual online click behavior.
Model Structure The proposed PRM (Personalized Re-ranking Model) is a Transformer-style encoder with three stages: Input Layer, Encoding Layer, and Output Layer. The input layer combines raw item features, personalized cross-features, and position embeddings. The encoding layer employs multi-head self-attention to capture pairwise item influences and user-item interactions. The output layer aggregates the encoded representations and produces a re-ranked list.
Input Layer Consists of three embeddings: (1) original item features from the base ranker, (2) personalized features obtained via a pre‑trained matrix linking users and items, and (3) position embeddings derived from the ranking order.
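The input layer described above can be sketched as a simple per-item concatenation. The function name, dimensions, and all numeric values below are illustrative assumptions, not from the paper's code:

```python
# Sketch of the PRM input layer: each item's representation is the
# concatenation of its raw feature vector (from the base ranker), a
# pre-trained personalized user-item vector, and a position embedding
# derived from the initial ranking order.

def build_input(item_feats, personalized, pos_embed):
    """item_feats: per-item feature vectors from the base ranker;
    personalized: pre-trained user-item vectors, one per item;
    pos_embed: position-embedding vectors, one per rank slot."""
    assert len(item_feats) == len(personalized) == len(pos_embed)
    return [f + p + e for f, p, e in zip(item_feats, personalized, pos_embed)]

# Toy example: a 3-item list with 2-dim vectors for each component.
feats = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
pvs   = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
pos   = [[0.0, 0.0], [0.1, 0.1], [0.2, 0.2]]
x = build_input(feats, pvs, pos)   # each row is now a 6-dim input vector
```

In the real model each component is a learned embedding; the point here is only that the three signals are fused per item before the encoder sees the list.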
Encoding Layer Uses multi‑head self‑attention (Q, K, V) to model arbitrary item‑item interactions, followed by a feed‑forward network (FFN) that enhances cross‑dimensional feature interactions. Stacked attention blocks enable richer contextual modeling.
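To make the attention step concrete, here is a minimal single-head self-attention pass over a list of item vectors. The real PRM uses multi-head attention with learned Q/K/V projections and stacked blocks; in this sketch Q = K = V = the raw inputs, purely for illustration:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(x):
    """Scaled dot-product self-attention: every item attends to every item."""
    d = len(x[0])
    out = []
    for q in x:                                   # one query per item
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in x]                     # scaled dot products
        w = softmax(scores)                       # attention weights
        out.append([sum(wi * v[j] for wi, v in zip(w, x))
                    for j in range(d)])           # weighted sum of values
    return out

h = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Because every query attends over the whole list, arbitrary item-item influences are captured in one step, with no sequential bottleneck as in an RNN.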
Output Layer Applies a final feed-forward projection to produce a score per item; training uses a softmax cross-entropy loss against the click label, and at serving time items are sorted by these scores for display.
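The list-wise loss can be sketched in a few lines: a softmax over the final item scores, with cross-entropy against the clicked item. Score values here are arbitrary toy numbers, not from the paper:

```python
import math

def list_softmax(scores):
    # Softmax over the whole re-ranked list, so scores compete with each other.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def rerank_loss(scores, clicked_index):
    """Negative log-likelihood of the clicked item under the list softmax."""
    probs = list_softmax(scores)
    return -math.log(probs[clicked_index])

scores = [2.0, 0.5, 1.0]            # final scores for a 3-item list
order = sorted(range(len(scores)), key=lambda i: -scores[i])  # display order
loss = rerank_loss(scores, clicked_index=0)
```

Training pushes the clicked item's score up relative to the rest of the displayed list, which is exactly the list-level signal a point-wise ranker never sees.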
Personalized Pre‑training (Pre‑train) A separate CTR prediction network learns a user‑item interaction matrix. The pre‑trained embeddings are then incorporated into the input layer to provide personalized signals.
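A hypothetical sketch of how the pre-trained signal is reused: a tiny CTR model scores a (user, item) pair, and the hidden vector feeding its final sigmoid is what gets carried over as the personalized vector in PRM's input layer. All embedding and weight values below are made up for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ctr_forward(user_emb, item_emb, w):
    """Toy CTR model: concatenate user and item embeddings, then a
    linear layer + sigmoid. Returns (click probability, hidden vector);
    the hidden vector is what PRM reuses as the personalized feature."""
    hidden = user_emb + item_emb          # concatenation
    logit = sum(hi * wi for hi, wi in zip(hidden, w))
    return sigmoid(logit), hidden

p, pv = ctr_forward([0.2, 0.1], [0.4, 0.3], w=[0.5, -0.5, 1.0, 0.25])
```

The pre-training network is trained separately on click logs, so the re-ranker gets a personalized signal without having to learn user-item interactions from scratch.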
Experimental Results In the evaluations reported in the paper, the re-ranking model improves click-through metrics over the baseline ranking without re-ranking.
Extended Summary RNN and Transformer models are commonly used for re‑ranking due to their ability to capture sequential and contextual dependencies. The article also references related work on permutation‑based re‑ranking frameworks (PMatch and PRank).
References Includes links to the original paper "Personalized Re‑ranking for Recommendation" and related resources.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.