
Graph Neural Networks for Recommendation Systems: From Recall to Re‑ranking

This article reviews how graph neural networks are applied across the three stages of recommendation systems—recall, ranking, and re‑ranking—detailing novel models such as NIA‑GCN, GraphSAIL, and DGENN, their experimental improvements, and future research directions.

DataFunTalk

Personalized recommendation systems filter massive online information for users, and the rich relationships among users, items, and interactions can naturally be modeled as graphs, making graph neural networks (GNNs) a powerful tool for representation learning in this domain.

The typical recommendation pipeline consists of a recall stage that selects a few thousand candidates from a million‑scale pool, a ranking stage that narrows these to a few hundred, and a re‑ranking stage that finalizes a short list for the user; GNNs have been introduced at each stage to capture graph‑structured signals and improve performance.
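The funnel described above can be sketched in a few lines. This is a toy illustration of the stage-by-stage candidate narrowing only; the random scorer is a stand-in for the learned models each stage would actually use.

```python
import random

random.seed(0)
pool = list(range(1_000_000))  # million-scale item pool

def score(item):
    # Placeholder scorer; a real system would apply a learned model here.
    return random.random()

# Recall: cheap retrieval cuts the pool down to a few thousand candidates.
recalled = random.sample(pool, 3000)

# Ranking: a heavier model narrows the candidates to a few hundred.
ranked = sorted(recalled, key=score, reverse=True)[:300]

# Re-ranking: list-level adjustments produce the short list shown to the user.
final = ranked[:20]

print(len(recalled), len(ranked), len(final))
```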

In the recall stage, Huawei proposes NIA‑GCN, a neighbor‑interaction‑aware GCN that treats the user‑item bipartite graph as a heterogeneous graph, enriches aggregation with Hadamard‑product interactions, and adds parallel aggregation of first‑ and second‑order neighbors. Experiments on four datasets show 2.9%–21.8% gains over baselines. To make GCNs lightweight for industrial recall, GraphSAIL is introduced, employing incremental learning with regularization, local‑structure distillation, and global‑structure distillation to avoid catastrophic forgetting.
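The idea behind neighbor-interaction-aware aggregation can be illustrated with a minimal numpy sketch. This is not NIA-GCN's exact formulation; it only shows how pairwise Hadamard products among neighbor embeddings add a signal that plain mean aggregation washes out. All names and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
neighbors = rng.normal(size=(4, d))  # embeddings of a node's 4 neighbors

# Standard GCN-style aggregation: mean of neighbor embeddings.
mean_agg = neighbors.mean(axis=0)

# Neighbor-interaction aggregation: element-wise (Hadamard) products of
# neighbor pairs capture cross-neighbor signal a mean cannot.
pairs = [neighbors[i] * neighbors[j]
         for i in range(len(neighbors))
         for j in range(i + 1, len(neighbors))]
interaction_agg = np.mean(pairs, axis=0)

# Concatenate both views before the layer's linear transform.
node_repr = np.concatenate([mean_agg, interaction_agg])
print(node_repr.shape)  # (16,)
```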

For the ranking stage, DGENN constructs a heterogeneous graph containing user/item attribute graphs, user‑user similarity graphs, item‑item co‑occurrence graphs, and user‑item collaborative graphs. A divide‑and‑conquer strategy builds single‑attribute graphs before merging, while curriculum learning first learns separate user and item embeddings and then their collaborative relations. This plug‑in approach consistently improves several CTR models (PNN, DIN, FiGNN) across multiple datasets.
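The several graph views DGENN combines can be derived from the same interaction log. The sketch below, using hypothetical toy data, builds the user-item collaborative edges, item-item co-occurrence edges, and user-user similarity edges mentioned above; it is a simplified stand-in for the paper's construction, not its actual pipeline.

```python
from itertools import combinations

# Toy user -> clicked-items log (hypothetical data).
clicks = {"u1": {"i1", "i2"}, "u2": {"i2", "i3"}, "u3": {"i1", "i3"}}

# User-item collaborative edges come straight from the log.
ui_edges = {(u, i) for u, items in clicks.items() for i in items}

# Item-item co-occurrence edges: two items clicked by the same user.
ii_edges = set()
for items in clicks.values():
    ii_edges.update(frozenset(p) for p in combinations(sorted(items), 2))

# User-user similarity edges: users who share at least one clicked item.
uu_edges = {frozenset((a, b))
            for a, b in combinations(clicks, 2)
            if clicks[a] & clicks[b]}

print(len(ui_edges), len(ii_edges), len(uu_edges))
```

A GNN can then propagate over each view separately, matching the divide-and-conquer idea of learning single-attribute graphs before merging them.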

In the re‑ranking stage, GNNs model item‑item relationships (complementary or substitutable) together with user‑item edges derived from initial ranking scores. By propagating messages on this heterogeneous graph, the system generates personalized re‑ranking scores that outperform baselines on Amazon data, demonstrating the importance of both user and item representations.
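One round of that propagation can be sketched as score smoothing over an item-item graph. This is a hedged simplification: the adjacency matrix, mixing weight, and scores below are made-up toy values, and the real model learns its message functions rather than averaging.

```python
import numpy as np

# Initial ranking scores act as user-item edge weights (hypothetical values).
scores = np.array([0.9, 0.5, 0.4, 0.2])

# Item-item adjacency: 1 where items are complementary or substitutable.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# One message-passing step: each item's score is mixed with the mean
# score of its related items, yielding a re-ranking signal.
deg = A.sum(axis=1)
propagated = 0.7 * scores + 0.3 * (A @ scores) / deg

# New presentation order after propagation.
reranked = np.argsort(-propagated)
print(reranked)
```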

The authors conclude that GNNs have broad potential in multi‑behavior, multi‑scenario, and multimodal recommendation settings, emphasizing the need for better heterogeneous‑graph modeling and more efficient training techniques such as incremental learning and pre‑training.

A short Q&A follows, discussing details of NIA‑GCN’s second‑order neighbor aggregation, DGENN’s graph construction, and how business rules can be incorporated into re‑ranking graphs.

Tags: Ranking, Recommendation Systems, Graph Neural Networks, Incremental Learning, Heterogeneous Graph, GNN Recall
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
