Survey of Recent Generative AI Approaches for Recommender Systems
This article reviews a series of recent research papers that explore how generative AI, especially large language models (LLMs), can be integrated into recommender systems to improve next‑basket recommendation, instruction following, zero‑shot ranking, fairness evaluation, and generative retrieval paradigms.
With the rapid emergence of ChatGPT and the release of GPT‑4, generative AI has attracted unprecedented attention, achieving remarkable results across natural language processing (NLP) and computer vision (CV) tasks. In the field of recommender systems (RS), researchers are investigating how to incorporate generative AI effectively despite challenges such as the user/item ID paradigm and the need for domain‑specific knowledge.
1. Generative Next‑Basket Recommendation
The paper "GeRec" (RecSys'23) proposes an autoregressive generative model that produces the next shopping basket while modeling multi‑granular user preferences at both the item and basket levels, together with the relationships among items within a basket, thereby improving both relevance and diversity.
2. Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach
InstructRec formalizes recommendation instructions into three factors—preference, intent, and task form—and uses a self‑instruct pipeline to generate instruction data with a teacher‑LLM, fine‑tuning Flan‑T5 to understand natural‑language commands for personalized recommendation.
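To make the three-factor decomposition concrete, here is a minimal sketch of how such an instruction might be assembled before being fed to the fine-tuned model. The function name, field labels, and example values are illustrative assumptions, not InstructRec's actual template:

```python
def build_instruction(preference: str, intent: str, task_form: str) -> str:
    """Compose a natural-language recommendation instruction from the
    three factors InstructRec formalizes: preference (long-term taste),
    intent (current need), and task form (expected output format)."""
    return (
        f"User preference: {preference}\n"
        f"Current intent: {intent}\n"
        f"Task: {task_form}"
    )

prompt = build_instruction(
    preference="enjoys hard science fiction novels",
    intent="looking for a short read for a weekend trip",
    task_form="recommend 3 books and briefly justify each",
)
print(prompt)
```

In the paper's pipeline, a teacher LLM populates many such (preference, intent, task form) combinations automatically, producing the instruction-tuning data for Flan‑T5.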
3. Large Language Models are Zero‑Shot Rankers for Recommender Systems
This work evaluates LLMs as zero‑shot rankers by framing recommendation as a conditional ranking task, designing prompts that combine user history, candidate items, and ranking instructions; results show LLMs can personalize rankings but exhibit position and popularity biases.
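A rough sketch of the conditional-ranking prompt, under assumed wording (the paper's exact templates differ). Shuffling the candidate order before prompting is one simple way to probe the position bias the authors report, where the LLM favors items listed earlier:

```python
import random

def ranking_prompt(history, candidates, seed=0):
    """Frame recommendation as conditional ranking: condition on the
    user's interaction history and ask the LLM to order a shuffled
    candidate list. Shuffling decouples a candidate's prompt position
    from its identity, exposing position bias across runs."""
    rng = random.Random(seed)
    shuffled = candidates[:]
    rng.shuffle(shuffled)
    lines = ["I've watched the following movies, in order:"]
    lines += [f"  {i + 1}. {title}" for i, title in enumerate(history)]
    lines.append("Rank these candidates by how likely I am to watch them next:")
    lines += [f"  - {title}" for title in shuffled]
    lines.append("Answer with the ranked list only.")
    return "\n".join(lines)
```

Averaging the model's rankings over several shuffles (different seeds) is a straightforward mitigation for the position bias noted above.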
4. Can ChatGPT Make Fair Recommendation? A Fairness Evaluation Benchmark for Recommendation with Large Language Model
The FaiRLLM benchmark introduces metrics and a dataset covering two recommendation scenarios (music and movies) with eight sensitive attributes, comparing model outputs under neutral and sensitive instructions; experiments reveal persistent unfairness in ChatGPT recommendations.
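The core measurement idea can be sketched with a simple set-overlap similarity: compare the recommendation list produced under a neutral instruction against the lists produced when a sensitive attribute is revealed. This uses Jaccard similarity as a stand-in for the benchmark's similarity metrics; the function names and the scalar "gap" summary are illustrative assumptions:

```python
def jaccard(a, b):
    """Set overlap between two recommendation lists in [0, 1]."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def unfairness(neutral_recs, sensitive_recs_by_group):
    """Per-group similarity to the neutral list, plus the max-min gap.
    A larger shift when an attribute is revealed, or a larger gap
    between groups, indicates less fair behavior."""
    sims = {group: jaccard(neutral_recs, recs)
            for group, recs in sensitive_recs_by_group.items()}
    return sims, max(sims.values()) - min(sims.values())
```

For example, if revealing one attribute value leaves the list unchanged while another replaces most of it, the per-group similarities diverge and the gap grows, which is the pattern the experiments surface in ChatGPT's recommendations.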
5. Generative Recommendation: Towards Next‑generation Recommender Paradigm
GeneRec proposes a generative recommendation paradigm that (1) generates personalized content via generative AI and (2) incorporates user instructions to guide generation, using an AI editor and AI creator to customize existing items or create new ones.
6. Large Language Model Augmented Narrative Driven Recommendations
MINT leverages GPT‑3 to generate narrative queries from user‑provided preferences and contexts, then uses FLAN‑T5 to retrieve and rank candidate items based on relevance to the narrative, demonstrating effective low‑parameter retrieval models.
7. FANS: Fast Non‑Autoregressive Sequence Generation for Item List Continuation
FANS introduces a non‑autoregressive Transformer‑based model with a two‑stage classifier to generate the next K items in a list, achieving significant inference speedups while maintaining recommendation quality, validated in industrial settings.
8. Generative Sequential Recommendation with GPTRec
GPTRec employs a GPT‑2 backbone with SVD Tokenisation to split item IDs into sub‑IDs, reducing embedding size by 40%, and introduces a Next‑K recommendation strategy that matches SASRec performance while offering more flexible output.
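The idea behind sub-ID tokenization can be sketched as follows: given per-item latent factor vectors (e.g. from an SVD of the interaction matrix, computed elsewhere), each factor dimension is quantized into a small number of buckets, so an item is represented by a short tuple of sub-IDs drawn from a shared vocabulary of size factors × buckets rather than one embedding row per item. The equal-frequency bucketing here is a simplifying assumption, not GPTRec's exact procedure:

```python
def factor_sub_ids(item_factors, n_buckets=8):
    """Quantize each latent-factor dimension into equal-frequency
    buckets; item i's token sequence is the tuple of its bucket
    indices, one sub-ID per factor."""
    n_factors = len(item_factors[0])
    sub_ids = [[0] * n_factors for _ in item_factors]
    for f in range(n_factors):
        # sort items by this factor's value, then assign bucket by rank
        col = sorted((vec[f], i) for i, vec in enumerate(item_factors))
        bucket_size = max(1, len(col) // n_buckets)
        for rank, (_, i) in enumerate(col):
            sub_ids[i][f] = min(rank // bucket_size, n_buckets - 1)
    return [tuple(ids) for ids in sub_ids]
```

Items with similar latent factors end up sharing sub-IDs, which is what lets the shrunken embedding table generalize across items.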
9. GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation
GPT4Rec combines GPT‑2 generation of search queries from user history with BM25 retrieval, enabling diverse recall and better coverage of user interests, addressing limitations of traditional models regarding content utilization and interpretability.
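The retrieval half of this pipeline is standard BM25; a minimal self-contained implementation is sketched below (GPT4Rec itself uses an off-the-shelf BM25 index, and the whitespace tokenization here is a simplification). The LLM-generated search queries would be issued against an index like this to recall candidate items:

```python
import math
from collections import Counter

class BM25:
    """Minimal Okapi BM25 over whitespace-tokenized documents."""

    def __init__(self, docs, k1=1.5, b=0.75):
        self.docs = [d.lower().split() for d in docs]
        self.k1, self.b = k1, b
        self.N = len(self.docs)
        self.avgdl = sum(len(d) for d in self.docs) / self.N
        # document frequency of each term
        self.df = Counter(t for d in self.docs for t in set(d))

    def score(self, query, idx):
        doc, tf, s = self.docs[idx], Counter(self.docs[idx]), 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            idf = math.log(1 + (self.N - self.df[t] + 0.5) / (self.df[t] + 0.5))
            norm = self.k1 * (1 - self.b + self.b * len(doc) / self.avgdl)
            s += idf * tf[t] * (self.k1 + 1) / (tf[t] + norm)
        return s

    def top(self, query, k=3):
        scores = [(self.score(query, i), i) for i in range(self.N)]
        return [i for _, i in sorted(scores, key=lambda x: -x[0])[:k]]
```

Because each generated query recalls its own BM25 result list, issuing several diverse queries per user is what broadens coverage of distinct interests.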
10. Recommender Systems with Generative Retrieval
TIGER proposes a single‑stage generative retrieval paradigm that predicts semantic IDs for candidate items via autoregressive decoding, merging quantization and generation to improve performance and cold‑start recommendation compared to traditional two‑stage pipelines.
DataFunSummit
Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.