
Applying Large Language Models to Recommendation Systems at Ant Group

The article presents Ant Group's research on integrating large language models into recommendation pipelines, covering background challenges, knowledge extraction, teacher‑model distillation, efficient deployment, experimental results, and future directions to improve accuracy and reduce bias.

DataFunSummit

Introduction: This article shares Ant Group's research and deployment of large models in recommendation scenarios.

Background: Traditional recommendation pipelines suffer from exposure bias and popularity bias; integrating large language models can inject world knowledge to mitigate these biases.

Approach 1 – Knowledge Extraction: A two‑stage pipeline where the LLM generates structured or textual knowledge graphs from online corpora, using relation type selection, entity generation, and prompt‑driven filtering.
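The extraction steps above can be sketched as a prompt-driven pipeline. This is an illustrative outline, not Ant Group's actual implementation: `call_llm` stands in for any LLM completion API, and the relation types and prompt wording are assumptions.

```python
# Hypothetical sketch of the knowledge-extraction pipeline: relation type
# selection, entity generation, then prompt-driven filtering of the triples.
# `call_llm` is a placeholder for any LLM completion function.

RELATION_TYPES = ["category", "brand", "complement", "substitute"]

def select_relations(item: str, call_llm) -> list:
    """Step 1: ask the LLM which relation types apply to this item."""
    answer = call_llm(f"Which of {RELATION_TYPES} apply to item '{item}'?")
    return [r for r in RELATION_TYPES if r in answer]

def generate_entities(item: str, relation: str, call_llm) -> list:
    """Step 2: generate tail entities for each selected relation."""
    raw = call_llm(f"List entities related to '{item}' via '{relation}'.")
    return [e.strip() for e in raw.split(",") if e.strip()]

def filter_triples(triples: list, call_llm) -> list:
    """Step 3: prompt-driven filtering — keep triples the LLM confirms."""
    return [t for t in triples
            if "yes" in call_llm(f"Is {t} plausible? yes/no").lower()]

def extract_knowledge(item: str, call_llm) -> list:
    triples = [(item, rel, ent)
               for rel in select_relations(item, call_llm)
               for ent in generate_entities(item, rel, call_llm)]
    return filter_triples(triples, call_llm)
```

The filtered (head, relation, tail) triples can then be assembled into the structured knowledge graph that feeds the downstream recommender.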

Approach 2 – LLM as Teacher: Distilling GPT‑3.5/ChatGPT into a smaller model (LLaMA‑2‑7B) to produce reasoning‑rich recommendation rationales, followed by generative‑loss fine‑tuning and embedding of the rationales into the recommender.

Approach 3 – Efficient Deployment: Producing seed‑user knowledge embeddings and serving them online, reducing per‑user inference cost while preserving performance.
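One way to realize this serving scheme: compute LLM-derived knowledge embeddings offline for a set of seed users, then at request time map a new user to the nearest seed embedding instead of running LLM inference per user. The cosine-similarity lookup below is an assumed design for the sketch, not the confirmed production mechanism.

```python
import math

# Offline: seed users get precomputed LLM knowledge embeddings.
# Online: a new user's profile vector is matched to the closest seed,
# so no per-user LLM call is needed at serving time.

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest_seed_embedding(user_vec, seed_embeddings: dict):
    """Return the precomputed seed embedding closest to the user profile."""
    best = max(seed_embeddings,
               key=lambda k: cosine(user_vec, seed_embeddings[k]))
    return seed_embeddings[best]
```

At production scale the exhaustive `max` would be replaced by an approximate nearest-neighbor index, but the cost structure is the same: LLM inference is amortized across the seed set.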

Experiments: Evaluated on the backbone models GRU4Rec, SASRec, and SRGNN; the DLLM2Rec model improves long‑tail recommendation, reduces popularity bias, and outperforms baseline methods.

Challenges & Future Work: Open issues include knowledge reliability and cross‑modal distillation; ranking‑based distillation is addressed by combining a ranking loss with embedding alignment.
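The combined distillation objective can be sketched as a pairwise ranking term that transfers the teacher's item ordering, plus an embedding-alignment (MSE) term between student and teacher representations. The BPR-style pairwise form and the weighting are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Combined distillation objective sketch:
#   ranking_loss  — BPR-style: items the teacher ranks higher should
#                   receive higher student scores.
#   alignment_loss — MSE between aligned student/teacher embeddings.

def ranking_loss(student_scores: dict, teacher_ranking: list) -> float:
    loss, pairs = 0.0, 0
    for i in range(len(teacher_ranking)):
        for j in range(i + 1, len(teacher_ranking)):
            hi, lo = teacher_ranking[i], teacher_ranking[j]
            diff = student_scores[hi] - student_scores[lo]
            loss += math.log(1.0 + math.exp(-diff))  # -log(sigmoid(diff))
            pairs += 1
    return loss / pairs

def alignment_loss(student_emb, teacher_emb) -> float:
    return sum((s - t) ** 2
               for s, t in zip(student_emb, teacher_emb)) / len(student_emb)

def distill_loss(student_scores, teacher_ranking,
                 student_emb, teacher_emb, alpha: float = 0.5) -> float:
    return (ranking_loss(student_scores, teacher_ranking)
            + alpha * alignment_loss(student_emb, teacher_emb))
```

The ranking term makes the student imitate the teacher's preferences rather than raw scores, which is more robust when the teacher and student operate in different output spaces.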

Q&A: Discussed model size (LLaMA‑2‑7B), offline vs. online inference, and handling data sparsity in low‑frequency scenarios.

Conclusion: Large models can be integrated into production recommendation systems through knowledge extraction, teacher‑student distillation, and efficient serving, delivering better accuracy and robustness.

Tags: AI · LLM · Recommendation systems · Knowledge graph · Model distillation
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
