
Exploring and Applying Large Language Models in Recommendation Systems

Wang Yichao from Huawei Noah's Ark Lab presents a comprehensive exploration of large language models in recommendation systems, covering background, challenges, two key projects (LLM4Rec and Uni-CTR), experimental results, and future directions for open, knowledge‑enhanced, generative recommendation pipelines.

DataFunSummit

In this talk, Wang Yichao, a senior engineer at Huawei Noah's Ark Lab, introduces the use of large language models (LLMs) for recommendation systems, outlining the motivation, problems, and potential benefits of integrating open‑world knowledge and reasoning capabilities into traditional recommender pipelines.

The presentation first describes the background of conventional recommendation systems, which rely on closed‑loop interaction data and struggle with cold‑start and sparse scenarios, and then explains how LLMs can provide factual knowledge and logical inference to enrich user and item representations.

Two major projects are highlighted: LLM4Rec (KAR), which leverages LLMs to generate knowledge about user preferences and item facts and compress it into dense vectors for downstream models, and Uni-CTR, a multi‑scene recommendation foundation model that uses LLM‑driven prompts to encode structured data and a leader‑backbone architecture to capture both scene‑specific and shared representations.
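The KAR-style knowledge-augmentation idea can be sketched in a few lines: an LLM produces textual knowledge about a user's preferences and an item's facts, that text is compressed into fixed-size dense vectors, and the vectors are fed to a conventional downstream model as extra features. The sketch below is a toy illustration only: it substitutes feature hashing for KAR's learned knowledge adapter, and the `user_pref`/`item_fact` strings stand in for real LLM outputs.

```python
import hashlib

DIM = 16  # toy embedding dimension (assumption; real systems use hundreds of dims)

def encode_knowledge(text: str, dim: int = DIM) -> list[float]:
    """Compress LLM-generated knowledge text into a dense unit vector.

    Feature hashing here is a stand-in for the learned adapter/encoder
    that KAR uses to turn LLM reasoning into downstream-model features.
    """
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        # Signed hashing: bucket by h % dim, sign by a second hash bit.
        vec[h % dim] += 1.0 if (h >> 8) % 2 == 0 else -1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

# Hypothetical LLM outputs (in KAR these come from preference-reasoning
# and item-factual-knowledge prompts to an actual LLM).
user_pref = "enjoys acoustic guitar and folk music playlists"
item_fact = "entry-level acoustic guitar with spruce top"

user_vec = encode_knowledge(user_pref)
item_vec = encode_knowledge(item_fact)

# A downstream CTR model would consume these vectors as side features;
# a dot-product interaction is the simplest possible consumer.
interaction = sum(u * i for u, i in zip(user_vec, item_vec))
```

Because the knowledge is pre-encoded into vectors offline, the downstream model's serving path stays unchanged, which is consistent with the talk's point about comparable inference latency.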

Extensive experiments on Huawei's advertising, music, and app marketplaces, as well as on public Amazon Review datasets (Fashion, Musical Instruments, Gift Cards), demonstrate that incorporating LLM‑derived knowledge consistently improves AUC by around 1% and yields balanced performance across multiple scenarios, while maintaining comparable inference latency.

The talk concludes with challenges—joint modeling of collaborative and semantic signals, efficient input encoding, and real‑time data integration—and a forward‑looking vision that recommendation systems will evolve from discriminative, multi‑stage pipelines to end‑to‑end generative models powered by open‑world LLMs.

Tags: AI, large language models, recommendation systems, LLM4Rec, Uni-CTR, knowledge integration
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing news and speaker talks from big data and AI industry summits, along with regularly released downloadable resource packs.
