
Exploring and Applying Large Language Models in Recommendation Systems

The talk by Huawei Noah's Ark Lab researcher Wang Yichao presents a comprehensive exploration of large language models (LLMs) for recommendation systems, covering background challenges, the KAR and Uni-CTR projects, experimental results, and future directions for open‑world, generative recommendation pipelines.

DataFunTalk

Overview – Wang Yichao of Huawei Noah's Ark Lab presented the lab's exploration and application of large language models (LLMs) in recommendation systems. He examined the data, model, and workflow dimensions, and introduced two key Huawei projects that address user-reasoning knowledge construction, feature crossing, and the online serving process.

Background and Problem – Traditional recommendation systems are closed: they rely on domain-specific logs and lack external world knowledge. LLMs bring rich factual and commonsense knowledge along with logical reasoning, which can supplement recommendation testing, scoring, and workflow control, either by fine-tuning the LLM during training or by using it as an inference engine.

LLM4Rec – KAR (Knowledge‑Augmented Recommendation) – KAR prompts LLMs to generate open‑world knowledge for user‑preference reasoning and item factual enrichment, compresses this knowledge into low‑dimensional dense vectors via a multi‑expert adapter network, and feeds the result into traditional recommenders, improving AUC by roughly 1% while keeping inference latency comparable to baseline models.
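The adapter step can be illustrated with a minimal sketch. The class name, dimensions, and gating scheme below are illustrative assumptions, not KAR's actual implementation: a softmax-gated mixture of experts maps a high-dimensional LLM knowledge embedding down to a short dense vector that a conventional CTR model can consume.

```python
import math
import random

random.seed(0)

def linear(x, w):
    # Matrix-vector product; w has shape (out_dim, in_dim).
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def softmax(xs):
    m = max(xs)
    e = [math.exp(v - m) for v in xs]
    s = sum(e)
    return [v / s for v in e]

class KnowledgeAdapter:
    """Hypothetical sketch of KAR's multi-expert adapter: compress an
    LLM-generated knowledge embedding into a low-dimensional dense
    vector via a softmax-gated mixture of experts."""

    def __init__(self, in_dim, out_dim, n_experts=4):
        rnd = lambda r, c: [[random.uniform(-0.1, 0.1) for _ in range(c)]
                            for _ in range(r)]
        self.experts = [rnd(out_dim, in_dim) for _ in range(n_experts)]
        self.gate = rnd(n_experts, in_dim)

    def __call__(self, llm_embedding):
        # Gate decides how much each expert contributes for this input.
        weights = softmax(linear(llm_embedding, self.gate))
        outs = [linear(llm_embedding, w) for w in self.experts]
        # Weighted sum of expert outputs -> dense feature for the CTR model.
        return [sum(w * o[i] for w, o in zip(weights, outs))
                for i in range(len(outs[0]))]

adapter = KnowledgeAdapter(in_dim=16, out_dim=4)
dense = adapter([random.random() for _ in range(16)])
print(len(dense))  # 4
```

The low output dimensionality is what keeps online latency close to the baseline: the expensive LLM reasoning is precomputed offline, and only this small vector enters the serving path.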

LLM4Rec – Uni‑CTR (Unified Cross‑Domain Recommendation) – Uni‑CTR leverages LLMs to build a multi‑scene recommendation foundation. It serializes tabular features into natural‑language prompts, feeds them to a 24‑layer Transformer backbone (SharedBert), and employs Leader and Backbone networks to capture scene‑specific and shared representations, achieving zero‑shot capability and balanced performance across the Amazon Fashion, Musical Instruments, and Gift Cards datasets.
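The serialization step above can be sketched as follows. The function name, prompt template, and feature fields are illustrative assumptions rather than Uni-CTR's exact format: the idea is simply to turn one row of tabular CTR features into a domain-tagged natural-language string for the shared Transformer backbone.

```python
def features_to_prompt(domain, features):
    """Hypothetical sketch: serialize one row of tabular CTR features
    into a natural-language prompt, tagged with its scene/domain, for
    a shared LLM backbone. Field names are illustrative."""
    # Turn each feature into "name: value", e.g. "item title: denim jacket".
    parts = [f"{key.replace('_', ' ')}: {value}"
             for key, value in features.items()]
    return f"[domain: {domain}] " + "; ".join(parts)

prompt = features_to_prompt(
    "Fashion",
    {"user_id": "u42", "item_title": "denim jacket",
     "brand": "Acme", "price": 59.9},
)
print(prompt)
# [domain: Fashion] user id: u42; item title: denim jacket; brand: Acme; price: 59.9
```

Because every scene is expressed in the same textual interface, a new domain only needs a new domain tag, which is what makes zero-shot transfer to unseen scenes plausible.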

Experimental Results – Both KAR and Uni‑CTR were deployed in Huawei’s app market, music, and advertising scenarios. Experiments on Amazon Review datasets show significant AUC improvements, especially in sparse domains, and demonstrate that combining user‑preference and item‑factual knowledge yields the best gains.

Challenges and Outlook – Remaining challenges include joint modeling of collaborative and semantic signals, efficient handling of long textual inputs and ID encoding, and real‑time integration of dynamic data. Future work will focus on opening recommendation systems to world knowledge, shifting from discriminative to generative models, and moving toward end‑to‑end unified recommendation pipelines.

Tags: Artificial Intelligence, machine learning, LLM, recommendation systems, KAR, Uni-CTR
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
