
Explainable Recommendation: Background, Development History, Graph‑Based Structured Explanations, and Natural Language Generation

This article provides a comprehensive overview of explainable recommendation, covering its motivation, evolution, graph‑based structured explanation techniques, natural‑language generation methods, recent research advances, and open challenges such as fact‑checking, low‑resource scenarios, and evaluation metrics.

DataFunSummit

Guest Speaker: Wang Xiting, PhD, Microsoft Research Asia (Editor: Ma Yue, Kuaishou) – Produced by DataFunTalk.

Introduction: Explainable recommendation is an emerging direction in recommender systems that is closely related to knowledge graphs and natural language understanding.

Background of explainable recommendation

Development history of explainable recommendation

Progress of graph‑based structured explanations

Progress of unstructured explanations based on natural language generation

01 Explainable Recommendation Background

Traditional recommender systems only provide a score, leaving users wondering "why" a particular item is suggested. Explainable recommendation aims to give user‑friendly reasons, similar to how friends recommend a restaurant by mentioning popularity, specific dishes, or proximity, thereby increasing trust and reducing suspicion.

Industry adoption is growing: Meta, Amazon, Ele.me, Last.fm and others have added explanations to improve user experience and click‑through rates.

Traditional vs. Explainable Recommendation

Traditional models learn a function f(u, v) → r, where r is the predicted preference score. Explainable recommendation extends this by:

Using an interpretable function f

Generating an additional explanation Y (textual or otherwise) alongside the predicted score

Explanation Y can be structured (graph‑based) or unstructured (natural language). The goal is to improve satisfaction, trust, and click probability.
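The contrast between f(u, v) → r and its explainable extension can be sketched in code. The following is a minimal, hypothetical illustration (all feature names and the `explain_recommend` function are invented for this sketch): an interpretable f scores a user–item pair by feature overlap, and the overlapping features double as the explanation Y.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    score: float          # predicted preference r
    explanation: str      # natural-language reason Y

def explain_recommend(user_feats: dict, item_feats: dict) -> Recommendation:
    """Toy interpretable f(u, v): the score is a weighted feature overlap,
    and the shared features directly form the explanation."""
    shared = sorted(set(user_feats) & set(item_feats))
    score = sum(user_feats[k] * item_feats[k] for k in shared)
    if shared:
        explanation = "Recommended because you like: " + ", ".join(shared)
    else:
        explanation = "Recommended based on overall popularity"
    return Recommendation(score=score, explanation=explanation)

# Mirrors the restaurant example: popularity, specific dishes, proximity
rec = explain_recommend(
    {"sichuan_cuisine": 0.9, "nearby": 0.7},
    {"sichuan_cuisine": 0.8, "nearby": 0.5, "popular": 0.9},
)
```

Real systems replace the feature-overlap score with a learned model; the point of the sketch is only the interface, returning a reason together with the score.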

02 Development History

Before 2015, explanations were template‑based, leading to low diversity and high manual effort. Since 2016, free‑form explanations have emerged, leveraging knowledge graphs and natural language generation to produce diverse, personalized reasons without predefined templates.

Graph‑based explanations provide high model interpretability by exposing the reasoning path, while language‑based explanations are more user‑friendly.

03 Progress of Graph‑Based Structured Explanations

Knowledge graphs (KG) enrich recommendation models with abundant side information and improve explainability. Two main KG techniques are:

KG Embedding – learns vector representations for entities and relations, preserving graph topology.

Deep‑Learning‑based KG reasoning – discovers multi‑hop paths between users and items, offering transparent, step‑by‑step reasoning.
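A widely used embedding objective is the translation-style score of TransE, where a triple (h, r, t) is plausible when h + r ≈ t in vector space. The sketch below uses random, untrained vectors and invented entity names purely to illustrate the scoring function, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical tiny KG: entities and relations as learnable vectors
entities = {e: rng.normal(size=dim) for e in ["user_1", "hotpot_place", "Sichuan"]}
relations = {r: rng.normal(size=dim) for r in ["likes", "has_category"]}

def transe_score(head: str, rel: str, tail: str) -> float:
    """TransE plausibility: smaller ||h + r - t|| means the triple is
    more likely, so we negate the norm (higher score = more plausible)."""
    return -np.linalg.norm(entities[head] + relations[rel] - entities[tail])

s = transe_score("user_1", "likes", "hotpot_place")
```

Training would adjust the vectors so that observed triples score higher than corrupted ones, which is how the embedding comes to preserve graph topology.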

Representative works include:

KPRN (AAAI 2019): Knowledge‑Aware Path Recurrent Network – limited to small graphs and short paths.

PGPR (SIGIR 2019): Policy‑Guided Path Reasoning – uses reinforcement learning to explore reasoning paths without ground‑truth paths for supervision.

ADAC (SIGIR 2020) – leverages imperfect demonstration paths as auxiliary reward signals to guide and stabilize RL optimization.
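The core path-reasoning idea behind these works can be illustrated with a breadth-first search over a toy triple store: each discovered user-to-item path doubles as a step-by-step explanation. All entities and relations below are invented, and real systems learn which paths to expand rather than enumerating them exhaustively.

```python
from collections import deque

# Hypothetical toy KG as (head, relation, tail) triples
triples = [
    ("user_1", "likes", "movie_A"),
    ("movie_A", "directed_by", "director_X"),
    ("director_X", "directed", "movie_B"),
]
adj: dict = {}
for h, r, t in triples:
    adj.setdefault(h, []).append((r, t))

def find_paths(start: str, goal: str, max_hops: int = 3) -> list:
    """Enumerate reasoning paths up to max_hops. Each path alternates
    node, relation, node, ... and reads as a chain of reasons."""
    paths, queue = [], deque([(start, [start])])
    while queue:
        node, path = queue.popleft()
        if node == goal:
            paths.append(path)
            continue
        if (len(path) - 1) // 2 >= max_hops:   # hops used so far
            continue
        for rel, nxt in adj.get(node, []):
            queue.append((nxt, path + [rel, nxt]))
    return paths

paths = find_paths("user_1", "movie_B")
```

Here the single discovered path reads "user_1 likes movie_A, which was directed by director_X, who also directed movie_B", which is exactly the kind of transparent, multi-hop explanation these models expose.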

04 Progress of Unstructured Explanations via Natural Language Generation

Early NLG approaches used hand‑crafted templates, which lacked diversity and required manual design. Recent advances employ retrieval from large text corpora and, most notably, end‑to‑end generation with large pre‑trained language models (e.g., BERT, GPT). These models can produce fluent, personalized explanations but may suffer from grammatical errors or low relevance.

State‑of‑the‑art methods combine pre‑training on Wikipedia, fine‑tuning on domain‑specific data, and reinforcement learning to directly optimize user‑click metrics.
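The reinforcement-learning piece can be sketched as a REINFORCE-style bandit: a softmax policy picks among candidate explanations and is updated from a simulated click reward. This is a toy stand-in for the idea of optimizing click metrics directly, not the method of any specific paper, and the reward values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
logits = np.zeros(3)                    # policy over 3 candidate explanations
rewards = np.array([0.1, 0.8, 0.3])     # simulated click-through rates
lr = 0.5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(500):
    probs = softmax(logits)
    a = rng.choice(3, p=probs)          # sample an explanation to show
    r = rewards[a]                      # observe the (simulated) click reward
    grad = -probs
    grad[a] += 1.0                      # grad of log pi(a) for a softmax policy
    logits += lr * r * grad             # REINFORCE update

best = int(np.argmax(logits))           # policy concentrates on the best CTR
```

In a real system the three candidates would be generated texts, the reward would come from logged user clicks, and the policy gradient would flow into the language model's parameters.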

05 Summary

Explainable recommendation is still in its early stages. Key open problems include:

Fact‑checking: ensuring the correctness of generated explanations.

Low‑resource scenarios: producing high‑quality explanations with little or no ground‑truth data.

Evaluation: designing reliable offline metrics beyond subjective user studies.

Future directions should integrate user feedback (clicks, dwell time, interests) into the model loop, making explanations a core component of recommender system design rather than an afterthought.

Thank you for listening!


Tags: artificial intelligence, machine learning, recommender systems, knowledge graph, natural language generation, explainable recommendation