RNNLogic: Learning Logic Rules for Knowledge Graph Reasoning

This article reviews recent advances in knowledge graph reasoning, introduces the RNNLogic framework that jointly learns a rule‑generating LSTM and a stochastic logic programming predictor, and demonstrates its competitive performance and interpretability on benchmark datasets while outlining future neural‑symbolic directions.


Knowledge graphs (KGs) represent facts as triples (h, r, t) and are widely used in recommendation, medicine, and other domains. They are often incomplete, however, which makes KG completion (predicting missing links) a crucial research problem.
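
To make the setting concrete, here is a minimal sketch of a KG stored as a set of triples and the kind of query completion leaves open. The entities and relations are hypothetical examples, not data from the article:

```python
# A toy knowledge graph as a set of (head, relation, tail) triples;
# all names here are illustrative placeholders.
kg = {
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "paris"),
    ("bob", "works_at", "acme"),
}

def known_tails(kg, head, relation):
    """Look up tails already recorded for a query (head, relation, ?)."""
    return {t for (h, r, t) in kg if h == head and r == relation}

# KG completion asks for tails *missing* from the graph: the query
# (alice, lives_in, ?) has no stored answer and must be inferred.
print(known_tails(kg, "alice", "works_at"))  # {'acme'}
```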

Common KG reasoning methods fall into two camps: embedding‑based approaches, which map entities and relations to vectors and achieve good prediction accuracy but lack interpretability, and rule‑learning approaches, such as inductive logic programming and reinforcement‑learning‑based methods, which aim to learn explicit logical rules.
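
As one concrete instance of the embedding‑based family, the sketch below uses a TransE‑style score (not necessarily the model discussed in the talk): a triple (h, r, t) is plausible when h + r lands close to t. The embeddings here are random stand‑ins for learned ones:

```python
import numpy as np

# Entities and relations share one vector space; random vectors stand in
# for embeddings that would normally be learned from the training triples.
rng = np.random.default_rng(0)
dim = 50
ent = {e: rng.normal(size=dim) for e in ["alice", "bob", "acme", "paris"]}
rel = {r: rng.normal(size=dim) for r in ["works_at", "located_in"]}

def score(h, r, t):
    """Higher (less negative) = more plausible triple under TransE."""
    return -np.linalg.norm(ent[h] + rel[r] - ent[t])

# Rank all entities as candidate tails for the query (alice, works_at, ?).
ranking = sorted(ent, key=lambda t: score("alice", "works_at", t), reverse=True)
print(ranking)
```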

The RNNLogic framework addresses these limitations by jointly training a generator (an LSTM that produces chain‑structured logical rules) and a predictor (a stochastic logic programming model that evaluates those rules on the KG). The generator proposes candidate rules, each individually weak, with associated probabilities, while the predictor assigns weights to the rule‑grounded walks on the graph and aggregates them into answer scores.
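
The sketch below illustrates how such a predictor can ground a chain rule: the rule body is followed as a relational walk from the query head, and every reachable tail accumulates the rule's weight. The toy KG, the rule, and the weight value are illustrative assumptions, not details taken from RNNLogic itself:

```python
from collections import defaultdict

# Toy KG reused from the earlier sketch; names are placeholders.
kg = {
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "paris"),
    ("bob", "works_at", "acme"),
}

def walk(kg, head, body):
    """Entities reachable from `head` by following `body`'s relations in order."""
    frontier = {head}
    for relation in body:
        frontier = {t for (h, r, t) in kg if r == relation and h in frontier}
    return frontier

def score_answers(kg, head, rules):
    """`rules` is a list of (body, weight); returns weighted answer scores."""
    scores = defaultdict(float)
    for body, weight in rules:
        for tail in walk(kg, head, body):
            scores[tail] += weight
    return dict(scores)

# Chain rule: lives_in(x, z) <- works_at(x, y) AND located_in(y, z),
# with a placeholder weight of 0.8 standing in for a learned value.
rules = [(["works_at", "located_in"], 0.8)]
print(score_answers(kg, "alice", rules))  # {'paris': 0.8}
```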

Training alternates four steps in an EM‑style loop: (1) generate many candidate rules for a given query; (2) feed both the rules and the KG to the predictor and train it to maximize the probability of the correct answers; (3) select high‑quality rules via posterior inference that combines the generator's likelihoods with the predictor's scores; and (4) feed the selected rules back as training data for the generator.
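
The following sketch paraphrases one such training round; `generator`, `predictor`, and every method called on them are hypothetical interfaces standing in for the paper's actual components, not its API:

```python
# One round of the alternating optimization, mirroring steps (1)-(4) above.
def train_round(generator, predictor, kg, query, answer, n_rules=100, k_best=10):
    # (1) Sample many candidate rules for the query relation from the LSTM.
    rules = generator.sample_rules(query.relation, n=n_rules)

    # (2) Fit the predictor so the correct answer gets high probability
    #     given the KG and the sampled rules.
    predictor.fit(kg, rules, query, answer)

    # (3) Posterior inference: score each rule by combining the generator's
    #     prior (its log-likelihood) with the predictor's evidence of how
    #     much the rule contributed to the correct answer; keep the top k.
    scored = [(r, generator.log_prob(r) + predictor.contribution(r, query, answer))
              for r in rules]
    top_rules = [r for r, _ in sorted(scored, key=lambda x: -x[1])[:k_best]]

    # (4) Feed the selected high-quality rules back as training data
    #     for the generator.
    generator.fit(top_rules)
```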

Experiments on standard benchmarks (FB15k‑237 and WN18RR) show that RNNLogic attains performance comparable to embedding methods while offering superior interpretability. Moreover, a small number of generated rules (e.g., 10–100) suffices for strong results, and extending the predictor to combine LSTM‑based and KG‑embedding scores (RNNLogic+) improves accuracy further, especially on sparse graphs such as WN18RR.

Future work includes developing more powerful neural‑symbolic models and integrating textual information with KG reasoning, as real‑world applications often involve mixed graph‑text data.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

AI, reasoning, knowledge graph, neural-symbolic, logic rules, RNNLogic
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
