Relink: Turning GraphRAG into a Dynamic, Query‑Driven Knowledge Graph

Relink introduces a ‘reason‑and‑construct’ paradigm that builds knowledge‑graph paths during inference. It combines a high‑precision factual graph with a high‑recall potential‑relation pool, using query‑driven dynamic path expansion and contrastive alignment to markedly improve multi‑hop QA performance and robustness to sparse knowledge.

Current large language models (LLMs) excel at open‑domain question answering but suffer from hallucinations. Traditional GraphRAG follows a “build‑then‑reason” pipeline that relies on a static knowledge graph (KG), leading to two major problems: incomplete KG coverage and low signal‑to‑noise ratio caused by irrelevant facts.

Core Idea: Reason‑and‑Construct

The authors propose a new “reason‑and‑construct” paradigm and the Relink framework, where the KG is dynamically built during inference to serve the query rather than forcing the query to fit a pre‑built graph.

Heterogeneous Knowledge Source Fusion

High‑precision factual graph (Gb): a reliable but limited skeleton constructed by LLM extraction.

High‑recall potential‑relation pool (Rc): generated from the raw corpus using entity co‑occurrence statistics (PMI) and used to fill missing links.
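As a rough illustration of how such a pool could be mined, here is a minimal PMI filter over per‑sentence entity lists. The threshold, the sentence‑level co‑occurrence window, and the probability estimates are illustrative assumptions, not the paper's exact recipe:

```python
import math
from collections import Counter
from itertools import combinations

def build_relation_pool(sentences, pmi_threshold=2.0):
    """Collect entity pairs whose pointwise mutual information (PMI)
    exceeds a threshold.

    `sentences` is a list of entity lists, one per sentence.
    Returns (entity_a, entity_b, pmi) tuples, highest PMI first.
    """
    single = Counter()
    pair = Counter()
    for ents in sentences:
        uniq = sorted(set(ents))
        single.update(uniq)
        pair.update(combinations(uniq, 2))

    total = sum(single.values())
    total_pairs = sum(pair.values()) or 1

    pool = []
    for (a, b), count in pair.items():
        p_ab = count / total_pairs
        p_a, p_b = single[a] / total, single[b] / total
        pmi = math.log2(p_ab / (p_a * p_b))
        if pmi >= pmi_threshold:
            pool.append((a, b, pmi))
    return sorted(pool, key=lambda t: -t[2])
```

Pairs involving frequent entities need a higher joint count to pass the same threshold, which is the usual motivation for PMI over raw co‑occurrence counts.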

Query‑Driven Dynamic Path Exploration

Coarse ranking: a lightweight trainable ranker quickly filters candidate edges.

Fine ranking: an LLM evaluates the semantic contribution of each candidate to the query.

During selection, the LLM instantiates missing triples on‑the‑fly based on the query and context, achieving “on‑demand completion”.

Unified Semantic Space Alignment

Contrastive learning with an InfoNCE loss encodes explicit facts from Gb and potential relations from Rc into a shared space, allowing the ranker to compare heterogeneous evidence uniformly.
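A symmetric InfoNCE objective of this kind can be sketched as follows; the temperature value and the in‑batch‑negatives setup are common defaults assumed here, not details taken from the paper:

```python
import numpy as np

def info_nce(fact_emb, rel_emb, temperature=0.07):
    """Symmetric InfoNCE over matched (fact, relation) embedding rows.

    Row i of `fact_emb` and row i of `rel_emb` form a positive pair;
    all other rows in the batch act as in-batch negatives.
    """
    f = fact_emb / np.linalg.norm(fact_emb, axis=1, keepdims=True)
    r = rel_emb / np.linalg.norm(rel_emb, axis=1, keepdims=True)
    logits = f @ r.T / temperature  # (B, B) scaled cosine similarities

    def ce_diag(lg):
        # cross-entropy where the target for row i is column i
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    return (ce_diag(logits) + ce_diag(logits.T)) / 2
```

Minimizing this loss pulls each explicit fact toward its matching potential relation and pushes it away from the rest of the batch, which is what lets a single ranker score evidence from Gb and Rc on the same scale.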

Model Architecture and Workflow

Relink consists of three modules:

Heterogeneous source construction: Gb is built via LLM extraction; Rc is built from PMI‑filtered entity co‑occurrences and encoded with a token‑masking encoder (Encoder_L).

Dynamic path expansion & ranking: Starting from the query entities, candidates from Gb and Rc are iteratively expanded and scored in the unified space; the top‑K paths are kept for the next iteration, and highly ranked potential relations are instantiated into concrete facts.

Evidence‑grounded answer generation: The compact evidence graph and its related source sentences are fed to a generation LLM, yielding accurate, verifiable, and traceable answers.
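The expansion‑and‑ranking step above behaves like a small beam search over graph paths. The sketch below assumes caller‑supplied `neighbors` and `score` functions as stand‑ins for Relink's merged Gb/Rc candidate generator and trained unified‑space ranker; the hop and beam defaults are illustrative:

```python
import heapq

def expand_paths(query, seeds, neighbors, score, hops=2, top_k=3):
    """Iterative top-K path expansion from the query entities.

    `neighbors(entity)` yields (relation, entity) candidate edges;
    `score(query, path)` returns the ranker's relevance score.
    Returns the surviving (path, score) beams after `hops` rounds.
    """
    beams = [((e,), 0.0) for e in seeds]
    for _ in range(hops):
        candidates = []
        for path, _ in beams:
            for rel, ent in neighbors(path[-1]):
                new_path = path + (rel, ent)
                candidates.append((new_path, score(query, new_path)))
        if not candidates:
            break  # frontier exhausted before the hop budget
        beams = heapq.nlargest(top_k, candidates, key=lambda c: c[1])
    return beams
```

Because scoring is query‑conditioned, the same graph can yield different surviving paths for different questions, which is the sense in which the graph adapts to the query.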

Training follows a two‑stage optimization: first the encoder is frozen and the ranker is trained with L_rank; then the ranker is frozen and the encoder is aligned with the contrastive loss L_contra. The two stages alternate until convergence.
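The alternation can be sketched as a simple driver loop. The `rank_step`/`contra_step` callables and the `frozen` flags are illustrative stand‑ins for the L_rank and L_contra updates, not the paper's actual training code:

```python
def alternating_train(encoder, ranker, batches, rounds, rank_step, contra_step):
    """Alternate the two optimization stages for a fixed round budget.

    Stage 1: encoder frozen, ranker updated via `rank_step` (L_rank).
    Stage 2: ranker frozen, encoder updated via `contra_step` (L_contra).
    """
    for _ in range(rounds):
        encoder.frozen, ranker.frozen = True, False
        for batch in batches:
            rank_step(ranker, encoder, batch)
        encoder.frozen, ranker.frozen = False, True
        for batch in batches:
            contra_step(encoder, ranker, batch)
    return encoder, ranker
```

Freezing one module while updating the other keeps each stage's objective stationary, which is the usual rationale for this kind of alternating scheme.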

Experimental Results

Evaluated on five multi‑hop QA benchmarks (2WikiMultiHopQA, HotpotQA, ConcurrentQA, MuSiQue‑Ans/Full), Relink outperforms all baselines, achieving an average gain of 5.4 % Exact Match (EM) and 5.2 % F1. Compared with the previous best HippoRAG, it improves EM by 12.0 % on HotpotQA and 32.6 % on the challenging MuSiQue‑Full.

Ablation Study

Removing Rc drops EM by 5.7 % on HotpotQA.

Removing Gb causes a 12.9 % EM collapse, highlighting the importance of a reliable skeleton.

Replacing the query‑driven ranker with generic semantic similarity reduces EM by 19.4 %.

Discarding the contrastive alignment loss decreases EM by 7.2 %.

Robustness to Knowledge Sparsity

When 90 % of edges in the explicit graph are removed, static methods lose 34.7 % F1, while Relink's F1 drops only slightly, to 0.669, demonstrating strong resilience through dynamic repair.

Case Study

A static baseline is misled by a noisy “resides in” fact, whereas Relink dynamically constructs the correct “composer of → born in” chain, which the query‑driven ranker then selects.

Conclusion and Implications

Relink’s “reason‑and‑construct” paradigm shifts GraphRAG from “query adapts to graph” to “graph adapts to query”. Its contributions include dynamic path repair, interference‑fact filtering, and heterogeneous knowledge fusion, offering a practical solution for robust multi‑hop reasoning in incomplete knowledge environments.

Relink: Constructing Query-Driven Evidence Graph On-the-Fly for GraphRAG
https://arxiv.org/pdf/2601.07192
https://github.com/DMiC-Lab-HFUT/Relink
Tags: contrastive learning, LLM, Knowledge Graph, GraphRAG, Dynamic Retrieval, Multi-hop QA
Written by

PaperAgent

Daily updates, analyzing cutting-edge AI research papers
