Temporal Knowledge Graph Question Answering: The TSQA Approach and Experimental Evaluation
This article presents a comprehensive overview of temporal knowledge graphs, outlines the challenges of building question‑answering systems over them, introduces the TSQA method with its three‑step pipeline for time‑sensitive reasoning, and reports experimental results showing significant improvements on complex queries.
Temporal Knowledge Graphs (TKGs) extend static knowledge graphs by adding a temporal dimension, turning each fact into a four‑tuple (head, relation, tail, time). This enables modeling of time‑dependent facts such as "Barack Obama was US President from 2009 to 2017".
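The four‑tuple representation can be sketched in a few lines of Python. This is an illustrative toy model, not the paper's data format; the class and function names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalFact:
    """A TKG fact as a four-tuple: (head, relation, tail, time).
    The time is modeled here as a [start, end] year interval."""
    head: str
    relation: str
    tail: str
    start: int
    end: int

facts = [
    TemporalFact("Barack Obama", "position held", "US President", 2009, 2017),
    TemporalFact("George W. Bush", "position held", "US President", 2001, 2009),
]

def holds_at(fact: TemporalFact, year: int) -> bool:
    """A fact is valid at `year` iff the year falls inside its interval."""
    return fact.start <= year <= fact.end

# Time-dependent lookup: who held the presidency in 2010?
print([f.head for f in facts if f.relation == "position held" and holds_at(f, 2010)])
# → ['Barack Obama']
```

The key difference from a static KG is that the same (head, relation, tail) triple can appear with disjoint validity intervals, so queries must filter by time.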
The paper first categorizes TKG‑based QA problems into simple (single‑hop) and complex (multi‑step) queries, describing three types of complex questions: Before/After, First/Last, and Time‑Join. It then discusses two families of TKG representation methods: (1) converting four‑tuples into time‑specific triples (e.g., HyTE, Temp) and (2) directly modeling four‑tuples (e.g., extensions of TransE, TComplEx).
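The first family of representation methods can be illustrated with a minimal sketch of the HyTE‑style idea: project the graph into per‑timestamp snapshots so that each four‑tuple becomes ordinary triples in every snapshot where it is valid. The function below is a simplification for illustration, not HyTE's actual learned hyperplane projection.

```python
from collections import defaultdict

def to_time_specific_triples(facts, years):
    """Convert four-tuples (head, rel, tail, start, end) into
    per-year snapshot graphs of plain (head, rel, tail) triples."""
    snapshots = defaultdict(list)
    for head, rel, tail, start, end in facts:
        for year in years:
            if start <= year <= end:
                snapshots[year].append((head, rel, tail))
    return dict(snapshots)

facts = [("Barack Obama", "position held", "US President", 2009, 2017)]
snaps = to_time_specific_triples(facts, range(2008, 2011))
# Snapshots exist only for 2009 and 2010, the years inside the interval.
```

The second family (e.g., TComplEx) instead scores the four‑tuple directly, learning an embedding for the timestamp itself rather than materializing snapshot graphs.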
Building on these insights, the authors propose the TSQA framework, which consists of three modules: (1) extracting core keywords from the question to retrieve a focused sub‑graph, (2) a Time Estimation component that predicts the relevant timestamp using embeddings from TComplEx and a language model, and (3) an entity prediction step that uses the estimated time together with known entities to infer the missing answer.
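The three‑module flow can be sketched end to end with toy stand‑ins. Everything below is illustrative: the keyword extraction, time estimator, and entity predictor are simple lookups standing in for TSQA's learned retrieval, TComplEx + language‑model time estimation, and entity scoring.

```python
# Toy knowledge graph of (head, relation, tail, start, end) facts.
FACTS = [
    ("Barack Obama", "position held", "US President", 2009, 2017),
    ("George W. Bush", "position held", "US President", 2001, 2009),
]

def extract_keywords(question):
    # Module 1 (stand-in): crude stopword filtering in place of
    # learned keyword extraction and sub-graph retrieval.
    stop = {"who", "was", "the", "after", "before"}
    return {w for w in question.lower().rstrip("?").split() if w not in stop}

def retrieve_subgraph(keywords):
    # Keep only facts whose entities/relation mention a keyword.
    return [f for f in FACTS if keywords & set(" ".join(f[:3]).lower().split())]

def estimate_time(question, subgraph):
    # Module 2 (stand-in): resolve "after X" to the end year of X's
    # fact, in place of the TComplEx + language-model estimator.
    for head, rel, tail, start, end in subgraph:
        if head.lower() in question.lower():
            return end
    return None

def predict_entity(subgraph, t):
    # Module 3 (stand-in): the answer is the entity whose validity
    # interval starts at the estimated timestamp.
    for head, rel, tail, start, end in subgraph:
        if start == t:
            return head
    return None

q = "Who was the US President after George W. Bush?"
sub = retrieve_subgraph(extract_keywords(q))
print(predict_entity(sub, estimate_time(q, sub)))  # → Barack Obama
```

The design point this illustrates is the decoupling: the timestamp is estimated first, then used as an extra known input when scoring candidate answer entities.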
To improve time‑word sensitivity, TSQA generates contrastive question pairs (e.g., swapping "before" with "after" or "first" with "last") and trains a dual‑model architecture with two losses: a temporal order loss and an entity‑non‑overlap loss. Additionally, a time‑position embedding inspired by transformer positional encodings and a binary‑cross‑entropy loss are introduced to capture the ordering of events.
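The contrastive‑pair generation and the time‑position encoding can both be sketched briefly. The swap table follows the word pairs named above; the sinusoidal encoding mirrors the transformer‑style positional scheme the paper draws on, but the exact formula here is an assumption, not the paper's.

```python
import math

def contrastive_pair(question):
    """Flip time words to produce the contrastive counterpart of a
    question, as in TSQA's contrastive training data (sketch)."""
    swaps = {"before": "after", "after": "before",
             "first": "last", "last": "first"}
    return " ".join(swaps.get(w, w) for w in question.split())

def time_position_embedding(t, dim):
    """Sinusoidal encoding of a timestamp, analogous to transformer
    positional encodings, so the model can compare event order."""
    return [
        math.sin(t / 10000 ** (i / dim)) if i % 2 == 0
        else math.cos(t / 10000 ** ((i - 1) / dim))
        for i in range(dim)
    ]

print(contrastive_pair("Who held the office before Obama"))
# → Who held the office after Obama
```

During training, the original and flipped questions have different gold answers, which is what the temporal order loss and the entity‑non‑overlap loss exploit: the model is penalized if both versions of the question score the same entities highly.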
Experiments on a benchmark dataset from an ACL‑2021 paper compare TSQA with baseline models (EmbedKGQA, T‑EaE‑add, T‑EaE‑replace, CronKGQA). TSQA achieves the largest overall gain, reducing errors on complex queries by 32%. Ablation studies reveal that the Neighbor Graph selection and the Time Estimation module contribute most to performance.
In conclusion, TSQA enhances temporal sensitivity in TKG‑based QA by jointly improving time reasoning, time‑word awareness, and temporal representation learning, demonstrating the effectiveness of the proposed modules across various query types.
DataFunSummit