Right Answer at the Right Time - Temporal Retrieval-Augmented Generation via Graph Summarization

📅 2025-10-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing RAG methods for temporal knowledge graph (TKG) question answering suffer from two key limitations, temporal inconsistency and low inference efficiency, both stemming from a semantic-matching paradigm that neglects explicit temporal constraints and therefore yields temporally mismatched answers and excessive token consumption. To address these issues, we propose STAR-RAG, the first framework to integrate time-aligned rule graph modeling with a lightweight graph propagation mechanism. During retrieval, it jointly optimizes semantic relevance and temporal proximity, and applies retrieval-space pruning to compress the candidate evidence. STAR-RAG requires no large language model fine-tuning and reduces average token usage by 32.7%. On multiple real-world TKG benchmarks, it improves answer accuracy by 4.2–8.9 percentage points, achieving both high efficiency and strong temporal consistency.
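The joint optimization of semantic relevance and temporal proximity described above can be sketched as a simple scoring rule. This is an illustrative reconstruction, not the paper's actual implementation: the exponential decay weighting, the `decay` parameter, and the function names are assumptions.

```python
import math

def temporal_score(semantic_sim, fact_time, query_time, decay=0.1):
    """Combine semantic similarity with temporal proximity.

    Facts dated far from the query time are exponentially down-weighted,
    so a semantically similar but temporally mismatched fact ranks below
    a time-consistent one. (Hypothetical scoring form, for illustration.)
    """
    return semantic_sim * math.exp(-decay * abs(fact_time - query_time))

def retrieve(candidates, query_time, top_k=2):
    """Rank candidate facts by the joint score and keep only the top_k,
    mimicking the retrieval-space pruning that compresses the evidence set."""
    ranked = sorted(
        candidates,
        key=lambda c: temporal_score(c["sim"], c["time"], query_time),
        reverse=True,
    )
    return ranked[:top_k]

# A highly similar but stale fact (sim 0.9, year 2005) is outranked by
# time-consistent evidence when the question concerns 2015.
facts = [
    {"sim": 0.9, "time": 2005},
    {"sim": 0.8, "time": 2015},
    {"sim": 0.5, "time": 2014},
]
top = retrieve(facts, query_time=2015, top_k=2)
```

Under this toy scoring, the 2015 and 2014 facts survive pruning while the semantically strongest but temporally distant 2005 fact is discarded, which is the behavior the summary attributes to temporal-proximity-aware retrieval.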

📝 Abstract
Question answering over temporal knowledge graphs requires retrieval that is both time-consistent and efficient. Existing RAG methods are largely semantic and typically neglect explicit temporal constraints, which leads to time-inconsistent answers and inflated token usage. We propose STAR-RAG, a temporal GraphRAG framework built on two key ideas: constructing a time-aligned rule graph and propagating over this graph to narrow the search space and prioritize semantically relevant, time-consistent evidence. This design enforces temporal proximity during retrieval, shrinks the candidate set of retrieved results, and lowers token consumption without sacrificing accuracy. Compared with existing temporal RAG approaches, STAR-RAG eliminates the need for heavy model training and fine-tuning, thereby reducing computational cost and significantly simplifying deployment. Extensive experiments on real-world temporal KG datasets show that our method achieves improved answer accuracy while consuming fewer tokens than strong GraphRAG baselines.
Problem

Research questions and friction points this paper is trying to address.

Ensuring time-consistent question answering over temporal knowledge graphs
Reducing token consumption and shrinking the retrieval candidate set
Avoiding heavy model training while maintaining high answer accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Builds a time-aligned rule graph for retrieval
Propagates over the graph to prioritize time-consistent evidence
Eliminates heavy model training and fine-tuning
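The propagation idea in the bullets above can be illustrated as bounded spreading activation over a timestamped graph, where only edges temporally close to the query are followed. This is a minimal sketch under my own assumptions (adjacency-list representation, a fixed time `window`, a hop limit); the paper's actual rule-graph structure and propagation mechanism may differ.

```python
from collections import deque

def propagate(rule_graph, seeds, query_time, window=2, max_hops=2):
    """Spread activation from seed entities over a timestamped graph.

    rule_graph maps a node to a list of (neighbor, edge_timestamp) pairs.
    Only edges whose timestamp lies within `window` of the query time are
    traversed, so nodes reachable solely via temporally distant edges are
    never visited; everything outside the returned set is pruned from the
    retrieval candidates. (Illustrative sketch, not the paper's algorithm.)
    """
    reached = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for neighbor, t in rule_graph.get(node, []):
            if abs(t - query_time) <= window and neighbor not in reached:
                reached.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return reached

# Edge A->C is dated 2000, far from the 2015 query, so C (and anything
# behind it) drops out of the search space; A->B->D stays reachable.
graph = {"A": [("B", 2015), ("C", 2000)], "B": [("D", 2016)]}
kept = propagate(graph, seeds={"A"}, query_time=2015)
```

Constraining traversal this way is one concrete route to the paper's stated effect: the candidate evidence set shrinks before any LLM call, which is where the token savings would come from.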