🤖 AI Summary
Existing RAG methods for temporal knowledge graph (TKG) question answering suffer from two key limitations: temporal inconsistency and low inference efficiency. Both stem from a semantic-matching paradigm that neglects explicit temporal constraints, which leads to temporally mismatched answers and excessive token consumption. To address these issues, we propose STAR-RAG, the first framework to integrate time-aligned rule-graph modeling with a lightweight graph propagation mechanism. During retrieval, it jointly optimizes semantic relevance and temporal proximity, and it prunes the retrieval space to compress the candidate evidence. STAR-RAG requires no fine-tuning of the large language model and reduces average token usage by 32.7%. On multiple real-world TKG benchmarks, it improves answer accuracy by 4.2–8.9 percentage points, achieving both high efficiency and strong temporal consistency.
📝 Abstract
Question answering over temporal knowledge graphs requires retrieval that is both time-consistent and efficient. Existing RAG methods are largely semantic and typically neglect explicit temporal constraints, which leads to time-inconsistent answers and inflated token usage. We propose STAR-RAG, a temporal GraphRAG framework built on two key ideas: constructing a time-aligned rule graph and propagating over this graph to narrow the search space and prioritize semantically relevant, time-consistent evidence. This design enforces temporal proximity during retrieval, shrinks the candidate set, and lowers token consumption without sacrificing accuracy. Compared with existing temporal RAG approaches, STAR-RAG eliminates the need for heavy model training and fine-tuning, thereby reducing computational cost and significantly simplifying deployment. Extensive experiments on real-world temporal KG datasets show that our method achieves improved answer accuracy while consuming fewer tokens than strong GraphRAG baselines.
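The core retrieval idea, jointly scoring semantic relevance and temporal proximity and then pruning the candidate set, can be illustrated with a minimal sketch. The exponential decay, the blending weight `alpha`, and all names below are illustrative assumptions, not STAR-RAG's actual formulation:

```python
import math

# Illustrative sketch only: the scoring form, decay rate, and weight
# are assumptions for exposition, not the paper's actual method.

def temporal_score(fact_time: float, query_time: float, decay: float = 0.1) -> float:
    """Temporal proximity as exponential decay in the time gap (assumed form)."""
    return math.exp(-decay * abs(fact_time - query_time))

def retrieve(candidates, query_time, alpha=0.5, top_k=3):
    """Rank candidate facts by a weighted blend of semantic similarity and
    temporal proximity, keeping only top_k (retrieval-space pruning)."""
    scored = [
        (alpha * sem + (1 - alpha) * temporal_score(t, query_time), fact)
        for fact, sem, t in candidates
    ]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [fact for _, fact in scored[:top_k]]

# Candidates as (fact, semantic_similarity, timestamp); toy data.
candidates = [
    ("A visited B in 2014", 0.90, 2014),
    ("A visited B in 2020", 0.88, 2020),
    ("A met C in 2019",     0.60, 2019),
    ("A met D in 2001",     0.85, 2001),
]

# Time-aware scoring prefers the 2020 fact over the semantically
# stronger but temporally distant 2014 fact.
print(retrieve(candidates, query_time=2019, top_k=2))
```

Note how pure semantic ranking would place the 2014 fact first; adding the temporal term demotes it in favor of evidence near the query time, which is the kind of time-consistent pruning the abstract describes.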