RECIPE-TKG: From Sparse History to Structured Reasoning for LLM-based Temporal Knowledge Graph Completion

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) applied to temporal knowledge graph (TKG) completion depend heavily on supervision and generalize poorly when historical facts are sparse. This paper addresses both issues with a rule-driven, three-stage framework: (1) multi-hop retrieval grounded in logical rules to improve evidence acquisition in low-resource settings; (2) lightweight adapter-based contrastive fine-tuning for more robust temporal semantic modeling; and (3) embedding-similarity-guided semantic filtering at test time to suppress hallucination. Evaluated on four TKG benchmarks, the method achieves up to a 30.6% relative improvement in Hits@10, markedly improving the semantic consistency of predictions and generalization on historically sparse samples. The framework establishes an interpretable, robust paradigm for few-shot temporal reasoning.
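The first stage retrieves multi-hop history around the query entity. The paper's logical rules are not reproduced here; as a rough illustration of the retrieval idea, below is a breadth-first sketch over timestamped facts. The `multi_hop_retrieve` helper and its `(head, relation, tail, timestamp)` fact format are assumptions for illustration, not the authors' code; a real implementation would use the paper's rules to constrain which relations may be chained.

```python
from collections import deque

def multi_hop_retrieve(facts, start_entity, max_hops=2):
    """Collect timestamped facts reachable within max_hops of the query
    entity via breadth-first search. Each fact is a tuple
    (head, relation, tail, timestamp)."""
    # Index facts by the entities they touch.
    neighbors = {}
    for h, r, t, ts in facts:
        neighbors.setdefault(h, []).append((h, r, t, ts))
        neighbors.setdefault(t, []).append((h, r, t, ts))

    seen, retrieved = {start_entity}, []
    frontier = deque([(start_entity, 0)])
    while frontier:
        entity, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for fact in neighbors.get(entity, []):
            if fact not in retrieved:
                retrieved.append(fact)
            # Expand the frontier through both endpoints of the fact.
            for nxt in (fact[0], fact[2]):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, hops + 1))
    return retrieved
```

With `max_hops=2`, facts three or more hops away from the query entity are never collected, which keeps the retrieved history compact even on large graphs.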

📝 Abstract
Temporal Knowledge Graphs (TKGs) represent dynamic facts as timestamped relations between entities. TKG completion involves forecasting missing or future links, requiring models to reason over time-evolving structure. While LLMs show promise for this task, existing approaches often overemphasize supervised fine-tuning and struggle particularly when historical evidence is limited or missing. We introduce RECIPE-TKG, a lightweight and data-efficient framework designed to improve accuracy and generalization in settings with sparse historical context. It combines (1) rule-based multi-hop retrieval for structurally diverse history, (2) contrastive fine-tuning of lightweight adapters to encode relational semantics, and (3) test-time semantic filtering to iteratively refine generations based on embedding similarity. Experiments on four TKG benchmarks show that RECIPE-TKG outperforms previous LLM-based approaches, achieving up to 30.6% relative improvement in Hits@10. Moreover, our proposed framework produces more semantically coherent predictions, even for samples with limited historical context.
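The test-time filtering stage scores generated candidates by embedding similarity to the query context and discards likely hallucinations. The abstract does not spell out the exact procedure, so the following is a minimal cosine-similarity sketch; the `semantic_filter` helper, the threshold value, and the pluggable `embed` function are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_filter(candidates, query_emb, embed, threshold=0.5):
    """Keep only generated candidate entities whose embedding is
    sufficiently similar to the query context embedding.

    candidates: list of generated entity strings
    query_emb:  embedding of the query context (np.ndarray)
    embed:      callable mapping a string to an embedding vector
    """
    kept = []
    for cand in candidates:
        if cosine_sim(embed(cand), query_emb) >= threshold:
            kept.append(cand)
    return kept
```

In the paper's setting this check runs iteratively: rejected generations trigger further sampling until enough semantically consistent candidates survive.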
Problem

Research questions and friction points this paper is trying to address.

Forecasting missing links in Temporal Knowledge Graphs (TKGs)
Addressing sparse or missing historical evidence in TKG completion
Improving accuracy and generalization for LLM-based TKG reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rule-based multi-hop retrieval for diverse history
Contrastive fine-tuning of lightweight adapters
Test-time semantic filtering for iterative refinement
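For the contrastive fine-tuning of the adapters, the exact objective is not given in this summary; an InfoNCE-style loss is a common choice for pulling an anchor embedding toward a related fact and away from unrelated ones. A minimal numpy sketch under that assumption (illustrative, not the authors' implementation):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss on L2-normalized embeddings:
    pull the anchor toward the positive, push it from the negatives."""
    def norm(v):
        return v / np.linalg.norm(v)

    anchor = norm(anchor)
    # Similarities: positive at index 0, negatives after it.
    sims = [np.dot(anchor, norm(positive))]
    sims += [np.dot(anchor, norm(n)) for n in negatives]
    logits = np.array(sims) / temperature
    # Softmax cross-entropy with the positive as the correct class.
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))
```

The loss is near zero when the anchor aligns with the positive and is orthogonal to the negatives, and grows large when a negative is closer than the positive, which is the gradient signal that shapes the adapter's relational embedding space.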