Ignite Forecasting with SPARK: An Efficient Generative Framework for Refining LLMs in Temporal Knowledge Graph Forecasting

📅 2025-03-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses three key bottlenecks in applying large language models (LLMs) to temporal knowledge graph (TKG) forecasting: limited input length, low generation efficiency, and the prohibitive cost of full-parameter fine-tuning. To this end, the authors propose SPARK, a lightweight, plug-and-play framework. Methodologically, SPARK introduces (1) a sequence-level beam search generation paradigm that reformulates TKG prediction as a top-K sequence generation task, producing an entity distribution in a single forward pass; and (2) a trainable adapter grounded in classical TKG models (e.g., TransE, ComplEx) that injects global structural priors, eliminating the need for full LLM fine-tuning. Experiments show that SPARK consistently outperforms state-of-the-art methods across multiple benchmarks, achieves a 3.2× inference speedup, reduces GPU memory consumption by 68%, and exhibits strong generalization and zero-shot transfer capability.
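The top-K sequence-generation idea can be illustrated with a minimal toy sketch (not the paper's implementation): partial entity-name sequences are expanded step by step and only the top-K survive, yielding a ranked candidate distribution. The token distributions below are made-up placeholders standing in for LLM outputs.

```python
import math

def beam_search(step_logprobs, k):
    """Toy sequence-level beam search: expand the current top-k partial
    sequences at each step and keep the k best by cumulative log-prob.
    `step_logprobs[t]` maps each token to its log-probability at step t
    (a stand-in for the LLM's next-token distribution)."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-prob)
    for logprobs in step_logprobs:
        candidates = [
            (seq + [tok], score + lp)
            for seq, score in beams
            for tok, lp in logprobs.items()
        ]
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams

# Hypothetical 2-step vocabulary of entity-name tokens.
steps = [
    {"Ber": math.log(0.6), "Par": math.log(0.4)},
    {"lin": math.log(0.7), "is": math.log(0.3)},
]
top_k = beam_search(steps, k=2)  # ranked (sequence, log-prob) pairs
```

In SPARK the surviving beams correspond to candidate entity names, so the beam scores directly give a ranked entity distribution rather than a single greedy answer.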

πŸ“ Abstract
Temporal Knowledge Graph (TKG) forecasting is crucial for predicting future events using historical data. With the surge of Large Language Models (LLMs), recent studies have begun exploring their integration into TKG forecasting and achieved some success. However, they still face limitations such as limited input length, inefficient output generation, and resource-intensive refinement, which undermine their performance and practical applicability. To address these limitations, we introduce SPARK, a Sequence-level Proxy-Adapting framework for Refining LLMs in TKG forecasting. Inspired by inference-time algorithms adopted in controlling generation, SPARK offers a cost-effective, plug-and-play solution through two key innovations: (1) Beam Sequence-Level Generation, which reframes TKG forecasting as a top-K sequence-level generation task, using beam search for efficiently generating next-entity distribution in a single forward pass. (2) TKG Adapter for Refinement, which employs traditional TKG models as trainable proxy adapters to leverage global graph information and refine LLM outputs, overcoming both the input length and the resource-intensive fine-tuning problems. Experiments across diverse datasets validate SPARK's forecasting performance, robust generalization capabilities, and high efficiency. We release source codes at https://github.com/yin-gz/SPARK.
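A minimal sketch of the adapter-based refinement idea, under assumed simplifications: a classical TKG score (TransE here) is linearly blended with the LLM's candidate log-probabilities, with `alpha` standing in for the trainable adapter weights; all embeddings, entity names, and values are hypothetical.

```python
import math

def transe_score(h, r, t):
    """TransE plausibility: higher (closer to 0) when h + r ≈ t."""
    return -math.dist([hi + ri for hi, ri in zip(h, r)], t)

def refine(llm_logprobs, emb, rel, head, alpha=0.5):
    """Blend LLM candidate log-probs with TransE scores to inject
    global graph structure; `alpha` is a stand-in for the trained
    adapter (fixed here for illustration)."""
    fused = {
        ent: (1 - alpha) * lp + alpha * transe_score(emb[head], rel, emb[ent])
        for ent, lp in llm_logprobs.items()
    }
    return sorted(fused, key=fused.get, reverse=True)

# Toy 2-d embeddings: the relation vector moves "head" exactly onto "A".
emb = {"head": [0.0, 0.0], "A": [1.0, 0.0], "B": [3.0, 0.0]}
rel = [1.0, 0.0]
llm = {"A": math.log(0.4), "B": math.log(0.6)}  # LLM slightly prefers B
ranking = refine(llm, emb, rel, "head")  # graph prior promotes A
```

Because only the small TKG model and the blending weights are trained, the LLM itself stays frozen, which is what makes the approach plug-and-play and avoids full fine-tuning.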
Problem

Research questions and friction points this paper is trying to address.

Overcoming limited input length in LLMs for TKG forecasting
Improving inefficient output generation in TKG forecasting
Reducing resource-intensive refinement of LLMs for TKG forecasting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Beam Sequence-Level Generation for efficient forecasting
TKG Adapter leverages global graph information
Plug-and-play solution for refining LLMs
Gongzhu Yin
School of Cyberspace Science, Harbin Institute of Technology, Harbin, China
Hongli Zhang
School of Cyberspace Science, Harbin Institute of Technology, Harbin, China
Yi Luo
School of Cyberspace Science, Harbin Institute of Technology, Harbin, China
Yuchen Yang
School of Cyberspace Science, Harbin Institute of Technology, Harbin, China
Kun Lu
University of Alabama
Applied natural language processing · Large language models · Text mining
Chao Meng
School of Cyberspace Science, Harbin Institute of Technology, Harbin, China