Enhancing Large Language Models for Time-Series Forecasting via Vector-Injected In-Context Learning

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that large language models (LLMs) perform poorly on time series forecasting because of the distributional mismatch between their pretraining corpora and temporal sequences, while full-parameter fine-tuning incurs prohibitive computational cost. To overcome this, the authors propose Latent Vector-Injected Context Learning (LVICL), which freezes all LLM parameters and instead trains a lightweight context vector adapter to adaptively extract task-relevant information from in-context examples. The resulting context vector is injected into multiple layers of the model during the forward pass, without extending the prompt. LVICL substantially improves performance across diverse time series forecasting benchmarks while avoiding the overhead of conventional fine-tuning, effectively unlocking the in-context learning capabilities of frozen LLMs.

📝 Abstract
The World Wide Web needs reliable predictive capabilities to respond to changes in user behavior and usage patterns, and time series forecasting (TSF) is a key means to that end. In recent years, large language models (LLMs) applied to TSF (LLM4TSF) have achieved strong performance. However, pretraining corpora differ significantly from time series data, so directly applying LLMs to TSF cannot guarantee forecasting quality; fine-tuning LLMs can mitigate this issue but often incurs substantial computational overhead. LLM4TSF thus faces a dual challenge of prediction performance and compute cost. To address this, we explore a method that improves the forecasting performance of LLM4TSF while freezing all LLM parameters to keep computational overhead low. Inspired by in-context learning (ICL), we propose LVICL. LVICL uses vector-injected ICL to inject example information into a frozen LLM, eliciting its in-context learning ability and thereby enhancing its performance on the example-related task (i.e., TSF). Specifically, we first use the LLM together with a learnable context vector adapter to adaptively extract a context vector from multiple examples; this vector contains compressed, example-related information. During the forward pass, we then inject this vector into every layer of the LLM to improve forecasting performance. Unlike conventional ICL, which appends examples to the prompt, our vector-injected ICL does not increase prompt length; moreover, adaptively deriving a context vector from examples suppresses components harmful to forecasting, further improving model performance. Extensive experiments demonstrate the effectiveness of our approach.
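The mechanism described in the abstract can be sketched in a few lines: a learnable adapter compresses several examples into one context vector, which is then added to the hidden state at every frozen layer rather than appended to the prompt. This is a minimal illustrative sketch only, assuming additive injection and mean-pooled example embeddings; all names (`extract_context_vector`, `adapter_W`, etc.) are my own, and NumPy matrices stand in for a real frozen transformer.

```python
# Illustrative sketch of vector-injected ICL (not the authors' code).
# A frozen stack of layers + a single trainable adapter matrix.
import numpy as np

rng = np.random.default_rng(0)
D = 8           # hidden size of the toy "LLM"
N_LAYERS = 4    # number of frozen layers
N_EXAMPLES = 3  # in-context examples compressed into one vector

# Frozen LLM layers: fixed weights, never updated during adaptation.
frozen_layers = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_LAYERS)]

# Learnable context vector adapter: the ONLY trainable parameters.
adapter_W = rng.standard_normal((D, D)) / np.sqrt(D)

def extract_context_vector(example_embeddings):
    """Adaptively compress multiple example embeddings into one context vector."""
    pooled = example_embeddings.mean(axis=0)   # (D,) mean-pool the examples
    return np.tanh(adapter_W @ pooled)         # (D,) adapter output

def forward(x, context_vector):
    """Forward pass with the context vector injected at every frozen layer,
    instead of appending example tokens to the prompt."""
    h = x
    for W in frozen_layers:
        h = np.tanh(W @ h) + context_vector    # additive injection per layer
    return h

examples = rng.standard_normal((N_EXAMPLES, D))  # embedded TSF examples
ctx = extract_context_vector(examples)
y = forward(rng.standard_normal(D), ctx)
print(y.shape)  # (8,)
```

Note that the prompt (here, the input `x`) stays the same length regardless of how many examples are compressed into `ctx`, which is the key cost advantage over prompt-based ICL.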
Problem

Research questions and friction points this paper is trying to address.

time-series forecasting
large language models
in-context learning
computational overhead
prediction performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

vector-injected in-context learning
frozen LLM
time-series forecasting
context vector adapter
parameter-efficient learning