LoFT-LLM: Low-Frequency Time-Series Forecasting with Large Language Models

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of time-series forecasting under low-data, high-noise, and highly dynamic conditions in the finance and energy domains, this paper proposes a frequency-aware forecasting framework. Methodologically, it introduces a “low-frequency-first” paradigm: stable low-frequency trends are extracted via spectral patching, while a residual network models high-frequency noise; auxiliary variables and structured domain knowledge are integrated through fine-tuned large language models (LLMs), supported by joint time-frequency modeling and semantic calibration mechanisms. The key contributions are effective decoupling of trend and noise components, enhanced few-shot robustness, and improved interpretability. Extensive experiments on multi-source financial and energy datasets demonstrate that the proposed method consistently outperforms state-of-the-art baselines in prediction accuracy, noise resilience, and interpretability, under both full-data and few-shot settings.
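The “low-frequency-first” step can be illustrated with a small sketch: split the series into patches, keep only the lowest FFT bins of each patch to form the trend, and treat the remainder as the high-frequency residual. The patch length and number of retained bins below are illustrative choices, not the paper's hyperparameters.

```python
import numpy as np

def lowpass_patch_trend(series, patch_len=32, keep=4):
    """Extract a low-frequency trend patch by patch via the real FFT.

    For each patch, zero out all but the `keep` lowest frequency bins
    and invert the transform; the discarded bins form the residual.
    """
    series = np.asarray(series, dtype=float)
    trend = np.empty_like(series)
    for start in range(0, len(series), patch_len):
        patch = series[start:start + patch_len]
        spec = np.fft.rfft(patch)
        spec[keep:] = 0.0  # keep only the low-frequency bins
        trend[start:start + patch.size] = np.fft.irfft(spec, n=patch.size)
    return trend

# Noisy ramp: the trend should track the ramp, the residual the oscillation.
t = np.linspace(0.0, 1.0, 128)
x = 3.0 * t + 0.2 * np.sin(40 * np.pi * t)
trend = lowpass_patch_trend(x)
residual = x - trend
```

Supervising the forecaster on `trend` rather than on the raw window is the point of the low-frequency-first idea: the high-frequency residual no longer dominates the loss.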

📝 Abstract
Time-series forecasting in real-world applications such as finance and energy often faces challenges due to limited training data and complex, noisy temporal dynamics. Existing deep forecasting models typically supervise predictions using full-length temporal windows, which include substantial high-frequency noise and obscure long-term trends. Moreover, auxiliary variables containing rich domain-specific information are often underutilized, especially in few-shot settings. To address these challenges, we propose LoFT-LLM, a frequency-aware forecasting pipeline that integrates low-frequency learning with semantic calibration via a large language model (LLM). First, a Patch Low-Frequency forecasting Module (PLFM) extracts stable low-frequency trends from localized spectral patches. Second, a residual learner models high-frequency variations. Finally, a fine-tuned LLM refines the predictions by incorporating auxiliary context and domain knowledge through structured natural language prompts. Extensive experiments on financial and energy datasets demonstrate that LoFT-LLM significantly outperforms strong baselines under both full-data and few-shot regimes, delivering superior accuracy, robustness, and interpretability.
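The trend-plus-residual decomposition behind the first two stages can be sketched as follows. A moving average stands in for the PLFM trend extractor and a least-squares AR(1) fit stands in for the residual learner; both are simplified stand-ins, not the paper's actual modules.

```python
import numpy as np

def forecast_trend_plus_residual(series, window=16, horizon=8):
    """Toy decomposition forecast: hold the trend, decay the residual.

    Moving average ~ low-frequency trend (stand-in for PLFM);
    AR(1) on the remainder ~ high-frequency residual learner.
    """
    series = np.asarray(series, dtype=float)
    # Low-frequency trend via an edge-padded moving average.
    pad = (window // 2, window - window // 2 - 1)
    kernel = np.ones(window) / window
    trend = np.convolve(np.pad(series, pad, mode="edge"), kernel, mode="valid")
    residual = series - trend

    # Least-squares AR(1) on the residual: r[t] ≈ a * r[t-1].
    a = residual[:-1] @ residual[1:] / (residual[:-1] @ residual[:-1] + 1e-12)

    # Extrapolate: hold the last trend value, decay the residual by a.
    r, preds = residual[-1], []
    for _ in range(horizon):
        r *= a
        preds.append(trend[-1] + r)
    return np.array(preds)
```

In LoFT-LLM these two stages are learned networks; the point here is only the shape of the pipeline: the final forecast is the sum of a stable trend forecast and a separately modeled residual correction.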
Problem

Research questions and friction points this paper is trying to address.

Forecasting with limited data and noisy dynamics
Underutilized auxiliary variables in few-shot settings
Difficulty integrating low-frequency trends with semantic calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-frequency trend extraction from spectral patches
Residual learner models high-frequency variations
LLM refines predictions with auxiliary context
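The third stage serializes the preliminary forecast and auxiliary variables into a structured natural-language prompt for the fine-tuned LLM. The field names and wording below are illustrative; the paper's actual prompt template is not reproduced here.

```python
def build_calibration_prompt(preds, aux):
    """Assemble a structured prompt asking an LLM to refine a forecast.

    `preds` is the trend-plus-residual forecast; `aux` maps auxiliary
    variable names to their current values (hypothetical field names).
    """
    lines = [
        "Task: refine the numeric forecast below using the given context.",
        "Preliminary forecast: " + ", ".join(f"{p:.2f}" for p in preds),
        "Auxiliary context:",
    ]
    for name, value in aux.items():
        lines.append(f"- {name}: {value}")
    lines.append("Return the refined forecast as a comma-separated list.")
    return "\n".join(lines)

prompt = build_calibration_prompt(
    [101.37, 102.05, 102.41],
    {"oil_price_usd": 82.5, "regional_demand_note": "cold front expected"},
)
```

Structuring the prompt this way lets domain knowledge enter as readable fields, which is also what makes the calibration step inspectable.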