SVTime: Small Time Series Forecasting Models Informed by "Physics" of Large Vision Model Forecasters

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large models for long-term time series forecasting suffer from high energy consumption and hardware demands, while lightweight models often lack sufficient predictive performance. Method: the paper proposes a physics-informed lightweight modeling paradigm in which interpretable inductive biases, such as temporal smoothness and local-global coupling, are distilled from the forecasting behavior of large vision models and encoded as prior structures built from linear layers and constraint functions; knowledge distillation is further integrated for parameter-efficient optimization. Contribution/Results: the resulting model achieves accuracy competitive with state-of-the-art large models across eight benchmark datasets while using roughly 10³× fewer parameters, and it significantly improves training and inference efficiency, offering a scalable, resource-efficient solution for high-performance time series forecasting in low-resource settings.
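The paper does not publish its exact constraint functions here, so the following is a minimal, hypothetical sketch of the general idea: a single linear layer maps a lookback window to a forecast horizon, and a "temporal smoothness" constraint is expressed as a penalty on the second differences of the forecast (all names, sizes, and the choice of penalty are assumptions for illustration, not the paper's implementation).

```python
import numpy as np

def linear_forecast(x, W, b):
    """One linear layer mapping a lookback window x (length L)
    to a horizon-H forecast: y = W @ x + b."""
    return W @ x + b

def smoothness_penalty(y):
    """Temporal-smoothness constraint: sum of squared second
    differences of the forecast (large value = jagged output)."""
    d2 = np.diff(y, n=2)
    return float(np.sum(d2 ** 2))

rng = np.random.default_rng(0)
L, H = 96, 24                                # assumed lookback / horizon
W = rng.normal(scale=0.01, size=(H, L))      # linear-layer weights
b = np.zeros(H)                              # bias

x = np.sin(np.linspace(0, 4 * np.pi, L))     # toy input window
y = linear_forecast(x, W, b)                 # H-step forecast
loss = smoothness_penalty(y)                 # would be added to the data loss
```

In a training loop, such a penalty would be weighted and added to the usual forecasting loss (e.g. MSE), steering the small model toward the smooth behavior observed in LVM forecasters.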

📝 Abstract
Time series AI is crucial for analyzing dynamic web content, driving a surge of pre-trained large models known for their strong knowledge encoding and transfer capabilities across diverse tasks. However, given their energy-intensive training, inference, and hardware demands, using large models as a one-fits-all solution raises serious concerns about carbon footprint and sustainability. For a specific task, a compact yet specialized, high-performing model may be more practical and affordable, especially for resource-constrained users such as small businesses. This motivates the question: Can we build cost-effective lightweight models with large-model-like performance on core tasks such as forecasting? This paper addresses this question by introducing SVTime, a novel Small model inspired by large Vision model (LVM) forecasters for long-term Time series forecasting (LTSF). Recently, LVMs have been shown as powerful tools for LTSF. We identify a set of key inductive biases of LVM forecasters -- analogous to the "physics" governing their behaviors in LTSF -- and design small models that encode these biases through meticulously crafted linear layers and constraint functions. Across 21 baselines spanning lightweight, complex, and pre-trained large models on 8 benchmark datasets, SVTime outperforms state-of-the-art (SOTA) lightweight models and rivals large models with 10^3 fewer parameters than LVMs, while enabling efficient training and inference in low-resource settings.
Problem

Research questions and friction points this paper is trying to address.

Developing lightweight time series models with large-model performance
Reducing computational costs while maintaining forecasting accuracy
Encoding inductive biases from vision models into compact architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Small models encode LVM inductive biases
Linear layers and constraint functions implement the "physics"
Efficient training with 10^3 fewer parameters than LVMs
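The summary also mentions knowledge distillation for parameter-efficient optimization. A common formulation, shown here as a hedged sketch (the blending weight `alpha` and the use of plain MSE for both terms are assumptions, not details taken from the paper), mixes a supervised loss with a term that matches the student's forecast to the large teacher model's forecast:

```python
import numpy as np

def distillation_loss(y_student, y_teacher, y_true, alpha=0.5):
    """Blend a supervised MSE term with a teacher-matching MSE term.
    alpha (assumed hyperparameter) weights the teacher signal."""
    supervised = np.mean((y_student - y_true) ** 2)
    teacher_match = np.mean((y_student - y_teacher) ** 2)
    return (1 - alpha) * supervised + alpha * teacher_match
```

With `alpha=0` this reduces to ordinary supervised training; with `alpha=1` the small model purely imitates the LVM forecaster's outputs.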