Spectral Predictability as a Fast Reliability Indicator for Time Series Forecasting Model Selection

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Time-series forecasting model selection faces a fundamental trade-off between high validation cost and performance risk. To address this, we propose Ω, a lightweight, spectrum-based predictability metric, as an a priori criterion for model selection. Ω systematically bridges spectral analysis and model choice for the first time: it quantifies the intrinsic predictability of a time series within seconds via frequency-domain analysis and requires no model training. High-Ω series favor large time-series foundation models, whereas low-Ω series achieve superior efficiency-accuracy trade-offs with lightweight models. Extensive experiments across 28 cross-domain datasets, 51 models, and the GIFT-Eval benchmark show that Ω correlates strongly with downstream model performance, enabling effective performance stratification and reducing validation overhead by up to 70%. Grounded in signal-processing theory, Ω offers interpretability, computational efficiency, and broad generalizability, establishing the first low-cost, theory-driven paradigm for practical time-series model selection.
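The summary describes Ω as a training-free, frequency-domain predictability score computable in seconds. The paper's exact definition is not reproduced here; the sketch below uses one common formulation from the spectral-analysis literature, one minus the normalized entropy of the periodogram, so a flat (noise-like) spectrum gives Ω near 0 and a concentrated spectrum gives Ω near 1. The function name and this specific formula are assumptions, not the paper's published method.

```python
import numpy as np

def spectral_predictability(x: np.ndarray) -> float:
    """Hypothetical Omega estimate in [0, 1]: one minus the normalized
    spectral entropy of the periodogram (the paper's exact definition
    may differ)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # remove the DC component
    psd = np.abs(np.fft.rfft(x)) ** 2     # periodogram estimate
    psd = psd[1:]                         # drop the zero-frequency bin
    p = psd / psd.sum()                   # normalize to a distribution
    p = p[p > 0]                          # guard against log(0)
    entropy = -(p * np.log(p)).sum()
    return 1.0 - entropy / np.log(len(psd))

rng = np.random.default_rng(0)
t = np.arange(1024)
noise = rng.standard_normal(1024)        # flat spectrum -> low Omega
sine = np.sin(2 * np.pi * t / 32)        # single spectral peak -> high Omega
print(spectral_predictability(sine) > spectral_predictability(noise))  # True
```

A single FFT per series is what keeps the cost at "seconds per dataset": no model is trained, and the score depends only on how concentrated the signal's energy is in frequency space.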

📝 Abstract
Practitioners deploying time series forecasting models face a dilemma: exhaustively validating dozens of models is computationally prohibitive, yet choosing the wrong model risks poor performance. We show that spectral predictability $\Omega$, a simple signal-processing metric, systematically stratifies model family performance, enabling fast model selection. We conduct controlled experiments in four different domains, then further expand our analysis to 51 models and 28 datasets from the GIFT-Eval benchmark. We find that large time series foundation models (TSFMs) systematically outperform lightweight task-trained baselines when $\Omega$ is high, while their advantage vanishes as $\Omega$ drops. Computing $\Omega$ takes seconds per dataset, enabling practitioners to quickly assess whether their data suits TSFM approaches or whether simpler, cheaper models suffice. We demonstrate that $\Omega$ stratifies model performance predictably, offering a practical first-pass filter that reduces validation costs while highlighting the need for models that excel on genuinely difficult (low-$\Omega$) problems rather than merely optimizing easy ones.
Problem

Research questions and friction points this paper is trying to address.

Exhaustively validating dozens of candidate forecasting models is computationally prohibitive
Choosing a model without validation risks poor downstream performance
Practitioners lack a fast, a priori criterion for deciding whether a dataset suits large foundation models or simpler baselines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spectral predictability Ω serves as a training-free criterion for model selection
Computing Ω takes seconds per dataset via frequency-domain analysis
Ω stratifies model-family performance, reducing validation overhead by up to 70%
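The stratification above implies a simple first-pass filter: score a dataset once, then decide which model family to validate. A minimal sketch of that routing rule follows; the 0.5 threshold and the function name are illustrative assumptions, since the paper is summarized here without a published universal cutoff.

```python
def choose_model_family(omega: float, threshold: float = 0.5) -> str:
    """Illustrative first-pass filter. The 0.5 threshold is an assumption,
    not a value taken from the paper: high-Omega data favors large
    time-series foundation models (TSFMs), low-Omega data favors
    lightweight task-trained baselines."""
    return "TSFM" if omega >= threshold else "lightweight-baseline"

print(choose_model_family(0.8))  # TSFM
print(choose_model_family(0.2))  # lightweight-baseline
```

In practice the threshold would be tuned on a small held-out set, which is still far cheaper than exhaustively validating all candidate models.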