🤖 AI Summary
Time-series forecasting model selection faces a fundamental trade-off between high validation cost and performance risk. To address this, we propose Ω—a lightweight, spectrum-based predictability metric—as an a priori criterion for model selection. For the first time, Ω systematically bridges spectral analysis and model choice: it quantifies the intrinsic predictability of time-series data within seconds via frequency-domain analysis, requiring no model training. High-Ω series favor large-scale time-series foundation models, whereas low-Ω series achieve superior efficiency–accuracy trade-offs with lightweight models. Extensive experiments across 28 cross-domain datasets, 51 models, and the GIFT-Eval benchmark demonstrate that Ω exhibits strong correlation with downstream model performance, enabling effective performance stratification and reducing validation overhead by up to 70%. Theoretically grounded in signal processing, Ω offers interpretability, computational efficiency, and broad generalizability—establishing the first low-cost, theory-driven paradigm for practical time-series model selection.
📝 Abstract
Practitioners deploying time series forecasting models face a dilemma: exhaustively validating dozens of models is computationally prohibitive, yet choosing the wrong model risks poor performance. We show that spectral predictability $\Omega$ -- a simple signal processing metric -- systematically stratifies model family performance, enabling fast model selection. We conduct controlled experiments in four different domains, then further expand our analysis to 51 models and 28 datasets from the GIFT-Eval benchmark. We find that large time series foundation models (TSFMs) systematically outperform lightweight task-trained baselines when $\Omega$ is high, while their advantage vanishes as $\Omega$ drops. Computing $\Omega$ takes seconds per dataset, enabling practitioners to quickly assess whether their data suits TSFM approaches or whether simpler, cheaper models suffice. We demonstrate that $\Omega$ stratifies model performance predictably, offering a practical first-pass filter that reduces validation costs while highlighting the need for models that excel on genuinely difficult (low-$\Omega$) problems rather than merely optimizing easy ones.
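The abstract does not give the exact definition of $\Omega$, but a spectrum-based predictability score of this kind can be sketched as the fraction of a series' spectral power concentrated in its strongest frequency bins. The function name, the top-$k$ parameterization, and the example signals below are illustrative assumptions, not the paper's actual formula:

```python
import numpy as np

def spectral_predictability(x: np.ndarray, k: int = 3) -> float:
    """Illustrative stand-in for a spectrum-based predictability score:
    the fraction of (non-DC) spectral power held by the k strongest bins.
    NOTE: assumed definition for illustration, not the paper's exact Omega.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                       # remove the DC component
    power = np.abs(np.fft.rfft(x)) ** 2   # one-sided power spectrum
    power = power[1:]                      # drop the zero-frequency bin
    total = power.sum()
    if total == 0.0:
        return 0.0                         # constant series: no structure
    top_k = np.sort(power)[-k:].sum()
    return float(top_k / total)

# A pure periodic signal concentrates power in few bins (score near 1),
# while white noise spreads power across all bins (score near 0).
t = np.arange(1024)
omega_sine = spectral_predictability(np.sin(2 * np.pi * t / 64))
omega_noise = spectral_predictability(np.random.default_rng(0).normal(size=1024))
```

A score like this is cheap (one FFT per series), which matches the paper's point that the metric can be computed in seconds and used as a first-pass filter before any model training.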