🤖 AI Summary
This work proposes Evolutionary Forecasting (EF), a novel paradigm for long-term time series prediction that overcomes key limitations of existing direct forecasting approaches. Conventional methods require separate models for each prediction horizon and suffer from gradient conflicts that impede local dynamics modeling at longer ranges. In contrast, EF leverages short-horizon training combined with a recursive evolution mechanism to generate long-term forecasts, subsuming direct forecasting as a degenerate special case within a unified generative framework. Theoretical analysis reveals the counterintuitive advantage of short-horizon training over long-horizon alternatives. EF thus represents a paradigm shift from static mapping to autonomous evolutionary inference. Empirical results demonstrate that a single EF model outperforms ensembles of task-specific direct forecasting models on standard benchmarks and exhibits superior asymptotic stability under extreme extrapolation scenarios.
📝 Abstract
The prevailing Direct Forecasting (DF) paradigm dominates Long-term Time Series Forecasting (LTSF) by forcing models to predict the entire future horizon in a single forward pass. While efficient, this rigid coupling of output and evaluation horizons necessitates computationally prohibitive re-training for every target horizon. In this work, we uncover a counterintuitive optimization anomaly: models trained on short horizons, when coupled with our proposed Evolutionary Forecasting (EF) paradigm, significantly outperform those trained directly on long horizons. We attribute this success to the mitigation of a fundamental optimization pathology inherent in DF, where conflicting gradients from distant futures cripple the learning of local dynamics. We establish EF as a unified generative framework, proving that DF is merely a degenerate special case of EF. Extensive experiments demonstrate that a single EF model surpasses task-specific DF ensembles across standard benchmarks and exhibits robust asymptotic stability in extreme extrapolation. This work propels a paradigm shift in LTSF: moving from passive Static Mapping to autonomous Evolutionary Reasoning.
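The recursive evolution mechanism described above can be illustrated as an autoregressive rollout: a model trained to predict only a short horizon is applied repeatedly, with each prediction appended to the context window, until an arbitrarily long forecast is produced. The sketch below is a minimal illustration of this generic idea, not the paper's implementation; the names `evolve_forecast`, `step`, and the toy persistence model are all hypothetical.

```python
import numpy as np

def evolve_forecast(model, history, horizon, step):
    """Roll a short-horizon model forward to an arbitrary horizon.

    model:   maps a context window (1-D array) to its next `step` values
    history: observed series (1-D array), used as the initial context
    horizon: total number of future steps to generate
    step:    the short output horizon the model was trained on
    """
    window = len(history)
    context = np.asarray(history, dtype=float)
    preds = []
    produced = 0
    while produced < horizon:
        nxt = model(context)                      # predict the next `step` values
        preds.append(nxt)
        produced += step
        # Slide the context window: drop the oldest values, append predictions.
        context = np.concatenate([context, nxt])[-window:]
    return np.concatenate(preds)[:horizon]

# Toy stand-in model: naive persistence of the last observed value.
step = 4
toy_model = lambda ctx: np.full(step, ctx[-1])

forecast = evolve_forecast(toy_model, np.arange(16, dtype=float),
                           horizon=10, step=step)
print(forecast.shape)  # (10,)
```

Note that direct forecasting falls out as the degenerate case `step == horizon`, where the loop runs exactly once and no predictions are ever fed back, matching the paper's claim that DF is a special case of EF.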