🤖 AI Summary
This work addresses the fundamental problem of selecting the training horizon for autoregressive models of dynamical systems: too short a horizon fails to capture long-term trends, while too long a horizon makes optimization difficult due to error accumulation. We formally characterize this trade-off through the geometry of the loss landscape: for chaotic systems, training with long horizons induces exponential growth in the landscape's roughness; for limit-cycle systems, the growth is linear. The analysis integrates dynamical systems theory with optimization landscape theory. Furthermore, we demonstrate empirically that models trained with longer horizons generalize better to short-horizon forecasts. Collectively, these results yield an interpretable, generalizable principle for training-horizon selection in autoregressive forecasting; the derived error-growth laws and generalization properties are confirmed numerically across diverse dynamical systems, including chaotic, limit-cycle, and quasi-periodic regimes.
📝 Abstract
When training autoregressive models for dynamical systems, a critical question arises: how far into the future should the model be trained to predict? Too short a horizon may miss long-term trends, while too long a horizon can impede convergence due to accumulating prediction errors. In this work, we formalize this trade-off by analyzing how the geometry of the loss landscape depends on the training horizon. We prove that for chaotic systems, the loss landscape's roughness grows exponentially with the training horizon, while for limit cycles, it grows linearly, making long-horizon training inherently challenging. However, we also show that models trained on long horizons generalize well to short-term forecasts, whereas those trained on short horizons suffer exponentially (resp. linearly) worse long-term predictions in chaotic (resp. periodic) systems. We validate our theory through numerical experiments and discuss practical implications for selecting training horizons. Our results provide a principled foundation for hyperparameter optimization in autoregressive forecasting models.
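The exponential-versus-linear contrast comes from how one-step model errors compound over a rollout. A minimal, self-contained sketch (our own illustration, not the paper's experiments): a logistic map with r = 3.9 stands in for a chaotic system, a pure rotation for a limit cycle, and a small parameter offset `eps` plays the role of an imperfect learned one-step predictor.

```python
import numpy as np

# Illustrative sketch (not the paper's setup): a small one-step parameter
# error `eps` stands in for an imperfect learned model; we track how the
# rollout error grows with the prediction horizon.

eps = 1e-6

# --- Chaotic system: logistic map x_{t+1} = r x (1 - x), r = 3.9 ---
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

x_true, x_model = 0.2, 0.2
chaotic_errs = []
for t in range(30):
    x_true = logistic(x_true)                 # ground-truth dynamics
    x_model = logistic(x_model, r=3.9 + eps)  # "model" with slight parameter error
    chaotic_errs.append(abs(x_true - x_model))

# Before saturation the error grows roughly like exp(lambda * t), so the
# average per-step increase of log-error is positive (near the map's
# Lyapunov exponent, about 0.5 at r = 3.9).
chaotic_rate = float(np.mean(np.diff(np.log(chaotic_errs[:15]))))

# --- Limit cycle: pure rotation theta_{t+1} = theta_t + omega ---
omega = 0.3
th_true, th_model = 0.0, 0.0
phase_errs = []
for t in range(30):
    th_true += omega
    th_model += omega + eps  # same eps-sized one-step error
    phase_errs.append(abs(th_true - th_model))
# Here the error after t steps is eps * t: linear in the horizon.

print(f"chaotic log-error growth per step: {chaotic_rate:.2f}")
print(f"limit-cycle error after 30 steps:  {phase_errs[-1]:.1e}")
```

The same mechanism acts on the loss landscape: perturbing a parameter of the chaotic rollout changes the long-horizon loss exponentially fast, producing the ruggedness the paper proves, while the rotation's loss varies only linearly in the perturbation.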