Temporal horizons in forecasting: a performance-learnability trade-off

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of selecting the training horizon for autoregressive models that predict dynamical systems: too short a horizon misses long-term trends, while too long a horizon hampers optimization because prediction errors accumulate. The trade-off is characterized formally through the geometry of the loss landscape: in chaotic systems, training with long horizons induces exponential growth in loss ruggedness, while in limit-cycle systems the growth is linear. The analysis integrates dynamical systems theory with optimization landscape theory and is validated numerically across chaotic, limit-cycle, and quasi-periodic regimes. The authors further show empirically that models trained with longer horizons also generalize well to short-horizon forecasts. Collectively, these results yield an interpretable, generalizable principle for selecting training horizons in autoregressive forecasting.

📝 Abstract
When training autoregressive models for dynamical systems, a critical question arises: how far into the future should the model be trained to predict? Too short a horizon may miss long-term trends, while too long a horizon can impede convergence due to accumulating prediction errors. In this work, we formalize this trade-off by analyzing how the geometry of the loss landscape depends on the training horizon. We prove that for chaotic systems, the loss landscape's roughness grows exponentially with the training horizon, while for limit cycles, it grows linearly, making long-horizon training inherently challenging. However, we also show that models trained on long horizons generalize well to short-term forecasts, whereas those trained on short horizons suffer exponentially (resp. linearly) worse long-term predictions in chaotic (resp. periodic) systems. We validate our theory through numerical experiments and discuss practical implications for selecting training horizons. Our results provide a principled foundation for hyperparameter optimization in autoregressive forecasting models.
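The error-accumulation mechanism described in the abstract can be illustrated numerically. Below is a minimal sketch using the logistic map as a stand-in dynamical system; the map, the parameter values, and the `horizon_error` helper are illustrative choices, not the paper's setup. A fixed small parameter error produces trajectory errors that explode with the rollout horizon in the chaotic regime but stay bounded near a stable fixed point.

```python
import numpy as np

def rollout(r, x0, H):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t) for H steps."""
    xs, x = [], x0
    for _ in range(H):
        x = r * x * (1 - x)
        xs.append(x)
    return np.array(xs)

def horizon_error(r_true, delta, H, x0=0.5):
    """Mean absolute trajectory error of a model with parameter error
    `delta`, rolled out autoregressively for H steps against the true system."""
    target = rollout(r_true, x0, H)
    pred = rollout(r_true + delta, x0, H)
    return np.mean(np.abs(pred - target))

# r = 3.9: chaotic regime -> the error explodes as the horizon H grows.
# r = 2.8: stable fixed point -> the error stays at the scale of delta.
for r in (3.9, 2.8):
    errs = {H: horizon_error(r, 1e-6, H) for H in (5, 15, 30)}
    print(r, {H: f"{e:.1e}" for H, e in errs.items()})
```

The same blow-up applies to gradients of a long-horizon training loss, which is the intuition behind the ruggedness result: the loss at horizon H inherits the sensitivity of the H-step rollout to the model parameters.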
Problem

Research questions and friction points this paper is trying to address.

Balancing prediction horizon and model learnability in autoregressive forecasting
Analyzing loss landscape geometry's dependence on training horizon length
Comparing chaotic vs. periodic systems' long-horizon training challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes loss landscape geometry dependence on horizon
Proves roughness grows exponentially in chaotic systems
Shows long-horizon training improves short-term generalization
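A heuristic sketch of why the two regimes differ (standard dynamical-systems reasoning; the largest Lyapunov exponent $\lambda$ and constant $c$ are introduced here for illustration, not notation taken from the paper): a small model error $\delta_0$ propagated through $H$ autoregressive steps grows as

$$
\delta_H \sim \delta_0\, e^{\lambda H} \quad (\text{chaotic},\ \lambda > 0),
\qquad
\delta_H \sim \delta_0\, c\, H \quad (\text{limit cycle, phase drift}),
$$

which mirrors the exponential versus linear growth in loss ruggedness and in long-horizon prediction error described above.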
Pau Vilimelis Aceituno
Institute for Neuroinformatics, ETH Zurich
Computational Neuroscience, Machine Learning, Complex Systems
Jack William Miller
CTO & Co-Founder of HelmGuard AI (prev. ANU)
Artificial Intelligence
Noah Marti
Institute of Neuroinformatics, ETH Zürich and University of Zürich, Winterthurerstrasse 190, Zürich 8057, Switzerland
Youssef Farag
Graduate Student, ETHZ/UZH
Machine Learning, Artificial Intelligence, Computational Neuroscience
V. Boussange
Unit of Land Change Science, Swiss Federal Research Institute for Forest, Snow and Landscape, Zürcherstrasse 111, Birmensdorf 8903, Switzerland