Epistemic Error Decomposition for Multi-step Time Series Forecasting: Rethinking Bias-Variance in Recursive and Direct Strategies

📅 2025-11-14
🤖 AI Summary
This work investigates the error mechanisms underlying recursive and direct strategies in multi-step time series forecasting, challenging the conventional simplified bias-variance trade-off interpretation. Method: We propose the first rigorous three-term error decomposition framework, partitioning multi-step prediction error into irreducible noise, a structural approximation gap, and estimation variance, thereby disentangling the joint influence of model architecture, parameter estimation, and data characteristics. We analyze Jacobian amplification factors and model error propagation dynamics, and conduct empirical studies with MLPs on the ETTm1 benchmark. Contribution/Results: In nonlinear settings, we demonstrate that recursive strategies can enhance expressive power via functional composition, simultaneously achieving low bias and high variance and thereby contradicting classical trade-off assumptions. Our analysis reveals that optimal strategy selection depends critically on the interplay between model nonlinearity strength and noise level, yielding an interpretable, theoretically grounded decision criterion for practitioners.
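The recursive/direct distinction analyzed here can be sketched on a toy series. Everything below (the AR(1)-style data, the linear least-squares one-step model, the lag window) is our own illustrative stand-in, not the paper's MLP-on-ETTm1 setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy AR(1)-style series (illustration only; the paper uses MLPs on ETTm1).
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

def fit_linear(X, z):
    """Least-squares fit z ~ X @ w + b."""
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef[:-1], coef[-1]

lags, H = 3, 4                 # lag window and maximum horizon
m = n - lags - H + 1           # rows for which all H targets exist
X = np.stack([y[k:k + lags] for k in range(m)])

# Recursive strategy: one one-step model, composed H times.
w, b = fit_linear(X, y[lags:lags + m])

def recursive_forecast(x, h):
    # Feed each prediction back in as the newest lag.
    for _ in range(h):
        x = np.append(x[1:], x @ w + b)
    return x[-1]

# Direct strategy: a separate model per horizon h.
direct = {h: fit_linear(X, y[lags + h - 1:lags + h - 1 + m])
          for h in range(1, H + 1)}

truth = y[lags + H - 1:lags + H - 1 + m]
rec = np.array([recursive_forecast(x, H) for x in X])
wd, bd = direct[H]
dir_pred = X @ wd + bd
print("horizon", H,
      "recursive MSE:", np.mean((rec - truth) ** 2),
      "direct MSE:", np.mean((dir_pred - truth) ** 2))
```

Because the generating process here is linear, the two strategies land close together, which matches the claim that the structural gap vanishes for linear predictors; the interesting divergence appears only with nonlinear models.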

📝 Abstract
Multi-step forecasting is often described through a simple rule of thumb: recursive strategies are said to have high bias and low variance, while direct strategies are said to have low bias and high variance. We revisit this belief by decomposing the expected multi-step forecast error into three parts: irreducible noise, a structural approximation gap, and an estimation-variance term. For linear predictors we show that the structural gap is identically zero for any dataset. For nonlinear predictors, however, the repeated composition used in recursion can increase model expressivity, making the structural gap depend on both the model and the data. We further show that the estimation variance of the recursive strategy at any horizon can be written as the one-step variance multiplied by a Jacobian-based amplification factor that measures how sensitive the composed predictor is to parameter error. This perspective explains when recursive forecasting may simultaneously have lower bias and higher variance than direct forecasting. Experiments with multilayer perceptrons on the ETTm1 dataset confirm these findings. The results offer practical guidance for choosing between recursive and direct strategies based on model nonlinearity and noise characteristics, rather than relying on traditional bias-variance intuition.
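In symbols, the decomposition the abstract describes can be written schematically as follows; the notation is our own choice, since the page gives prose only:

```latex
\mathbb{E}\!\left[(y_{t+h}-\hat{y}_{t+h})^{2}\right]
  = \underbrace{\sigma_h^{2}}_{\text{irreducible noise}}
  + \underbrace{\bigl(f_h^{*}(x)-\bar{f}_h(x)\bigr)^{2}}_{\text{structural approximation gap}}
  + \underbrace{\mathbb{E}\!\left[\bigl(\hat{f}_h(x)-\bar{f}_h(x)\bigr)^{2}\right]}_{\text{estimation variance}},
\qquad
\operatorname{Var}_{\mathrm{rec}}(h)\;\approx\;A_h \cdot \operatorname{Var}_{\mathrm{rec}}(1),
```

where \(f_h^{*}\) is the optimal h-step predictor, \(\bar{f}_h\) the average fitted predictor, \(\hat{f}_h\) a particular fitted predictor, and \(A_h\) a Jacobian-based amplification factor measuring how sensitive the h-fold composed predictor is to parameter error.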
Problem

Research questions and friction points this paper is trying to address.

Decomposing multi-step forecast error into noise, structural gap, and variance components
Analyzing bias-variance trade-offs in recursive versus direct forecasting strategies
Providing model selection guidance based on nonlinearity and noise characteristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes forecast error into three components
Introduces Jacobian-based amplification factor for variance
Compares recursive and direct strategies for nonlinear models
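The Jacobian-based amplification idea can be probed numerically: the squared ratio of the composed predictor's parameter sensitivity to the one-step sensitivity tracks how estimation variance inflates with horizon. The tanh one-step model and the finite-difference estimator below are our own stand-ins, not the paper's construction:

```python
import numpy as np

def f(x, theta):
    # A deliberately tiny nonlinear one-step predictor (illustrative only).
    return np.tanh(theta * x)

def compose(x, theta, h):
    # h-fold composition: the recursive strategy's h-step predictor.
    for _ in range(h):
        x = f(x, theta)
    return x

def sensitivity(x, theta, h, eps=1e-6):
    # Central finite-difference estimate of d f^(h)(x; theta) / d theta.
    return (compose(x, theta + eps, h) - compose(x, theta - eps, h)) / (2 * eps)

x0, theta = 0.9, 1.2
s1 = sensitivity(x0, theta, 1)
for h in (1, 2, 4, 8):
    amp = (sensitivity(x0, theta, h) / s1) ** 2   # squared sensitivity ratio
    print(f"h={h}: variance amplification ~ {amp:.2f}")
```

Whether this factor grows or saturates with h depends on the Jacobians of the one-step map along the forecast trajectory, which is precisely the interplay between nonlinearity and noise that the decision criterion is built on.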
Riku W. Green
University of Bristol
Huw Day
University of Bristol
Zahraa S. Abdallah
Senior Lecturer, School of Engineering Mathematics and Technology, University of Bristol, UK
Machine Learning · Time Series · Digital Health · Multi-modalities · XAI
T. M. S. Filho
University of Bristol