Regularized Ensemble Forecasting for Learning Weights from Historical and Current Forecasts

📅 2026-02-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of effectively aggregating predictions from multiple experts when their historical performance is incomplete or dynamically evolving. The authors propose a regularized ensemble forecasting method that uniquely unifies current predictions with historical accuracy into a single modeling framework. By minimizing the variance of the combined forecast and incorporating a regularization term based on each expertโ€™s past performance, the approach learns optimal aggregation weights. The framework admits a Bayesian interpretation, accommodates flexible distributional assumptions, and naturally handles dynamic expert entry and exit. Empirical evaluations on Walmart sales and macroeconomic forecasting tasks demonstrate that the method consistently outperforms state-of-the-art baselines, regardless of whether expertsโ€™ historical records are complete.

๐Ÿ“ Abstract
Combining forecasts from multiple experts often yields more accurate results than relying on a single expert. In this paper, we introduce a novel regularized ensemble method that extends the traditional linear opinion pool by leveraging both current forecasts and historical performance to set the weights. Unlike existing approaches that rely on only the current forecasts or only past accuracy, our method accounts for both sources simultaneously. It learns weights by minimizing the variance of the combined forecast (or its transformed version) while incorporating a regularization term informed by historical performance. We also show that this approach has a Bayesian interpretation. Different distributional assumptions within this Bayesian framework yield different functional forms for the variance component and the regularization term, adapting the method to various scenarios. In empirical studies on Walmart sales and macroeconomic forecasting, our ensemble outperforms leading benchmark models both when experts' full forecasting histories are available and when experts enter and exit over time, resulting in incomplete historical records. Throughout, we provide illustrative examples that show how the optimal weights are determined and, based on the empirical results, we discuss where the framework's strengths lie and when experts' past versus current forecasts are more informative.
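The weight-learning idea in the abstract can be illustrated with a small sketch. This is not the paper's exact formulation (the paper's variance term, transformation, and regularizer depend on its Bayesian distributional assumptions); it is a minimal assumed version in which the variance of the combined forecast is approximated by the sample covariance of past errors, the regularizer is each expert's historical mean squared error, and the weights are constrained to the simplex. The function name `ensemble_weights` and the penalty weight `lam` are illustrative choices, not from the paper.

```python
# Sketch of a regularized linear opinion pool: learn combination weights by
# minimizing (approximate) variance of the combined forecast plus a penalty
# that favors experts with better historical accuracy.
import numpy as np
from scipy.optimize import minimize


def ensemble_weights(historical_errors, lam=1.0):
    """historical_errors: (n_periods, n_experts) past forecast errors,
    with NaN where an expert was absent (incomplete records).
    Returns nonnegative weights summing to one."""
    k = historical_errors.shape[1]
    # Sample covariance of past errors as a proxy for the variance of the
    # combined forecast; masked arrays tolerate missing entries.
    cov = np.ma.cov(np.ma.masked_invalid(historical_errors),
                    rowvar=False).filled(0.0)
    # Historical mean squared error per expert drives the regularizer.
    mse = np.nanmean(historical_errors ** 2, axis=0)

    def objective(w):
        return w @ cov @ w + lam * (w @ mse)  # variance + history penalty

    res = minimize(objective, np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k,
                   constraints=({'type': 'eq',
                                 'fun': lambda w: w.sum() - 1.0},))
    return res.x


# Usage: three experts with increasingly noisy historical errors; the
# combined point forecast is the weighted average of current forecasts.
rng = np.random.default_rng(0)
hist = rng.normal(size=(50, 3)) * np.array([0.5, 1.0, 3.0])
w = ensemble_weights(hist, lam=1.0)
combined = w @ np.array([102.0, 98.0, 110.0])
```

Increasing `lam` shifts the weights toward experts with strong track records, while `lam = 0` reduces the sketch to a pure minimum-variance pool based on current error covariance.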
Problem

Research questions and friction points this paper is trying to address.

ensemble forecasting
forecast combination
weight learning
historical performance
regularization
Innovation

Methods, ideas, or system contributions that make the work stand out.

regularized ensemble forecasting
forecast combination
Bayesian interpretation
dynamic weight learning
historical performance regularization
Han Su
Department of Statistics, The George Washington University, Washington, DC 20052
Xiaojia Guo
Robert H. Smith School of Business, University of Maryland, College Park, MD 20742
Xiaoke Zhang
Associate Professor of Statistics, George Washington University
Functional data analysis · Nonparametric statistics · Causal inference · Individualized treatment regimes · Reinforcement learning