Bayesian Stacking via Proper Scoring Rule Optimization using a Gibbs Posterior

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses dynamic weight learning for ensembles of probabilistic forecasts. We propose a Bayesian stacking framework grounded in the Gibbs posterior, which embeds linear pooling within a Bayesian paradigm and optimizes model weights under proper scoring rules. The framework integrates prior information and quantifies weight uncertainty via the Gibbs posterior, enhancing both robustness and interpretability. Unlike conventional fixed-weight or empirically weighted ensembles, the approach enables data-driven, adaptive weight assignment and principled uncertainty characterization. In extensive simulation studies and on real-world influenza forecasting data from the 2023–24 U.S. CDC FluSight Challenge, the proposed method consistently outperforms leading ensemble baselines, including equal-weight averaging, Bayesian model averaging (BMA), and non-Bayesian stacking, demonstrating superior predictive accuracy and generalization.

📝 Abstract
In collaborative forecasting projects, combining multiple probabilistic forecasts into an ensemble is standard practice, with linear pooling being a common combination method. The weighting scheme of a linear pool should be tailored to the specific research question, and weight selection is often performed by optimizing a proper scoring rule. This is known as optimal linear pooling. Beyond optimal linear pooling, Bayesian predictive synthesis has emerged as a model probability updating scheme that is more flexible than standard Bayesian model averaging and that provides a Bayesian solution to selecting model weights for a linear pool. In many problems, equally weighted linear pool forecasts outperform forecasts constructed using sophisticated weight selection methods, so regularization toward an equal weighting of forecasts may be a valuable addition to any weight selection method. In this manuscript, we introduce an optimal linear pool based on a Gibbs posterior over stacked model weights, optimized under a proper scoring rule. The Gibbs posterior extends stacking into a Bayesian framework by allowing the optimal weight solutions to be influenced by a prior distribution, and it provides uncertainty quantification of the weights in the form of a probability distribution. We compare ensemble forecast performance with model averaging methods and equally weighted models in simulation studies and in a real data example from the 2023–24 US Centers for Disease Control FluSight competition. In both the simulation studies and the FluSight analysis, the stacked Gibbs posterior produces ensemble forecasts which often outperform the ensembles of other methods.
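The abstract's core idea, a Gibbs posterior over linear pool weights proportional to exp(score) times a prior, can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the softmax parameterization of the simplex, the standard-normal prior on the logits, the learning-rate setting, and the random-walk Metropolis sampler are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a two-component mixture: 70% N(0,1), 30% N(3,1).
n = 1000
comp = rng.uniform(size=n) < 0.7
y = np.where(comp, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))

# Component models' predictive densities evaluated at each observation.
norm_pdf = lambda x, m: np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2 * np.pi)
dens = np.column_stack([norm_pdf(y, 0.0), norm_pdf(y, 3.0)])

def gibbs_log_post(z, eta=1.0):
    """Log Gibbs posterior over softmax logits z: eta times the summed
    log score of the linear pool, plus a N(0,1) prior on the logits
    (parameterization, prior, and eta are illustrative choices)."""
    w = np.exp(z - z.max()); w /= w.sum()
    return eta * np.sum(np.log(dens @ w)) - 0.5 * np.sum(z ** 2)

# Random-walk Metropolis over the logits.
z = np.zeros(2)
lp = gibbs_log_post(z)
draws = []
for _ in range(4000):
    z_prop = z + 0.2 * rng.normal(size=2)
    lp_prop = gibbs_log_post(z_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        z, lp = z_prop, lp_prop
    w = np.exp(z - z.max()); w /= w.sum()
    draws.append(w)

# Posterior mean pool weights; should sit near the true (0.7, 0.3).
w_mean = np.array(draws[1000:]).mean(axis=0)
print(w_mean)
```

The spread of the retained `draws` is the uncertainty quantification the abstract refers to: the weights come with a full posterior distribution rather than a point estimate.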
Problem

Research questions and friction points this paper is trying to address.

Optimizing linear pooling weights via proper scoring rules
Regularizing forecast ensembles towards equal weighting schemes
Introducing Bayesian framework for uncertainty quantification in stacking
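For contrast with the Bayesian version, the non-Bayesian baseline named in the first bullet (optimal linear pooling via a proper scoring rule) reduces to maximizing the mean log score over the weight simplex. The sketch below uses the same toy mixture data as above and a simple 1-D grid search, both of which are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data from a 70/30 mixture of N(0,1) and N(3,1); the two component
# "models" are those normals, so the log-score-optimal pool weight
# on the first model is close to 0.7.
n = 2000
comp = rng.uniform(size=n) < 0.7
y = np.where(comp, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))
norm_pdf = lambda x, m: np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2 * np.pi)
p0, p1 = norm_pdf(y, 0.0), norm_pdf(y, 3.0)

# Optimal linear pooling: pick the simplex weight (w, 1 - w) that
# maximizes the mean log score of the pooled density. With two models
# the simplex is an interval, so a grid search suffices.
grid = np.linspace(0.0, 1.0, 1001)
scores = [np.mean(np.log(w * p0 + (1 - w) * p1 + 1e-300)) for w in grid]
w_opt = grid[int(np.argmax(scores))]
print(w_opt)  # near 0.7
```

The point estimate `w_opt` carries no uncertainty, which is exactly the gap the Gibbs posterior formulation is meant to fill.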
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gibbs posterior over stacking weights optimized under a proper scoring rule
Bayesian framework with prior-influenced optimal weights
Provides uncertainty quantification via probability distributions
Spencer Wadsworth
University of Connecticut, Storrs, Connecticut
Jarad Niemi
Iowa State University
Statistics