Prequential posteriors

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep generative forecasting models (DGFMs) pose a challenge for Bayesian data assimilation because their likelihood functions are intractable. To address this, the paper proposes the prequential posterior, a posterior distribution built from a predictive-sequential (prequential) loss, for dynamically updating time-series forecasting models. A key contribution is an alternative notion of consistency: under mild conditions, both the prequential loss minimizer and the prequential posterior concentrate around the parameters with optimal predictive performance, even under model misspecification, thereby relaxing the stringent requirement of correct model specification inherent in conventional Bayesian inference. For inference, the method combines easily parallelizable waste-free sequential Monte Carlo (SMC) sampling with preconditioned gradient-based kernels, enabling efficient exploration of high-dimensional parameter spaces. Experiments on a synthetic multivariate time series and real-world meteorological data demonstrate the method's practical utility for data assimilation in complex dynamical systems.
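To make the construction concrete, here is one standard way such a posterior can be written down, in the style of a generalized (Gibbs) posterior; the notation $L_T$, $\ell$, $\hat{y}_t$, and $\pi_0$ is ours for illustration, not taken from the paper. The prequential loss accumulates one-step-ahead prediction errors, and its exponentiated negative stands in for the intractable likelihood:

```latex
% Prequential loss: accumulated one-step-ahead prediction error,
% where \hat{y}_t(\theta; y_{1:t-1}) is the model's forecast of y_t.
L_T(\theta) = \sum_{t=1}^{T} \ell\!\left(y_t,\ \hat{y}_t(\theta; y_{1:t-1})\right)

% Prequential posterior: the exponentiated negative prequential loss
% replaces the intractable likelihood of the DGFM.
\pi_T(\theta) \propto \pi_0(\theta)\, \exp\!\left(-L_T(\theta)\right)
```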

📝 Abstract
Data assimilation is a fundamental task in updating forecasting models upon observing new data, with applications ranging from weather prediction to online reinforcement learning. Deep generative forecasting models (DGFMs) have shown excellent performance in these areas, but assimilating data into such models is challenging due to their intractable likelihood functions. This limitation restricts the use of standard Bayesian data assimilation methodologies for DGFMs. To overcome this, we introduce prequential posteriors, based upon a predictive-sequential (prequential) loss function, an approach naturally suited for temporally dependent data, which is the focus of forecasting tasks. Since the true data-generating process often lies outside the assumed model class, we adopt an alternative notion of consistency and prove that, under mild conditions, both the prequential loss minimizer and the prequential posterior concentrate around parameters with optimal predictive performance. For scalable inference, we employ easily parallelizable waste-free sequential Monte Carlo (SMC) samplers with preconditioned gradient-based kernels, enabling efficient exploration of high-dimensional parameter spaces such as those in DGFMs. We validate our method on both a synthetic multi-dimensional time series and a real-world meteorological dataset, highlighting its practical utility for data assimilation in complex dynamical systems.
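As a minimal illustration of the prequential loss the abstract describes: the function names and the toy AR(1) forecaster below are ours, for illustration only; the paper's DGFMs are far richer models.

```python
import numpy as np

def prequential_loss(theta, y, predict, loss):
    """Accumulated one-step-ahead loss L_T(theta) over a series y.

    `predict(theta, history)` and `loss(obs, forecast)` are hypothetical
    placeholders for the forecasting model and per-step scoring rule.
    """
    total = 0.0
    for t in range(1, len(y)):
        y_hat = predict(theta, y[:t])  # forecast y[t] from the past y[:t]
        total += loss(y[t], y_hat)     # score it against the observation
    return total

# Toy example: an AR(1) point forecast scored with squared error.
rng = np.random.default_rng(0)
y = rng.standard_normal(200).cumsum()
ar1_predict = lambda theta, hist: theta * hist[-1]
squared_error = lambda obs, pred: (obs - pred) ** 2
print(prequential_loss(0.9, y, ar1_predict, squared_error))
```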
Problem

Research questions and friction points this paper is trying to address.

Overcoming intractable likelihoods in deep generative forecasting models
Enabling Bayesian data assimilation for time-dependent forecasting tasks
Developing scalable inference for high-dimensional parameter spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prequential posteriors for intractable likelihood models
Waste-free SMC samplers with preconditioned gradient-based kernels (see the sketch after this list)
Concentration around optimal predictive performance parameters
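A rough sketch of the sampler family named above, following the waste-free SMC idea of Dau and Chopin (2022) with a plain MALA move as the gradient-based kernel. Every function name and tuning constant here is ours; the preconditioning matrix and the exact weighting of the retained states are simplified away.

```python
import numpy as np

def mala_step(theta, logpost, grad_logpost, step, rng):
    """One Metropolis-adjusted Langevin (MALA) move targeting logpost."""
    g = grad_logpost(theta)
    prop = theta + 0.5 * step * g + np.sqrt(step) * rng.standard_normal(theta.shape)
    g_prop = grad_logpost(prop)

    def log_q(dst, src, g_src):  # log density of the Langevin proposal
        diff = dst - src - 0.5 * step * g_src
        return -np.sum(diff ** 2) / (2.0 * step)

    log_alpha = (logpost(prop) - logpost(theta)
                 + log_q(theta, prop, g_prop) - log_q(prop, theta, g))
    return prop if np.log(rng.uniform()) < log_alpha else theta

def waste_free_move(particles, logw, logpost, grad_logpost, M, P, step, rng):
    """One waste-free SMC move: resample M ancestors from the N weighted
    particles, run each through P MALA steps targeting the updated
    (e.g. prequential) posterior, and keep ALL M*P visited states
    rather than discarding the intermediate ones."""
    w = np.exp(logw - logw.max())
    idx = rng.choice(len(particles), size=M, p=w / w.sum())
    kept = []
    for i in idx:
        theta = particles[i]
        for _ in range(P):
            theta = mala_step(theta, logpost, grad_logpost, step, rng)
            kept.append(np.copy(theta))
    return np.stack(kept)  # next particle set, size N = M * P
```

In an assimilation loop, `logpost` would be the prequential posterior updated with each newly observed batch, and a criterion such as the effective sample size of the weights would decide when to trigger this move.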
Shreya Sinha-Roy
Department of Statistics, University of Warwick
Richard G. Everitt
University of Warwick
Statistics · Bayesian statistics · Monte Carlo methods · Statistical Genetics · Genomics
Christian P. Robert
Department of Statistics, University of Warwick
Ritabrata Dutta
Department of Statistics, University of Warwick