🤖 AI Summary
This work addresses the computational intractability of ρ-posteriors in robust Bayesian inference, which arises from an inner optimization over a reference distribution. To resolve this issue, the authors develop a PAC-Bayesian framework that restores the theoretical guarantees by introducing a temperature-dependent Gibbs posterior, while enabling scalable inference through variational approximations. The proposed method is the first to simultaneously ensure robustness against contaminated data, computational feasibility, and rigorous finite-sample theoretical guarantees, including oracle inequalities with explicit convergence rates. Numerical experiments demonstrate the practical effectiveness and robustness of the approach in real-world settings.
📝 Abstract
The $\rho$-posterior framework provides universal Bayesian estimation with explicit contamination rates and optimal convergence guarantees, but it has remained computationally difficult because of an optimization over reference distributions that renders exact posterior computation intractable. We develop a PAC-Bayesian framework that recovers these theoretical guarantees through temperature-dependent Gibbs posteriors, deriving finite-sample oracle inequalities with explicit rates and introducing tractable variational approximations that inherit the robustness properties of exact $\rho$-posteriors. Numerical experiments demonstrate that this approach achieves the theoretical contamination rates while remaining computationally feasible, providing the first practical implementation of $\rho$-posterior inference with rigorous finite-sample guarantees.
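For orientation, here is a hedged sketch of the generic objects the abstract refers to, not the paper's exact construction: in the standard PAC-Bayesian setting, a Gibbs posterior with temperature $\beta > 0$, empirical risk $r_n$, and prior $\pi$ takes the form

$$\hat{\pi}_\beta(\mathrm{d}\theta) \;\propto\; \exp\{-\beta\, r_n(\theta)\}\, \pi(\mathrm{d}\theta),$$

and a variational approximation replaces it with the minimizer over a tractable family $\mathcal{F}$ of distributions,

$$\hat{q}_\beta \;=\; \operatorname*{arg\,min}_{q \in \mathcal{F}} \left\{ \beta \int r_n(\theta)\, q(\mathrm{d}\theta) \;+\; \mathrm{KL}(q \,\|\, \pi) \right\}.$$

Here $r_n$, $\pi$, and $\mathcal{F}$ are generic placeholders; the paper's specific risk is the $\rho$-estimation criterion, whose precise form is not reproduced here.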