🤖 AI Summary
This work addresses the fragility of Bayesian experimental design under model misspecification by formulating it as a minimax game between an experimenter and an adversarial nature subject to information-theoretic constraints. Introducing Sibson's α-mutual information as the robustness objective identifies the α-tilted posterior as the robust belief update rule, and a PAC-Bayes framework over stochastic design policies controls finite-sample error. The resulting method yields a high-probability lower bound on the robust expected information gain and mitigates the bias and variance of the nested Monte Carlo estimators involved, enhancing the robustness and practical utility of experimental design under model uncertainty.
📝 Abstract
We address the brittleness of Bayesian experimental design under model misspecification by formulating the problem as a max--min game between the experimenter and an adversarial nature subject to information-theoretic constraints. We demonstrate that this approach yields a robust objective governed by Sibson's $\alpha$-mutual information~(MI), which identifies the $\alpha$-tilted posterior as the robust belief update and establishes the Rényi divergence as the appropriate measure of conditional information gain. To mitigate the bias and variance of nested Monte Carlo estimators needed to estimate Sibson's $\alpha$-MI, we adopt a PAC-Bayes framework to search over stochastic design policies, yielding rigorous high-probability lower bounds on the robust expected information gain that explicitly control finite-sample error.
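For concreteness, the two central quantities in the abstract admit simple closed forms in the discrete case: Sibson's $\alpha$-MI is $I_\alpha(X;Y) = \frac{\alpha}{\alpha-1}\log\sum_y\big(\sum_x p(x)\,p(y|x)^\alpha\big)^{1/\alpha}$, and the $\alpha$-tilted posterior is $\pi_\alpha(x|y) \propto p(x)\,p(y|x)^\alpha$. The sketch below is a minimal numerical illustration of these standard definitions; the function names and toy channel are ours, not from the paper:

```python
import numpy as np

def sibson_alpha_mi(p_x, p_y_given_x, alpha):
    """Sibson's alpha-mutual information (in nats) for discrete X, Y.

    I_alpha(X;Y) = alpha/(alpha-1) * log sum_y (sum_x p(x) p(y|x)^alpha)^(1/alpha)
    """
    # inner[y] = sum_x p(x) * p(y|x)^alpha
    inner = p_x @ (p_y_given_x ** alpha)
    return alpha / (alpha - 1.0) * np.log(np.sum(inner ** (1.0 / alpha)))

def alpha_tilted_posterior(p_x, likelihood, alpha):
    """Alpha-tilted posterior pi_alpha(x|y) ∝ p(x) * p(y|x)^alpha.

    alpha = 1 recovers the ordinary Bayes posterior.
    """
    w = p_x * likelihood ** alpha
    return w / w.sum()

# Toy binary prior and channel (rows are p(y|x)), purely illustrative.
p_x = np.array([0.5, 0.5])
channel = np.array([[0.9, 0.1],
                    [0.2, 0.8]])
robust_mi = sibson_alpha_mi(p_x, channel, alpha=2.0)
tilted = alpha_tilted_posterior(p_x, channel[:, 0], alpha=2.0)
```

As $\alpha \to 1$, `sibson_alpha_mi` recovers the ordinary Shannon mutual information, and for an adversarially chosen $\alpha > 1$ the tilted posterior down-weights low-likelihood hypotheses more aggressively than the Bayes update.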