Towards Reliable Simulation-based Inference

📅 2026-03-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses overconfidence in statistical inference that arises when machine learning approximations are used with scientific simulators, a problem that can compromise the reliability of results. To mitigate it, the authors propose two complementary approaches: first, a "balanced" regularization strategy that explicitly suppresses model overconfidence; and second, a simulation-aware Bayesian neural network prior that alleviates overconfidence without additional regularization, even in small-sample regimes. By combining neural ratio estimation with uncertainty quantification techniques, the proposed methods improve inference calibration, yielding posterior estimates that are either close to calibrated or conservatively underconfident. This enhanced calibration strengthens the credibility of simulation-based inference in scientific applications.
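For concreteness, the balancing idea has a compact form in the neural ratio estimation setting: a classifier trained to separate jointly simulated pairs from marginal pairs is regularized so that its average outputs on the two populations sum to one. Below is a minimal PyTorch-style sketch under that reading; the function name, the batch-shuffling trick for producing marginal pairs, and the value of `lam` are illustrative assumptions, not taken from the thesis.

```python
import torch
import torch.nn.functional as F

def bnre_loss(classifier, theta, x, lam=100.0):
    """Balanced binary cross-entropy loss for neural ratio estimation.

    `classifier` is assumed to map (theta, x) batches to logits.
    `theta` and `x` are batches drawn jointly from the simulator,
    i.e. (theta, x) ~ p(theta, x); marginal pairs from p(theta)p(x)
    are approximated by shuffling theta within the batch.
    `lam` sets the balancing strength (an illustrative value).
    """
    # Dependent (jointly simulated) pairs and independent (shuffled) pairs.
    logits_joint = classifier(theta, x)
    logits_marginal = classifier(theta[torch.randperm(theta.shape[0])], x)

    # Standard NRE objective: classify joint pairs as 1, marginal pairs as 0.
    bce = F.binary_cross_entropy_with_logits(
        logits_joint, torch.ones_like(logits_joint)
    ) + F.binary_cross_entropy_with_logits(
        logits_marginal, torch.zeros_like(logits_marginal)
    )

    # Balancing regularizer: push E[d(joint)] + E[d(marginal)] towards 1,
    # which penalizes overconfident classifiers.
    d_joint = torch.sigmoid(logits_joint)
    d_marginal = torch.sigmoid(logits_marginal)
    balance = (d_joint.mean() + d_marginal.mean() - 1.0) ** 2

    return bce + lam * balance
```

Intuitively, an overconfident classifier drives its outputs towards 0 and 1 on both populations, which inflates the sum of the two means above 1; the quadratic penalty counteracts exactly that behavior.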

📝 Abstract
Scientific knowledge expands by observing the world, hypothesizing some theories about it, and testing them against collected data. When those theories take the form of statistical models, statistical analyses are involved in the process of testing and refining scientific hypotheses. In this thesis, we focus on statistical models that take the form of scientific simulators and provide background about how machine learning can be used for statistical analyses in this context. The first part of this thesis shows empirically that performing statistical analyses with machine learning involves a degree of approximation. Specifically, all statistical analyses involve a level of uncertainty in the conclusions drawn, and we show that approximations can lead to overconfident conclusions. We caution against such overconfident conclusions and introduce a criterion to diagnose overconfident approximations. In the second part, we introduce balancing, a way to regularize machine learning models to reduce overconfidence and favor calibrated or underconfident approximations. Balancing is first introduced for neural ratio estimation algorithms and then extended to other algorithms. Intuition about why balancing leads to less overconfident solutions is provided, and it is shown empirically that balanced algorithms are often either close to calibrated or underconfident. The third part shows that Bayesian neural networks can also be used to mitigate the overconfidence of approximations. Unlike balancing, no regularization is required, so this solution can work with few training samples and, hence, with computationally expensive simulators. To that end, a new Bayesian neural network prior tailored for simulation-based inference is developed, and empirical results show a reduction in overconfidence compared to similar solutions without Bayesian neural networks.
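The "criterion to diagnose overconfident approximations" mentioned in the abstract is, in the simulation-based inference literature, typically an expected-coverage check: credible regions of the approximate posterior should contain the true parameter at least as often as their nominal level. A minimal NumPy sketch of such a diagnostic, assuming the approximate posterior exposes a sampler and a log-density (both interfaces are hypothetical here):

```python
import numpy as np

def expected_coverage(posterior_sampler, log_prob, pairs, levels, n_samples=1000):
    """Monte Carlo estimate of expected coverage for an approximate posterior.

    `pairs` is an iterable of (theta_star, x_star) drawn from the joint
    p(theta, x), with theta_star a 1-D array. `posterior_sampler(x, n)` and
    `log_prob(thetas, x)` stand in for whatever interface the approximate
    posterior exposes (assumed here).
    """
    ranks = []
    for theta_star, x_star in pairs:
        samples = posterior_sampler(x_star, n_samples)
        lp_samples = log_prob(samples, x_star)
        lp_star = log_prob(theta_star[None, :], x_star)
        # Fraction of posterior samples denser than the true parameter:
        # the smallest highest-density credible level containing theta_star.
        ranks.append(np.mean(lp_samples > lp_star))
    ranks = np.asarray(ranks)
    # Empirical coverage at each credibility level.
    return np.array([np.mean(ranks <= level) for level in levels])
```

A calibrated estimator traces the diagonal (coverage equals the level); curves below the diagonal signal the overconfidence the thesis warns about, while curves above it indicate conservative, underconfident approximations.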
Problem

Research questions and friction points this paper is trying to address.

simulation-based inference
overconfidence
statistical approximation
machine learning
calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

simulation-based inference
balancing
Bayesian neural networks
overconfidence mitigation
calibrated inference
🔎 Similar Papers
No similar papers found.