🤖 AI Summary
Bayesian evidence estimation—i.e., computing the high-dimensional likelihood integral ψ = 𝔼_F[l(X)]—is a fundamental challenge in model selection and uncertainty quantification. To address this, we propose a novel method that integrates the Yakowitz Riemann sum estimator with Skilling’s nested sampling framework—the first incorporation of Riemann summation into nested sampling—achieving a convergence rate of O(n⁻⁴), which surpasses the classical Central Limit Theorem (CLT) rate. We further introduce a Lorenz-curve-based likelihood quantile modeling strategy, circumventing the intractable Λ-function bottleneck inherent in conventional vertical representations. Theoretically, we establish strict convergence guarantees superior to those under ergodic CLT assumptions. Numerical experiments demonstrate that our approach delivers higher accuracy and greater robustness in high-dimensional evidence estimation, significantly enhancing the practical efficacy of nested sampling.
📝 Abstract
In Bayesian inference, the approximation of integrals of the form $psi = mathbb{E}_{F}{l(X)} = int_{chi} l(mathbf{x}) d F(mathbf{x})$ is a fundamental challenge. Such integrals are crucial for evidence estimation, which is important for various purposes, including model selection and numerical analysis. The existing strategies for evidence estimation are classified into four categories: deterministic approximation, density estimation, importance sampling, and vertical representation (Llorente et al., 2020). In this paper, we show that the Riemann sum estimator due to Yakowitz (1978) can be used in the context of nested sampling (Skilling, 2006) to achieve a $O(n^{-4})$ rate of convergence, faster than the usual Ergodic Central Limit Theorem. We provide a brief overview of the literature on the Riemann sum estimators and the nested sampling algorithm and its connections to vertical likelihood Monte Carlo. We provide theoretical and numerical arguments to show how merging these two ideas may result in improved and more robust estimators for evidence estimation, especially in higher dimensional spaces. We also briefly discuss the idea of simulating the Lorenz curve that avoids the problem of intractable $Lambda$ functions, essential for the vertical representation and nested sampling.