🤖 AI Summary
This paper analyzes the approximation error of nested importance sampling (nested IS) methods for Bayesian inference in high-dimensional models where many of the variables are nuisance variables. Such methods replace intractable analytical marginalization of the nuisance variables with numerical integration. Under suitable regularity conditions on the model, we prove that the approximation error grows only polynomially (rather than exponentially) with the dimension of the nuisance variables, and that in the degree-zero case the error remains uniformly bounded as that dimension grows without bound. The analysis relies on tools from functional analysis and measure theory, covers nested IS schemes such as nested particle filtering, SMC$^2$, and IS$^2$, and applies to concrete model classes including linear Gaussian models and models with bounded observation functions. These error bounds characterize the dimensional robustness of nested IS and sharpen our understanding of when and how IS-based methods can mitigate the curse of dimensionality.
📝 Abstract
Many Bayesian inference problems involve high-dimensional models for which only a subset of the model variables are actual estimation targets. All other variables are just nuisance variables that one would ideally like to integrate out analytically. Unfortunately, such integration is often impossible. However, several computational methods have been proposed over the past 15 years that replace intractable analytical marginalisation by numerical integration, typically using different flavours of importance sampling (IS). Such methods include particle Markov chain Monte Carlo, sequential Monte Carlo squared (SMC$^2$), IS$^2$, nested particle filters and others. In this paper, we investigate the role of the dimension of the nuisance variables in the error bounds achieved by nested IS methods in Bayesian inference. We prove that, under suitable regularity assumptions on the model, the approximation errors increase at a polynomial (rather than exponential) rate with respect to the dimension of the nuisance variables. Our analysis relies on tools from functional analysis and measure theory, and it includes the case of degree-zero polynomial bounds, where the approximation error remains uniformly bounded as the dimension of the nuisance variables increases without bound. We also show how the general analysis can be applied to specific classes of models, including linear and Gaussian settings, models with bounded observation functions, and others. These findings improve our current understanding of when and how IS can overcome the curse of dimensionality in Bayesian inference problems.
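To make the nested IS construction concrete, the following is a minimal, self-contained sketch (not taken from the paper) of the IS$^2$-style scheme the abstract describes: an inner IS estimator numerically marginalises the nuisance variables to approximate the intractable marginal likelihood, and an outer IS step over the target parameter uses those estimates as weights. The toy model (a scalar target $\theta$, $d$ Gaussian nuisance variables, and a single Gaussian observation) and all sample sizes are illustrative assumptions, not choices made by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model, for illustration only:
#   theta ~ N(0, 1)                          -- target parameter
#   x ~ N(0, I_d)                            -- d nuisance variables
#   y | theta, x ~ N(theta + mean(x), 1)     -- observation
d = 50          # dimension of the nuisance variables
y_obs = 0.8     # a fixed synthetic observation

def log_lik(theta, x):
    """log p(y_obs | theta, x) under the toy model."""
    mean = theta + x.sum() / d
    return -0.5 * (y_obs - mean) ** 2 - 0.5 * np.log(2 * np.pi)

def inner_is_marginal(theta, M=200):
    """Inner IS: estimate the intractable marginal likelihood
    p(y_obs | theta) = E_x[ p(y_obs | theta, x) ],
    sampling the nuisance x from its prior (proposal = prior).
    Returns a log estimate via log-mean-exp for stability."""
    xs = rng.standard_normal((M, d))
    ll = np.array([log_lik(theta, x) for x in xs])
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))

def nested_is_posterior_mean(N=500, M=200):
    """Outer IS over theta (proposal = prior), with the analytical
    marginalisation over x replaced by the inner IS estimate."""
    thetas = rng.standard_normal(N)
    logw = np.array([inner_is_marginal(t, M) for t in thetas])
    w = np.exp(logw - logw.max())
    w /= w.sum()                      # self-normalised weights
    return float(np.sum(w * thetas))  # posterior-mean estimate

est = nested_is_posterior_mean()
```

The quantity the paper studies is how the error of estimators like `est` scales with the nuisance dimension `d` (here, the inner estimator's noise propagates into the outer weights); the result is that, under regularity conditions, this error grows only polynomially in `d`.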