Asymptotic Variance in the Central Limit Theorem for Multilevel Markovian Stochastic Approximation

📅 2025-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work establishes bounds on the asymptotic variance in the central limit theorem (CLT) for multilevel Markovian stochastic approximation algorithms applied to Bayesian inverse problems governed by differential equations. The core challenge lies in quantifying estimation variance when finite-dimensional parameter inference relies on numerical PDE/ODE solvers whose discretization errors obscure variance characterization. The authors derive, apparently for the first time, a quantitative upper bound linking the asymptotic variance to the numerical discretization error. The approach integrates multilevel PDE discretizations, Markovian stochastic approximation, and asymptotic statistical analysis, with parameters estimated by maximizing the marginal likelihood. The derived bound provides rigorous theoretical guidance for tuning key simulation parameters in multilevel stochastic gradient methods, including the number of levels, the sample size per level, and the mesh resolution. This result fills a theoretical gap in the efficient implementation of multilevel stochastic optimization and enhances both computational efficiency and statistical reliability in Bayesian inference for inverse problems.

📝 Abstract
In this note we consider the finite-dimensional parameter estimation problem associated to inverse problems. In such scenarios, one seeks to maximize the marginal likelihood associated to a Bayesian model. This latter model is connected to the solution of a partial or ordinary differential equation. As such, there are two primary difficulties in maximizing the marginal likelihood: (i) the solution of the differential equation is not always analytically tractable, and (ii) neither is the marginal likelihood. Typically, (i) is dealt with using a numerical solution of the differential equation, leading to a numerical bias, and (ii) has been well studied in the literature using, for instance, Markovian stochastic approximation. It is well known that, to reduce the computational effort needed to obtain the maximizing value of the parameter, one can use a hierarchy of solutions of the differential equation combined with stochastic gradient methods; several approaches do exactly this. In this paper we consider the asymptotic variance in the central limit theorem associated to known estimates and find bounds on the asymptotic variance in terms of the precision of the solution of the differential equation. The significance of these bounds is that they provide missing theoretical guidelines on how to set simulation parameters; that is, they appear to be the first mathematical results which help to run these methods efficiently in practice.
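The combination described in the abstract, a stochastic-gradient (Robbins-Monro) recursion driven by a multilevel telescoping estimate of the gradient of the marginal log-likelihood, can be sketched on a toy problem. Everything below is an illustrative assumption, not the paper's algorithm: the objective is a simple quadratic surrogate, the per-level bias `2**-level` mimics the discretization error of a DE solver, and the geometric sample allocation across levels is a generic MLMC-style choice of exactly the kind the paper's bounds are meant to guide.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_level(theta, level, n_samples):
    """Noisy gradient of a toy marginal log-likelihood at discretization
    `level`. The bias 2**-level stands in for the numerical error of a
    PDE/ODE solver; the true gradient of the surrogate objective
    -(theta - 1)^2 / 2 is (1 - theta)."""
    bias = 2.0 ** (-level)                      # finer level, smaller solver bias
    noise = rng.standard_normal(n_samples).mean()  # Monte Carlo noise, std 1/sqrt(n)
    return (1.0 - theta) + bias + noise

def multilevel_gradient(theta, n_levels, n0):
    """Telescoping multilevel estimator: a coarse-level estimate plus
    level-by-level corrections, with sample sizes decaying geometrically
    across levels (a hypothetical allocation for illustration)."""
    g = grad_level(theta, 0, n0)
    for level in range(1, n_levels + 1):
        n_l = max(1, n0 // 2 ** level)
        g += grad_level(theta, level, n_l) - grad_level(theta, level - 1, n_l)
    return g

# Robbins-Monro stochastic approximation with step sizes 1 / (k + 1).
theta, n_levels, n0 = 0.0, 4, 256
for k in range(2000):
    theta += (1.0 / (k + 1)) * multilevel_gradient(theta, n_levels, n0)
# theta settles near the maximizer theta* = 1, up to the O(2**-n_levels)
# bias of the finest level and the CLT-scale Monte Carlo fluctuation.
```

In expectation the telescoping sum collapses to the finest-level gradient, so the recursion inherits only the finest level's bias while spending most samples on cheap coarse levels; quantifying the resulting asymptotic variance is exactly the question the paper addresses.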
Problem

Research questions and friction points this paper is trying to address.

Estimating parameters in inverse problems with differential equations
Maximizing marginal likelihood with numerical and stochastic challenges
Bounding asymptotic variance to guide efficient simulation parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multilevel Markovian stochastic approximation
Bounds asymptotic variance via differential equation precision
Provides theoretical guidelines for simulation parameters
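The object behind these bullets can be stated schematically. The display below is a generic CLT for a stochastic approximation iterate, with placeholder symbols (step sequence, limit point, asymptotic covariance); it is not the paper's precise statement, whose assumptions and constants we do not reproduce.

```latex
% Generic CLT for a stochastic approximation iterate \theta_n with
% step sizes a_n (placeholders, not the paper's exact formulation):
\sqrt{a_n^{-1}}\,\bigl(\theta_n - \theta_\star\bigr)
  \;\xrightarrow{d}\; \mathcal{N}\bigl(0,\,\Sigma\bigr)
% The paper's contribution is an upper bound on \Sigma in terms of the
% precision (e.g. mesh resolution) of the numerical DE solver, which in
% turn guides the choice of levels, samples per level, and mesh size.
```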
Ajay Jasra
Professor of Data Science, The Chinese University of Hong Kong, Shenzhen
Particle Filtering, Markov Chains, Multilevel Monte Carlo, Non-Linear Filtering
Abylay Zhumekenov
School of Data Science, The Chinese University of Hong Kong, Shenzhen, CN.