🤖 AI Summary
To address the susceptibility of ELBO-based variational inference to suboptimal local solutions, this paper establishes the global convergence of a neural posterior estimation (NPE) method that minimizes the expected forward (inclusive) KL divergence to fit a neural-network-parameterized posterior. Theoretically, it establishes, for the first time within the NPE framework, global convergence of gradient-based optimization of the variational objective: the neural tangent kernel (NTK) characterizes the gradient dynamics in function space, and in the regime of a fixed, positive-definite NTK the objective admits a unique solution in a reproducing kernel Hilbert space (RKHS) to which those dynamics converge. Empirically, the method significantly outperforms ELBO-based baselines across multiple tasks, avoiding the shallow local optima that trap ELBO optimization, and ablations show that the asymptotic theory also explains NPE's behavior on finite-width networks, yielding more accurate and robust posterior approximations.
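Concretely, and in illustrative notation rather than the paper's own, the forward-KL objective used in NPE is tractable because the expected inclusive KL over the data distribution reduces to a maximum-likelihood fit on joint samples:

$$
\mathbb{E}_{p(x)}\Big[\mathrm{KL}\big(p(\theta \mid x)\,\|\,q_\phi(\theta \mid x)\big)\Big]
= -\,\mathbb{E}_{p(\theta,\,x)}\big[\log q_\phi(\theta \mid x)\big] + \text{const},
$$

so the loss can be estimated by drawing $(\theta, x) \sim p(\theta)\,p(x \mid \theta)$ from the prior and simulator, without ever evaluating the intractable true posterior.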
📝 Abstract
In variational inference (VI), an approximation of the posterior distribution is selected from a family of distributions through numerical optimization. With the most common variational objective function, known as the evidence lower bound (ELBO), only convergence to a local optimum can be guaranteed. In this work, we instead establish the global convergence of a particular VI method. This VI method, which may be considered an instance of neural posterior estimation (NPE), minimizes an expectation of the inclusive (forward) KL divergence to fit a variational distribution that is parameterized by a neural network. Our convergence result relies on the neural tangent kernel (NTK) to characterize the gradient dynamics that arise from considering the variational objective in function space. In the asymptotic regime of a fixed, positive-definite neural tangent kernel, we establish conditions under which the variational objective admits a unique solution in a reproducing kernel Hilbert space (RKHS). Then, we show that the gradient descent dynamics in function space converge to this unique function. In ablation studies and practical problems, we demonstrate that our results explain the behavior of NPE in non-asymptotic finite-neuron settings, and show that NPE outperforms ELBO-based optimization, which often converges to shallow local optima.
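In the asymptotic regime the abstract describes, gradient descent on such an objective behaves in function space like a kernel gradient flow; schematically (illustrative notation, not taken from the paper), with a fixed, positive-definite NTK $\Theta$ and loss $\mathcal{L}$,

$$
\partial_t f_t = -\,\Theta\,\nabla_{f}\,\mathcal{L}(f_t),
$$

and under the paper's conditions this flow converges to the unique minimizer in the associated RKHS. A minimal sketch of the forward-KL training loop itself, assuming a toy Gaussian simulator and a Gaussian variational family (the model and all names here are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

# Toy task (illustrative): theta ~ N(0, 1), x | theta ~ N(theta, 1).
def sample_joint(n):
    theta = torch.randn(n, 1)      # draw parameters from the prior
    x = theta + torch.randn(n, 1)  # simulate observations
    return theta, x

# q_phi(theta | x): a network mapping x to the mean and log-variance of a Gaussian.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    theta, x = sample_joint(256)
    mean, log_var = net(x).chunk(2, dim=-1)
    # Monte Carlo estimate of E_{p(theta, x)}[-log q_phi(theta | x)],
    # which equals the expected forward KL up to an additive constant.
    loss = 0.5 * (log_var + (theta - mean) ** 2 / log_var.exp()).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the loss reduces to an expected negative log-likelihood over joint samples, it is exactly the kind of objective whose function-space gradient dynamics the paper's NTK analysis characterizes.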