🤖 AI Summary
This work addresses the dimension dependence of existing convergence bounds for discretized underdamped Langevin dynamics (ULD) under the Kullback–Leibler (KL) divergence, which typically scale polynomially with the dimension \(d\) and become vacuous in high-dimensional settings. By refining the local error analysis framework in KL divergence, the paper establishes the first dimension-free non-asymptotic convergence bound for discretized ULD. The resulting complexity depends on \(\mathrm{tr}(\mathbf{H})\), where \(\mathbf{H}\) is a matrix upper bound on the Hessian of the potential \(V\), rather than on the ambient dimension \(d\). In regimes where \(\mathrm{tr}(\mathbf{H}) \ll d\), this yields significantly improved iteration complexity for underdamped Langevin Monte Carlo over overdamped approaches, giving a scalable guarantee for high-dimensional sampling.
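For context, ULD (also called kinetic Langevin dynamics) augments the position with a momentum variable. A standard parameterization is the SDE below (the paper may use a rescaled variant with different friction or mass constants):

$$
\mathrm{d}X_t = P_t\,\mathrm{d}t, \qquad \mathrm{d}P_t = -\gamma P_t\,\mathrm{d}t - \nabla V(X_t)\,\mathrm{d}t + \sqrt{2\gamma}\,\mathrm{d}B_t,
$$

where \(\gamma > 0\) is the friction parameter and \(B_t\) is a standard Brownian motion; the invariant distribution has \(X\)-marginal \(\pi \propto e^{-V}\).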
📝 Abstract
Underdamped Langevin dynamics (ULD) is a widely used sampler for Gibbs distributions $\pi \propto e^{-V}$, and is often empirically effective in high dimensions. However, existing non-asymptotic convergence guarantees for discretized ULD typically scale polynomially with the ambient dimension $d$, leading to vacuous bounds when $d$ is large. The main known dimension-free result concerns the randomized midpoint discretization in Wasserstein-2 distance (Liu et al., 2023), while dimension-independent guarantees for ULD discretizations in KL divergence have remained open. We close this gap by proving the first dimension-free KL divergence bounds for discretized ULD. Our analysis refines the KL local error framework (Altschuler et al., 2025) to a dimension-free setting and yields bounds that depend on $\mathrm{tr}(\mathbf{H})$, where $\mathbf{H}$ upper bounds the Hessian of $V$, rather than on $d$. As a consequence, we obtain improved iteration complexity for underdamped Langevin Monte Carlo relative to overdamped Langevin methods in regimes where $\mathrm{tr}(\mathbf{H}) \ll d$.
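To make "discretized ULD" concrete, here is a minimal NumPy sketch of one common discretization: the left-point exponential integrator, which solves the ULD SDE exactly with the gradient frozen at the current iterate. This is a sketch under stated assumptions, not the paper's method; the function names (`ulmc_step`, `grad_V`) and the Gaussian toy target are illustrative, and the scheme and parameters analyzed in the paper may differ.

```python
import numpy as np

def ulmc_step(x, v, grad_V, h, gamma, rng):
    """One underdamped Langevin Monte Carlo step: exact solution of
    dX = P dt, dP = -gamma*P dt - g dt + sqrt(2*gamma) dB
    over time h, with the gradient frozen at g = grad_V(x)."""
    g = grad_V(x)
    a = np.exp(-gamma * h)

    # Conditional means of (X_h, P_h) given (x, v), from the OU solution.
    mean_v = a * v - (1.0 - a) / gamma * g
    mean_x = x + (1.0 - a) / gamma * v - (h - (1.0 - a) / gamma) / gamma * g

    # Exact per-coordinate (co)variances of the Gaussian increment.
    var_v = 1.0 - a**2
    var_x = (2.0 / gamma) * (h - 2.0 * (1.0 - a) / gamma
                             + (1.0 - a**2) / (2.0 * gamma))
    cov_xv = (1.0 - a) ** 2 / gamma

    # Sample the correlated (x, v) noise via a 2x2 Cholesky factor.
    z1 = rng.standard_normal(x.shape)
    z2 = rng.standard_normal(x.shape)
    sd_x = np.sqrt(var_x)
    x_new = mean_x + sd_x * z1
    v_new = (mean_v + (cov_xv / sd_x) * z1
             + np.sqrt(max(var_v - cov_xv**2 / var_x, 0.0)) * z2)
    return x_new, v_new

# Toy target in the regime tr(H) << d: V(x) = 0.5 * x^T H x with
# H = diag(1/i^2), so tr(H) stays O(1) while d grows.
d = 1000
hess_diag = 1.0 / np.arange(1, d + 1) ** 2
grad_V = lambda x: hess_diag * x
rng = np.random.default_rng(0)
x, v = np.ones(d), np.zeros(d)
for _ in range(2000):
    x, v = ulmc_step(x, v, grad_V, h=0.1, gamma=2.0, rng=rng)
```

Freezing the gradient over each step keeps the remaining dynamics a linear SDE that can be integrated in closed form, which is why the step samples an exact correlated Gaussian rather than adding independent Euler–Maruyama noise to $x$ and $v$.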