🤖 AI Summary
This work addresses sampling from log-concave distributions whose potentials have (possibly) superlinearly growing gradients, within the framework of kinetic (underdamped) Langevin dynamics. To overcome the limitation of conventional methods, which require globally Lipschitz gradients, we propose two discretization algorithms based on tailored taming strategies. These are the first kinetic Langevin samplers shown to be contractive in 2-Wasserstein distance and to satisfy a logarithmic Sobolev inequality under *non*-globally-Lipschitz gradient conditions. A rigorous non-asymptotic analysis yields explicit convergence bounds for both algorithms in the 2-Wasserstein metric, establishing fast and robust convergence to the target distribution and broadening the applicability of kinetic Langevin sampling beyond the globally Lipschitz setting.
📝 Abstract
In this paper, we examine the problem of sampling from log-concave distributions with (possibly) superlinearly growing gradients via kinetic (underdamped) Langevin algorithms. Using a carefully tailored taming scheme, we propose two novel discretizations of the kinetic Langevin SDE, and we show that they are both contractive and satisfy a log-Sobolev inequality. Building on these properties, we establish a series of non-asymptotic bounds in $2$-Wasserstein distance between the law reached by each algorithm and the underlying target measure.
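The paper's two specific schemes are not reproduced here; as a rough illustration of the general taming idea (rescaling a superlinearly growing gradient so that a single explicit step stays bounded), below is a minimal Python sketch of one Euler-type step of the kinetic Langevin SDE with a tamed drift. The quartic potential, the taming form $g/(1+\lambda h\|g\|)$, and the parameters `gamma`, `lam`, and step size `h` are illustrative assumptions, not the algorithms analyzed in the paper.

```python
import numpy as np

def grad_U(theta):
    # Illustrative potential U(theta) = ||theta||^4 / 4, whose gradient
    # ||theta||^2 * theta grows superlinearly (not globally Lipschitz).
    return np.dot(theta, theta) * theta

def tamed_grad(theta, h, lam=1.0):
    # Generic taming: rescale the gradient so the drift contribution of a
    # single step, h * ||tamed g||, is bounded by 1 / lam.
    g = grad_U(theta)
    return g / (1.0 + lam * h * np.linalg.norm(g))

def tamed_kinetic_langevin_step(theta, v, h, gamma=1.0, rng=None):
    # One explicit Euler-type step of the kinetic (underdamped) Langevin SDE
    #   d theta_t = v_t dt,
    #   d v_t     = -gamma * v_t dt - grad U(theta_t) dt + sqrt(2 * gamma) dB_t,
    # with grad U replaced by its tamed version.
    rng = np.random.default_rng() if rng is None else rng
    g = tamed_grad(theta, h)
    theta_next = theta + h * v
    v_next = v - h * (gamma * v + g) + np.sqrt(2.0 * gamma * h) * rng.standard_normal(theta.shape)
    return theta_next, v_next

# Usage: iterate the chain; the position marginal approximately targets exp(-U).
rng = np.random.default_rng(0)
theta, v = np.ones(2), np.zeros(2)
for _ in range(10_000):
    theta, v = tamed_kinetic_langevin_step(theta, v, h=0.01, rng=rng)
print(theta)
```

Note that the taming factor $1 + \lambda h \|g\|$ tends to $1$ as $h \to 0$, so the tamed drift recovers the true gradient in the small-step limit while keeping each individual drift update of size at most $1/\lambda$, which is what prevents the explicit scheme from blowing up under superlinear gradient growth.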