Anchored Langevin Algorithms

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard Langevin algorithms such as the unadjusted Langevin algorithm (ULA) suffer from two fundamental limitations: they require differentiable log-densities and assume light-tailed target distributions. To address both challenges simultaneously, this work introduces anchored Langevin dynamics, a unified framework that combines a smooth reference potential, a multiplicative scaling of the diffusion coefficient, and a random time change. Theoretically, it establishes non-asymptotic error bounds in the 2-Wasserstein distance, providing convergence guarantees without assuming gradient existence or light tails. Methodologically, it removes the reliance of conventional first-order sampling algorithms on smoothness and light-tailed conditions. Empirically, anchored Langevin dynamics improves over ULA and other baselines on non-smooth regularized Bayesian inference and heavy-tailed posterior sampling tasks.

📝 Abstract
Standard first-order Langevin algorithms such as the unadjusted Langevin algorithm (ULA) are obtained by discretizing the Langevin diffusion and are widely used for sampling in machine learning because they scale to high dimensions and large datasets. However, they face two key limitations: (i) they require differentiable log-densities, excluding targets with non-differentiable components; and (ii) they generally fail to sample heavy-tailed targets. We propose anchored Langevin dynamics, a unified approach that accommodates non-differentiable targets and certain classes of heavy-tailed distributions. The method replaces the original potential with a smooth reference potential and modifies the Langevin diffusion via multiplicative scaling. We establish non-asymptotic guarantees in the 2-Wasserstein distance to the target distribution and provide an equivalent formulation derived via a random time change of the Langevin diffusion. We provide numerical experiments to illustrate the theory and practical performance of our proposed approach.
Problem

Research questions and friction points this paper is trying to address.

Standard Langevin algorithms require differentiable densities, excluding non-differentiable targets
Existing methods generally fail to sample heavy-tailed distributions effectively
A unified approach is needed to handle both non-differentiable and heavy-tailed targets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces the original potential with a smooth reference potential
Modifies the Langevin diffusion via a multiplicative scaling of the diffusion coefficient
Admits an equivalent formulation via a random time change of the Langevin diffusion
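The mechanics above can be illustrated with a minimal one-dimensional sketch. This is not the paper's algorithm but an assumed instance of the general recipe: take a non-smooth target potential U(x) = |x| (Laplace density), anchor it to a smooth Gaussian reference U0(x) = x²/2, and scale the diffusion multiplicatively by an assumed factor g(x) = exp(U(x) − U0(x)), so that the Euler–Maruyama discretization only ever differentiates the smooth reference.

```python
import numpy as np

# Hypothetical sketch (all names and the exact form of g are assumptions,
# not the paper's stated algorithm).
# Target: pi(x) ∝ exp(-|x|), i.e. U(x) = |x|, non-differentiable at 0.
# Smooth reference ("anchor"): U0(x) = x^2 / 2.
# Assumed scaling g(x) = exp(U(x) - U0(x)), chosen so that the diffusion
#   dX = -g(X) U0'(X) dt + sqrt(2 g(X)) dW
# has stationary density ∝ exp(-U(x)); only U0 is ever differentiated.

rng = np.random.default_rng(0)

def U(x):      return np.abs(x)          # non-smooth target potential
def U0(x):     return 0.5 * x ** 2       # smooth reference potential
def grad_U0(x): return x                 # gradient of the smooth reference
def g(x):      return np.exp(U(x) - U0(x))  # assumed multiplicative scaling

def anchored_langevin(n_steps, dt=0.01, x0=0.0):
    """Euler-Maruyama discretization of the anchored diffusion above."""
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        gx = g(x)
        x = x - gx * grad_U0(x) * dt \
              + np.sqrt(2.0 * gx * dt) * rng.standard_normal()
        samples[i] = x
    return samples

xs = anchored_langevin(300_000)[50_000:]  # discard burn-in
# The Laplace(1) target has mean 0 and variance 2; the empirical moments
# of the chain should land near those values (up to discretization error).
print(xs.mean(), xs.var())
```

The point of the sketch is the division of labor: the drift uses only the smooth anchor's gradient, while the multiplicative factor g corrects the stationary distribution back to the non-smooth target, matching the "smooth reference potential + multiplicative scaling" decomposition listed above.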
Mert Gurbuzbalaban
Department of Management Science and Information Systems, Rutgers Business School, Piscataway, NJ 08854, USA
Hoang M. Nguyen
Department of Mathematics, Florida State University, Tallahassee, FL 32306, USA
Xicheng Zhang
School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, P.R.China
Lingjiong Zhu
Florida State University