🤖 AI Summary
This work addresses efficient sampling from non-log-concave distributions (density ∝ e⁻ⱽ), focusing on reducing gradient query complexity and characterizing the Poincaré constant. Methodologically, it departs from the standard L-smoothness assumption, introducing a stronger Ornstein–Uhlenbeck-type smoothness condition, coupled with sub-Gaussian moment constraints and diffusion process analysis. Theoretically, this framework reduces the gradient query complexity from exponential to polynomial—specifically, poly(d, 1/ε)—under mild assumptions (L = O(1), M = poly(d)). The main contributions are threefold: (i) establishing a quantitative link between enhanced smoothness and improved sampling efficiency; (ii) deriving tighter upper bounds on the Poincaré constant for non-convex, mixture-of-Gaussians-like distributions; and (iii) proposing a new sampling paradigm for non-log-concave targets that simultaneously offers rigorous theoretical guarantees and practical promise.
📝 Abstract
We study the problem of sampling from a distribution $\mu$ with density $\propto e^{-V}$ for some potential function $V:\mathbb{R}^d \to \mathbb{R}$, given query access to $V$ and $\nabla V$. We start with the following standard assumptions:
(1) The potential function $V$ is $L$-smooth.
(2) The second moment $\mathbf{E}_{X\sim \mu}[\|X\|^2]\leq M$.
Recently, He and Zhang (COLT'25) showed that the query complexity of sampling from such distributions is at least $\left(\frac{LM}{d\varepsilon}\right)^{\Omega(d)}$, where $\varepsilon$ is the desired accuracy in total variation distance, and that the Poincaré constant can be arbitrarily large.
Meanwhile, another common assumption in the study of diffusion-based samplers (see, e.g., the work of Chen, Chewi, Li, Li, Salim and Zhang (ICLR'23)) strengthens the smoothness condition (1) to the following:
(1*) The potential function of *every* distribution along the Ornstein–Uhlenbeck process starting from $\mu$ is $L$-smooth.
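For concreteness, assumption (1*) can be phrased via the standard Ornstein–Uhlenbeck interpolation; the sketch below uses the usual normalization, which may differ from the paper's exact convention:

```latex
% Ornstein--Uhlenbeck process started from X_0 \sim \mu:
\[
  \mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t,
  \qquad
  X_t \overset{d}{=} e^{-t}X_0 + \sqrt{1-e^{-2t}}\,Z,
  \quad Z \sim \mathcal{N}(0, I_d).
\]
% Writing \mu_t \propto e^{-V_t} for the law of X_t, assumption (1*) asks that
\[
  \|\nabla^2 V_t(x)\|_{\mathrm{op}} \le L
  \qquad \text{for all } t \ge 0 \text{ and } x \in \mathbb{R}^d,
\]
% whereas assumption (1) imposes this bound only at t = 0.
```

Since $\mu_t$ converges to the standard Gaussian as $t \to \infty$, (1*) controls the smoothness of the entire path of distributions that diffusion-based samplers traverse, not just the endpoint $\mu$.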
We show that under assumptions (1*) and (2), the query complexity of sampling from $\mu$ can be $\mathrm{poly}(L,d)\cdot \left(\frac{Ld+M}{\varepsilon^2}\right)^{\mathcal{O}(L+1)}$, which is polynomial in $d$ and $\frac{1}{\varepsilon}$ when $L=\mathcal{O}(1)$ and $M=\mathrm{poly}(d)$. This improves on the algorithm with quasi-polynomial query complexity developed by Huang et al. (COLT'24). Our results imply that the seemingly moderate strengthening of the smoothness condition (1) to (1*) can lead to an exponential gap in the query complexity of sampling algorithms.
Moreover, we show that under assumption (1*) together with the stronger moment assumption that $\|X\|$ is $\lambda$-sub-Gaussian for $X\sim\mu$, the Poincaré constant of $\mu$ is at most $\mathcal{O}(\lambda)^{2(L+1)}$. As an application of our technique, we obtain an improved estimate of the Poincaré constant for mixtures of Gaussians with the same covariance.
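For reference, the Poincaré constant $C_{\mathrm{P}}(\mu)$ discussed above is the smallest constant for which the Poincaré inequality holds (standard definition; the notation here is ours):

```latex
\[
  \mathrm{Var}_{\mu}(f)
  \;\le\;
  C_{\mathrm{P}}(\mu)\,
  \mathbf{E}_{X\sim\mu}\!\left[\|\nabla f(X)\|^2\right]
  \qquad \text{for all smooth } f:\mathbb{R}^d \to \mathbb{R}.
\]
```

A smaller $C_{\mathrm{P}}(\mu)$ corresponds to faster mixing of Langevin dynamics targeting $\mu$, whose relaxation time scales linearly with $C_{\mathrm{P}}(\mu)$; this is why bounding it for non-log-concave targets, such as Gaussian mixtures, is of independent interest.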