🤖 AI Summary
This work investigates high-accuracy sampling from log-concave distributions when access to the target is through stochastic gradient or zeroth-order (value) oracle queries. Focusing on stochastic gradients with sub-exponential (light) tails, the study proposes an efficient sampling algorithm and establishes, for the first time, that polylog(1/δ) query complexity is achievable under this condition, where δ is the target accuracy. The analysis further uncovers a fundamental distinction between sampling and convex optimization in the presence of stochasticity: for optimization, even additive Gaussian noise in the gradient oracle forces poly(1/δ) queries. An information-theoretic lower bound demonstrates that the light-tailed assumption is necessary, and the work shows that under merely bounded-variance but heavy-tailed noise, the minimax-optimal query complexity degrades to Θ(1/δ).
📝 Abstract
We show that high-accuracy guarantees for log-concave sampling -- that is, iteration and query complexities which scale as $\mathrm{poly}\log(1/\delta)$, where $\delta$ is the desired target accuracy -- are achievable using stochastic gradients with subexponential tails. Notably, this exhibits a separation with the problem of convex optimization, where stochasticity (even additive Gaussian noise) in the gradient oracle incurs $\mathrm{poly}(1/\delta)$ queries. We also give an information-theoretic argument that light-tailed stochastic gradients are necessary for high accuracy: for example, in the bounded variance case, we show that the minimax-optimal query complexity scales as $\Theta(1/\delta)$. Our framework also provides similar high-accuracy guarantees under stochastic zeroth-order (value) queries.
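To make the oracle model concrete, here is a minimal sketch (not the paper's algorithm) of unadjusted Langevin Monte Carlo driven by a stochastic gradient oracle with sub-exponential tails. The target is the standard Gaussian with potential $f(x) = x^2/2$, and the oracle returns $\nabla f(x)$ plus Laplace noise, a light-tailed (sub-exponential) perturbation of the kind the upper bound assumes; the step size, noise scale, and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x):
    # Stochastic gradient oracle for f(x) = x^2 / 2 (target: standard Gaussian).
    # Additive Laplace noise is sub-exponential, i.e. light-tailed in the sense
    # required for the polylog(1/delta) guarantee. Scale 0.5 is an arbitrary choice.
    return x + rng.laplace(scale=0.5, size=x.shape)

def langevin_sample(n_steps=20000, step=0.01, dim=1):
    # Plain unadjusted Langevin dynamics with stochastic gradients:
    #   x_{k+1} = x_k - h * g(x_k) + sqrt(2h) * xi_k,   xi_k ~ N(0, I).
    # This is a generic illustration of the query model only.
    x = np.zeros(dim)
    samples = []
    for _ in range(n_steps):
        x = x - step * noisy_grad(x) + np.sqrt(2 * step) * rng.normal(size=dim)
        samples.append(x.copy())
    # Discard the first half as burn-in.
    return np.array(samples[n_steps // 2:])

samples = langevin_sample()
```

For this log-concave target the empirical mean and variance of the retained iterates should be close to 0 and 1; the heavy-tailed regime the lower bound concerns would replace the Laplace noise with, e.g., noise having only bounded variance.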