Faster logconcave sampling from a cold start in high dimension

📅 2025-05-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the cold-start bottleneck in sampling from high-dimensional log-concave distributions: it achieves the first sub-cubic time complexity for warm-up and sampling combined, assuming only a density evaluation oracle. All prior approaches incurred a warm-start penalty at least linear in the dimension, leaving a cubic runtime barrier even for the special case of uniform sampling over convex bodies. To break this barrier, the authors introduce two key ingredients: (1) relaxing the warm-start requirement from $\infty$-Rényi divergence to $q$-Rényi divergence with $q = \widetilde{O}(1)$; and (2) refining and generalizing the Lee–Vempala log-Sobolev inequality, bounding the convergence rate via a geometric average of the support diameter and the largest eigenvalue of the covariance matrix. Combining these with isotropic preprocessing yields the first sub-cubic algorithm for high-dimensional log-concave sampling and, as a corollary, the first sub-cubic algorithm for uniform sampling over convex bodies.
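For reference, the warmness notions in play can be stated directly. The display below uses the standard definition of Rényi divergence (stated here for context, not quoted from the paper): the classical requirement is $M$-warmness, which is exactly a bound on the $q = \infty$ case, and since Rényi divergence is non-decreasing in $q$, requiring only $\mathcal{R}_q = O(1)$ for $q = \widetilde{O}(1)$ is a strictly weaker condition on the starting distribution $\mu_0$ relative to the target $\pi$.

$$\mathcal{R}_q(\mu_0 \,\|\, \pi) \;=\; \frac{1}{q-1} \log \int \Big(\frac{d\mu_0}{d\pi}\Big)^{q}\, d\pi, \qquad q > 1,$$

$$\mathcal{R}_\infty(\mu_0 \,\|\, \pi) \;=\; \log \operatorname*{ess\,sup}_{x} \frac{d\mu_0}{d\pi}(x) \;\ge\; \mathcal{R}_q(\mu_0 \,\|\, \pi) \quad \text{for all } q.$$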

📝 Abstract
We present a faster algorithm to generate a warm start for sampling an arbitrary logconcave density specified by an evaluation oracle, leading to the first sub-cubic sampling algorithms for inputs in (near-)isotropic position. A long line of prior work incurred a warm-start penalty of at least linear in the dimension, hitting a cubic barrier, even for the special case of uniform sampling from convex bodies. Our improvement relies on two key ingredients of independent interest. (1) We show how to sample given a warm start in weaker notions of distance, in particular $q$-Rényi divergence for $q = \widetilde{\mathcal{O}}(1)$, whereas previous analyses required stringent $\infty$-Rényi divergence (with the exception of Hit-and-Run, whose known mixing time is higher). This marks the first improvement in the required warmness since Lovász and Simonovits (1991). (2) We refine and generalize the log-Sobolev inequality of Lee and Vempala (2018), originally established for isotropic logconcave distributions in terms of the diameter of the support, to logconcave distributions in terms of a geometric average of the support diameter and the largest eigenvalue of the covariance matrix.
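A schematic reading of ingredient (2), hedged because the abstract does not give the exact statement: Lee and Vempala (2018) show that an isotropic logconcave density $\pi$ whose support has diameter $D$ satisfies $C_{\mathrm{LSI}}(\pi) \lesssim D$ (inverse log-Sobolev constant). One units-consistent generalization matching the abstract's "geometric average" description would be

$$C_{\mathrm{LSI}}(\pi) \;\lesssim\; \sqrt{\lambda_{\max}(\Sigma_\pi)\, D^{2}} \;=\; \sqrt{\lambda_{\max}(\Sigma_\pi)}\,\cdot\, D,$$

which recovers the isotropic bound when $\Sigma_\pi = I$ (so $\lambda_{\max} = 1$); the precise form and constants are in the paper.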
Problem

Research questions and friction points this paper is trying to address.

Faster sampling from high-dimensional logconcave densities
Reducing the dimension dependence of the warm-start penalty
Relaxing warmness requirements via $q$-Rényi divergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Faster warm start for logconcave sampling (see the oracle-model sketch after this list)
Sampling under weaker warmness notions ($q$-Rényi divergence)
Refined and generalized log-Sobolev inequality
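To make the access model concrete, here is a minimal Metropolis random-walk sampler that uses only evaluations of a (log-)density oracle, the setting assumed by the paper. This is an illustrative sketch, not the paper's algorithm: the function name, the Gaussian proposal, the step size, and the uniform-ball example oracle are all choices made for this demonstration; the paper's contribution is producing a warm start cheaply and proving sharper log-Sobolev bounds so that far fewer steps suffice.

```python
import numpy as np

def metropolis_oracle_sampler(log_density, x0, n_steps, step_size, rng=None):
    """Metropolis random walk using only calls to an (unnormalized)
    log-density oracle -- the evaluation-oracle access model.

    Illustrative only: mixing from a cold start is exactly what the
    paper's warm-start machinery is designed to avoid paying for.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    logp_x = log_density(x)
    samples = np.empty((n_steps, x.size))
    for t in range(n_steps):
        # Gaussian proposal; a ball-walk proposal would work the same way.
        y = x + step_size * rng.standard_normal(x.size)
        logp_y = log_density(y)  # one oracle call per step
        # Accept with probability min(1, pi(y)/pi(x)); -inf handles
        # proposals outside the support (e.g. outside a convex body).
        if np.log(rng.random()) < logp_y - logp_x:
            x, logp_x = y, logp_y
        samples[t] = x
    return samples

if __name__ == "__main__":
    n = 50
    # Example oracle: uniform distribution over the unit ball (logconcave).
    log_density = lambda x: 0.0 if np.dot(x, x) <= 1.0 else -np.inf
    out = metropolis_oracle_sampler(log_density, np.zeros(n),
                                    n_steps=10_000,
                                    step_size=1.0 / np.sqrt(n))
    print("mean squared norm of iterates:", np.mean(np.sum(out**2, axis=1)))
```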
👥 Authors
Yunbum Kook (Georgia Tech)
S. Vempala (Georgia Tech)