🤖 AI Summary
This work drastically reduces the number of iterations needed for diffusion model sampling while maintaining δ accuracy. The proposed algorithm is the first to achieve a sampling complexity of polylog(1/δ) given only Õ(δ)-accurate score estimates in L², breaking the linear or polynomial dependence on 1/δ in all prior results. Under minimal data assumptions, the complexity is Õ(d·polylog(1/δ)) in the data dimension d; under a non-uniform L-Lipschitz condition, it improves to Õ(√(dL)·polylog(1/δ)); and when the target distribution has low intrinsic dimension d*, it reduces further to Õ(d*·polylog(1/δ)). As a byproduct, the approach also yields the first polylog(1/δ)-complexity sampler for general log-concave distributions that uses only gradient evaluations.
📝 Abstract
We present algorithms for diffusion model sampling which obtain $\delta$-error in $\mathrm{polylog}(1/\delta)$ steps, given access to $\widetilde O(\delta)$-accurate score estimates in $L^2$. This is an exponential improvement over all previous results. Specifically, under minimal data assumptions, the complexity is $\widetilde O(d\,\mathrm{polylog}(1/\delta))$ where $d$ is the dimension of the data; under a non-uniform $L$-Lipschitz condition, the complexity is $\widetilde O(\sqrt{dL}\,\mathrm{polylog}(1/\delta))$; and if the data distribution has intrinsic dimension $d_\star$, then the complexity reduces to $\widetilde O(d_\star\,\mathrm{polylog}(1/\delta))$. Our approach also yields the first $\mathrm{polylog}(1/\delta)$ complexity sampler for general log-concave distributions using only gradient evaluations.
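To make the setting concrete, the sketch below shows the standard ingredients the abstract refers to: a forward Ornstein–Uhlenbeck noising process, its score function, and an Euler–Maruyama discretization of the reverse-time SDE driven by that score. This is a generic textbook reverse-diffusion sampler for a one-dimensional Gaussian target (where the exact score is available in closed form), not the paper's polylog(1/δ) algorithm; all names, the target variance `sigma0_sq`, and the step counts are illustrative choices.

```python
import math
import random

def reverse_diffusion_sample(sigma0_sq=4.0, T=3.0, steps=150, rng=None):
    """Draw one sample from N(0, sigma0_sq) via reverse diffusion.

    Forward process (OU): dX = -X dt + sqrt(2) dW, so with
    X_0 ~ N(0, sigma0_sq) the marginal at time t is N(0, var_t) with
    var_t = sigma0_sq * exp(-2t) + 1 - exp(-2t), and the exact score
    is grad log p_t(x) = -x / var_t.  In the paper's setting one only
    has an L2-accurate *estimate* of this score.
    """
    rng = rng or random.Random(0)
    h = T / steps
    y = rng.gauss(0.0, 1.0)  # initialize near the stationary law N(0, 1)
    for k in range(steps):
        t = T - k * h  # remaining forward time
        var_t = sigma0_sq * math.exp(-2 * t) + 1 - math.exp(-2 * t)
        score = -y / var_t
        # Euler-Maruyama step of the reverse SDE
        #   dY = (Y + 2 * score) ds + sqrt(2) dW
        y += h * (y + 2 * score) + math.sqrt(2 * h) * rng.gauss(0.0, 1.0)
    return y

if __name__ == "__main__":
    rng = random.Random(0)
    samples = [reverse_diffusion_sample(rng=rng) for _ in range(5000)]
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    print(f"sample mean {mean:.3f}, sample variance {var:.3f}")
```

Run as a script, this recovers a sample mean near 0 and a sample variance near `sigma0_sq`, up to discretization and Monte Carlo error. The contrast with the paper is in the step count: a plain discretization like this one needs a number of steps that grows polynomially in 1/δ to reach δ error, whereas the proposed algorithms need only polylog(1/δ) steps.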