Alternating Diffusion for Proximal Sampling with Zeroth Order Queries

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel proximal sampler for settings where only zeroth-order evaluations of the potential function are available. The method alternates between simulating the forward and backward dynamics of the heat flow, modeling the intermediate particle distribution as a Gaussian mixture and constructing a Monte Carlo score estimator built from directly samplable distributions. In contrast to conventional rejection sampling, the approach has a deterministic runtime, supports flexible step sizes, and benefits from multi-particle interactions and parallel computation. When the target distribution satisfies an isoperimetric condition, the algorithm inherits the exponential convergence rate characteristic of proximal samplers. Numerical experiments confirm rapid convergence to the target distribution.
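As a rough illustration of the Gaussian-mixture modeling step mentioned above: if the intermediate particle distribution is approximated by an equal-weight isotropic Gaussian mixture with bandwidth t, its score is available in closed form and can be evaluated without rejection sampling. This is a minimal sketch under that assumption; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def mixture_score(y, particles, t):
    """Closed-form score of the Gaussian mixture (1/N) * sum_i N(x_i, t*I).

    grad log p(y) = sum_i w_i(y) * (x_i - y) / t,
    where w_i(y) is a softmax of -||y - x_i||^2 / (2t).
    """
    diffs = particles - y                           # (N, d) displacements x_i - y
    logits = -np.sum(diffs**2, axis=1) / (2.0 * t)  # log of unnormalized weights
    logits -= logits.max()                          # stabilize the softmax
    w = np.exp(logits)
    w /= w.sum()
    return (w[:, None] * diffs).sum(axis=0) / t
```

With a single particle x, this reduces to the exact Gaussian score (x - y)/t, which gives a quick sanity check.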

📝 Abstract
This work introduces a new approximate proximal sampler that operates solely with zeroth-order information of the potential function. Prior theoretical analyses have revealed that proximal sampling corresponds to alternating forward and backward iterations of the heat flow. The backward step was originally implemented by rejection sampling, whereas we directly simulate the dynamics. Unlike diffusion-based sampling methods that estimate scores via learned models or by invoking auxiliary samplers, our method treats the intermediate particle distribution as a Gaussian mixture, thereby yielding a Monte Carlo score estimator from directly samplable distributions. Theoretically, when the score estimation error is sufficiently controlled, our method inherits the exponential convergence of proximal sampling under isoperimetric conditions on the target distribution. In practice, the algorithm avoids rejection sampling, permits flexible step sizes, and runs with a deterministic runtime budget. Numerical experiments demonstrate that our approach converges rapidly to the target distribution, driven by interactions among multiple particles and by exploiting parallel computation.
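To make the alternating structure concrete, the following is a self-contained sketch of one forward/backward round under stated assumptions: the backward target, proportional to exp(-f(x)) N(x; y, hI), is modeled as a Gaussian mixture over proposals drawn from the directly samplable N(y, hI) and weighted by zeroth-order queries exp(-f(x_i)), and the reverse heat flow is simulated by Euler discretization starting from y. All names, the initialization, and the discretization choices are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def zo_score(z, xs, logw0, t):
    # Score at time t of the weighted Gaussian mixture sum_i w_i N(x_i, t*I),
    # with base log-weights logw0 (here logw0[i] = -f(x_i)).
    d = xs - z
    logw = logw0 - np.sum(d**2, axis=1) / (2.0 * t)
    w = np.exp(logw - logw.max())   # stabilized softmax weights
    w /= w.sum()
    return (w[:, None] * d).sum(axis=0) / t

def zo_proximal_step(x, f, h, n_mc=256, n_euler=25, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # Forward step: run the heat flow for time h, i.e. y ~ N(x, h*I).
    y = x + np.sqrt(h) * rng.standard_normal(x.shape)
    # Monte Carlo mixture model of the backward target
    # pi(x' | y) ∝ exp(-f(x')) N(x'; y, h*I): Gaussian proposals,
    # reweighted by zeroth-order queries of the potential f.
    xs = y + np.sqrt(h) * rng.standard_normal((n_mc, x.size))
    logw0 = np.array([-f(xi) for xi in xs])
    # Backward step: Euler discretization of the reverse heat-flow SDE
    # dZ_s = score_{h-s}(Z_s) ds + dB_s over s in [0, h].
    dt = h / n_euler
    z = y
    for k in range(n_euler):
        t = h - k * dt              # bandwidth = remaining diffusion time
        z = z + zo_score(z, xs, logw0, t) * dt \
              + np.sqrt(dt) * rng.standard_normal(x.shape)
    return z
```

Note that f enters only through pointwise evaluations at the proposal points, which is what makes the scheme zeroth-order; the per-particle rounds are independent and can run in parallel across an ensemble.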
Problem

Research questions and friction points this paper is trying to address.

proximal sampling
zeroth-order queries
diffusion-based sampling
score estimation
Monte Carlo methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

zeroth-order sampling
proximal sampling
alternating diffusion
score estimation
Gaussian mixture approximation
Hirohane Takagi
Graduate School of Information Science and Technology, The University of Tokyo, Japan
Atsushi Nitanda
CFAR, A*STAR / Nanyang Technological University
Machine learning
Stochastic optimization
Sampling