🤖 AI Summary
To address the intractability or high computational cost of gradient evaluation for target distributions in score-based generative models, this paper proposes the first ensemble-based zeroth-order sampling framework that entirely bypasses explicit gradient computation of the noise-conditional score function. The method tightly couples ensemble perturbation estimation with score-based modeling and drives sampling with a Langevin-type, gradient-free update rule; the authors establish convergence theoretically and demonstrate substantial variance reduction in the score estimate. On standard benchmarks, including CIFAR-10 and CelebA-HQ, the approach achieves FID and Inception Score (IS) competitive with state-of-the-art gradient-based methods, while significantly improving sampling robustness and reducing computational overhead by 23%. The core contribution is the first realization of high-fidelity, low-variance, fully gradient-free sampling in score-based generative modeling.
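To make the mechanism concrete, below is a minimal sketch of the kind of zeroth-order Langevin step the summary describes: the score is estimated from an ensemble of random perturbations rather than by differentiation. This is an illustrative reconstruction, not the paper's actual algorithm; it assumes access to an unnormalized log-density `log_p` that can only be evaluated, and the names `estimate_score_zo` and `langevin_zo_step`, the Gaussian-smoothing estimator, and all parameter values are assumptions of this sketch.

```python
import numpy as np

def estimate_score_zo(log_p, x, sigma=1e-2, n_ensemble=64, rng=None):
    """Zeroth-order (ensemble) estimate of the score, grad log p(x).

    Uses Gaussian-smoothed finite differences: each ensemble member
    perturbs x along a random direction u and weights u by the observed
    change in log-density. No gradient of log_p is ever computed.
    """
    rng = rng or np.random.default_rng()
    u = rng.standard_normal((n_ensemble, *x.shape))      # perturbation directions
    fwd = np.array([log_p(x + sigma * ui) for ui in u])  # perturbed log-densities
    base = log_p(x)                                      # unperturbed log-density
    # Average of directional finite differences; averaging over the
    # ensemble is what drives down the variance of the score estimate.
    diffs = (fwd - base)[:, None] * u.reshape(n_ensemble, -1)
    return diffs.mean(axis=0).reshape(x.shape) / sigma

def langevin_zo_step(log_p, x, step=1e-2, rng=None, **score_kwargs):
    """One gradient-free Langevin update using the ensemble score estimate."""
    rng = rng or np.random.default_rng()
    score = estimate_score_zo(log_p, x, rng=rng, **score_kwargs)
    noise = rng.standard_normal(x.shape)
    return x + 0.5 * step * score + np.sqrt(step) * noise

# Toy usage: sample from a standard Gaussian (true score = -x) without gradients.
if __name__ == "__main__":
    log_p = lambda x: -0.5 * float(x @ x)
    x = np.ones(4)
    for _ in range(1000):
        x = langevin_zo_step(log_p, x)
    print(x)  # should fluctuate around 0 once the chain has mixed
```

The key trade-off in such a scheme is that a larger ensemble spends more density evaluations per step to buy a lower-variance score estimate, which is the variance-reduction effect the summary highlights.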