🤖 AI Summary
Diffusion models can be trained as samplers for distributions specified only by an unnormalized density or energy function, but existing training algorithms differ widely in efficiency and sample quality, and their relative merits are unclear. Method: The paper benchmarks diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods based on continuous generative flow networks (GFNs). It also proposes a novel exploration strategy for off-policy training that combines local search in the target space with a replay buffer. Contribution/Results: Experiments on a variety of target distributions clarify the relative advantages of existing algorithms, call some claims from past work into question, and show that the proposed exploration strategy improves sample quality. The code for the sampling methods and benchmarks is released as a base for future work on diffusion models for amortized inference.
📝 Abstract
We study the problem of training diffusion models to sample from a distribution with a given unnormalized density or energy function. We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods (continuous generative flow networks). Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work. We also propose a novel exploration strategy for off-policy methods, based on local search in the target space combined with a replay buffer, and show that it improves the quality of samples on a variety of target distributions. Our code for the sampling methods and benchmarks studied is made public at https://github.com/GFNOrg/gfn-diffusion as a base for future work on diffusion models for amortized inference.
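The exploration strategy described above, local search in the target space feeding a replay buffer, can be illustrated with a minimal sketch. This is not the paper's implementation: the toy energy function (a two-mode Gaussian mixture), the Langevin-style refinement loop, and the buffer interface are all assumptions chosen to make the example self-contained. The idea it demonstrates is the one the abstract names: exploratory samples are refined toward low-energy (high-density) regions of the target, then stored for off-policy reuse.

```python
import numpy as np

def energy(x):
    """Toy unnormalized negative log-density: a 2D mixture of two unit
    Gaussians centered at (-2, 0) and (2, 0). Stands in for the target."""
    centers = np.array([[-2.0, 0.0], [2.0, 0.0]])
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return -np.logaddexp(-0.5 * d2[:, 0], -0.5 * d2[:, 1])

def grad_energy(x, eps=1e-4):
    """Central finite-difference gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(x)
    for i in range(x.shape[1]):
        dx = np.zeros_like(x)
        dx[:, i] = eps
        g[:, i] = (energy(x + dx) - energy(x - dx)) / (2 * eps)
    return g

def local_search(x, n_steps=50, step=0.05, noise=0.1, rng=None):
    """Langevin-style refinement: descend the energy with small added noise,
    moving samples toward the target's modes."""
    rng = np.random.default_rng(0) if rng is None else rng
    for _ in range(n_steps):
        x = x - step * grad_energy(x) + np.sqrt(2 * step) * noise * rng.standard_normal(x.shape)
    return x

class ReplayBuffer:
    """Fixed-capacity store of refined terminal states for off-policy reuse."""
    def __init__(self, capacity=10_000):
        self.capacity, self.data = capacity, []
    def add(self, xs):
        self.data.extend(list(xs))
        self.data = self.data[-self.capacity:]
    def sample(self, n, rng):
        idx = rng.integers(0, len(self.data), size=n)
        return np.stack([self.data[i] for i in idx])

rng = np.random.default_rng(0)
x0 = 3.0 * rng.standard_normal((64, 2))   # crude exploratory proposals
x_refined = local_search(x0, rng=rng)     # local search in target space
buffer = ReplayBuffer()
buffer.add(x_refined)
batch = buffer.sample(32, rng)            # off-policy training batch
```

In an actual off-policy training loop, batches drawn from the buffer would supply low-energy terminal states against which the diffusion sampler's trajectories are trained, decoupling exploration from the current policy.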