On scalable and efficient training of diffusion samplers

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Efficient sampling from high-dimensional, unnormalized energy distributions with costly energy evaluations remains challenging in data-free settings. Method: The paper proposes the Searcher-Diffusion framework, integrating MCMC-based search with diffusion modeling. It introduces a novelty-based auxiliary energy function to guide exploration of rarely visited modes and, notably, identifies primacy bias in diffusion sampler training as a cause of mode collapse, mitigating it via periodic parameter re-initialization. The framework jointly trains on off-policy and on-policy samples without requiring real data, relying solely on energy evaluations. Contribution/Results: The method achieves significant gains in sample efficiency on standard diffusion sampling benchmarks and attains state-of-the-art performance on high-dimensional synthetic tasks and real-world molecular conformer generation, demonstrating robustness and scalability in data-free energy-based sampling.

📝 Abstract
We address the challenge of training diffusion models to sample from unnormalized energy distributions in the absence of data, so-called diffusion samplers. Although these approaches have shown promise, they struggle to scale in more demanding scenarios where energy evaluations are expensive and the sampling space is high-dimensional. To address this limitation, we propose a scalable and sample-efficient framework that harmonizes powerful classical sampling methods with the diffusion sampler. Specifically, we use Markov chain Monte Carlo (MCMC) samplers with a novelty-based auxiliary energy as a Searcher to collect off-policy samples, where the auxiliary energy function encourages exploration of modes the diffusion sampler rarely visits. These off-policy samples are then combined with on-policy data to train the diffusion sampler, thereby expanding its coverage of the energy landscape. Furthermore, we identify primacy bias, i.e., the preference of samplers for early experience during training, as the main cause of mode collapse, and introduce a periodic re-initialization trick to resolve this issue. Our method significantly improves sample efficiency on standard benchmarks for diffusion samplers and also excels at higher-dimensional problems and real-world molecular conformer generation.
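The Searcher idea in the abstract can be illustrated with a minimal sketch: a random-walk Metropolis chain run on the target energy plus a novelty term that lowers the auxiliary energy far from previously visited points, with the collected off-policy samples mixed into an on-policy batch. The toy `energy`, the `novelty_bonus` form, and all hyperparameters here are assumptions for illustration, not the paper's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Toy double-well energy standing in for the expensive target energy.
    return float(np.minimum(np.sum((x - 2.0) ** 2), np.sum((x + 2.0) ** 2)))

def novelty_bonus(x, visited, scale=0.1):
    # Hypothetical novelty term: auxiliary energy drops far from past samples,
    # pushing the Searcher toward modes the diffusion sampler rarely visits.
    if not visited:
        return 0.0
    d = min(float(np.sum((x - v) ** 2)) for v in visited)
    return -scale * d

def searcher_mcmc(n_steps=300, step=0.5, dim=2):
    # Random-walk Metropolis on the auxiliary energy E(x) + novelty term.
    # The auxiliary energy changes as `visited` grows, so this is an
    # exploration heuristic rather than an exact sampler.
    x = np.zeros(dim)
    visited = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(dim)
        e_cur = energy(x) + novelty_bonus(x, visited)
        e_prop = energy(prop) + novelty_bonus(prop, visited)
        if rng.random() < np.exp(min(0.0, e_cur - e_prop)):
            x = prop
        visited.append(x.copy())
    return np.array(visited)

off_policy = searcher_mcmc()                # replay buffer for the sampler
on_policy = rng.standard_normal((64, 2))    # placeholder for the sampler's own draws
idx = rng.integers(0, len(off_policy), 64)
batch = np.concatenate([off_policy[idx], on_policy])  # mixed training batch
```

The mixed `batch` stands in for the joint off-policy/on-policy training step described in the abstract; in the actual framework the on-policy samples would come from the diffusion sampler itself.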
Problem

Research questions and friction points this paper is trying to address.

Training diffusion models without data for sampling unnormalized energy distributions
Scaling diffusion samplers in high-dimensional spaces with costly energy evaluations
Addressing mode collapse due to primacy bias during sampler training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines MCMC with diffusion samplers for efficiency
Uses novelty-based auxiliary energy for exploration
Introduces periodic re-initialization to prevent mode collapse
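The re-initialization idea above can be sketched as a training loop that periodically resets part of the sampler's parameters. The two-layer parameter dictionary, the choice of resetting only the output layer, and the `reset_period` value are all illustrative assumptions; the paper's architecture and schedule will differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def init_layer(shape):
    return rng.standard_normal(shape) * 0.1

# Hypothetical two-layer sampler network.
params = {"w1": init_layer((2, 16)), "w2": init_layer((16, 2))}
w2_at_start = params["w2"].copy()

reset_period = 100  # assumed hyperparameter, not taken from the paper

for step in range(1, 501):
    # ... gradient update on a mixed on-/off-policy batch would go here ...
    if step % reset_period == 0:
        # Re-initialize the output layer so the sampler sheds its preference
        # for early experience (primacy bias), while earlier layers and the
        # replay buffer retain what was learned.
        params["w2"] = init_layer(params["w2"].shape)
```

Resetting only later layers is a common way to apply this trick in reinforcement learning, where primacy bias was first described; how much of the network to reset is a design choice.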
Authors

Minkyu Kim (Korea Advanced Institute of Science and Technology (KAIST))
Kiyoung Seong (M.Sc. student, KAIST; AI for Science)
Dongyeop Woo (KAIST; Machine Learning)
Sungsoo Ahn (KAIST; Machine Learning)
Minsu Kim (Korea Advanced Institute of Science and Technology (KAIST), Mila - Quebec AI Institute)