🤖 AI Summary
This work addresses the challenge of efficient, unbiased sampling from complex unnormalized distributions over discrete domains. We propose two training methods for discrete diffusion samplers, one grounded in the policy gradient theorem and one in self-normalized neural importance sampling (SN-NIS). Both avoid backpropagating through the entire generative process, lifting the memory bottleneck that limits the number of attainable diffusion steps in prior discrete diffusion approaches. Coupled with adaptations of SN-NIS and neural Markov chain Monte Carlo, our framework is, to our knowledge, the first to combine memory-scalable training of discrete diffusion models with strictly unbiased sampling. Our methods achieve state-of-the-art results in unsupervised combinatorial optimization and outperform popular autoregressive baselines on Ising model benchmarks, bridging a critical gap between practical efficiency and statistical correctness in discrete generative modeling.
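To make the SN-NIS idea concrete, here is a minimal self-contained sketch on a toy Ising-like chain target. Everything below is a hypothetical illustration, not the paper's implementation: the energy function, the independent-Bernoulli stand-in for a trained sampler with tractable likelihood, and the name `snis_estimate` are all assumptions. The key point is that the weights are normalized against each other, so the unknown partition function cancels.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p_tilde(x):
    # Toy unnormalized target over {0,1}^n (hypothetical): a ferromagnetic
    # chain that rewards agreeing neighbours, log p~(x) = #agreements.
    return (x[:, :-1] == x[:, 1:]).sum(axis=1).astype(float)

# Stand-in for a learned model: independent Bernoulli(q) per site,
# chosen only because its exact log-likelihood is trivial to evaluate.
q = 0.6

def sample_model(num, n):
    return (rng.random((num, n)) < q).astype(int)

def log_q_model(x):
    return (x * np.log(q) + (1 - x) * np.log(1 - q)).sum(axis=1)

def snis_estimate(f, num=10_000, n=8):
    """Self-normalized importance sampling estimate of E_p[f(x)]."""
    x = sample_model(num, n)
    log_w = log_p_tilde(x) - log_q_model(x)   # unnormalized log weights
    w = np.exp(log_w - log_w.max())           # stabilize before exponentiating
    w /= w.sum()                              # self-normalization: Z cancels
    return float((w * f(x)).sum())

# The target is invariant under flipping all bits, so E_p[mean bit] = 0.5.
mean_bit = snis_estimate(lambda x: x.mean(axis=1))
```

The estimator is consistent but, because of the normalization, only asymptotically unbiased, which is why the paper pairs it with an additional correction step for strictly unbiased sampling.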
📝 Abstract
Learning to sample from complex unnormalized distributions over discrete domains has emerged as a promising research direction with applications in statistical physics, variational inference, and combinatorial optimization. Recent work has demonstrated the potential of diffusion models in this domain. However, existing methods are limited in memory scaling, and thus in the number of attainable diffusion steps, since they require backpropagation through the entire generative process. To overcome these limitations, we introduce two novel training methods for discrete diffusion samplers, one grounded in the policy gradient theorem and the other leveraging Self-Normalized Neural Importance Sampling (SN-NIS). These methods yield memory-efficient training and achieve state-of-the-art results in unsupervised combinatorial optimization. Numerous scientific applications additionally require unbiased sampling. We introduce adaptations of SN-NIS and Neural Markov Chain Monte Carlo that enable, for the first time, the application of discrete diffusion models to this problem. We validate our methods on Ising model benchmarks and find that they outperform popular autoregressive approaches. Our work opens new avenues for applying diffusion models to a wide range of scientific applications in discrete domains that were hitherto restricted to exact likelihood models.
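The Neural MCMC route to unbiased sampling can be sketched with an independence Metropolis-Hastings chain: a generative model with a tractable likelihood proposes whole configurations, and the accept/reject step corrects any residual model bias, so the chain targets the true distribution exactly in the stationary limit. This is a generic sketch under stated assumptions, not the paper's method: the toy Ising-like target and the independent-Bernoulli proposal stand in for the trained diffusion sampler.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

def log_p_tilde(x):
    # Same toy unnormalized target (hypothetical): count agreeing neighbours.
    return float((x[:-1] == x[1:]).sum())

# Stand-in proposal with exact log-likelihood: independent Bernoulli(q).
q = 0.6

def propose():
    x = (rng.random(n) < q).astype(int)
    log_q = float((x * np.log(q) + (1 - x) * np.log(1 - q)).sum())
    return x, log_q

# Independence Metropolis-Hastings:
#   accept y with prob min(1, [p~(y)/q(y)] / [p~(x)/q(x)]),
# so only importance ratios are needed and the partition function cancels.
x, log_q_x = propose()
trace = []
for _ in range(20_000):
    y, log_q_y = propose()
    log_alpha = (log_p_tilde(y) - log_q_y) - (log_p_tilde(x) - log_q_x)
    if np.log(rng.random()) < log_alpha:
        x, log_q_x = y, log_q_y
    trace.append(x.mean())

# Discard burn-in; the flip symmetry of the target makes the true mean 0.5.
mean_bit = float(np.mean(trace[2_000:]))
```

The better the proposal model matches the target, the higher the acceptance rate and the faster the chain mixes, which is why training quality and unbiasedness are complementary rather than competing goals here.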