🤖 AI Summary
In unsupervised neural combinatorial optimization, efficiently sampling from discrete solution spaces remains challenging: existing methods require exact sample likelihoods, which rules out highly expressive latent-variable models such as diffusion models.
Method: This work introduces diffusion models to data-free combinatorial optimization for the first time, proposing a novel training objective based on an upper bound of the reverse KL divergence that eliminates the need for exact likelihood evaluation. We further design discrete-structure-aware embedding representations and a tailored denoising process, enabling direct learning of high-quality solution distributions without supervision.
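One standard way such a likelihood-free objective arises (notation illustrative, not taken from the paper): extend the target distribution over solutions to a joint distribution over the model's latents; the chain rule of the KL divergence then bounds the intractable marginal KL by a tractable joint KL.

```latex
% Sketch (illustrative notation): q_\theta(x) = \sum_z q_\theta(x, z) is the
% latent-variable sampler; p(x, z) = p(x)\, r(z \mid x) extends the target
% with any chosen conditional r. The chain rule of KL gives
D_{\mathrm{KL}}\!\left(q_\theta(x) \,\|\, p(x)\right)
  \;\le\; D_{\mathrm{KL}}\!\left(q_\theta(x, z) \,\|\, p(x, z)\right),
% since the joint KL equals the marginal KL plus a nonnegative expected
% conditional KL. The right-hand side needs only the joint density
% q_\theta(x, z) -- a product of per-step transitions for a diffusion
% model -- never the intractable marginal q_\theta(x).
```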
Results: Our approach achieves significant improvements over state-of-the-art methods across multiple benchmark combinatorial optimization tasks, demonstrating superior solution quality, sampling efficiency, and cross-problem generalization, all in a fully unsupervised setting.
📝 Abstract
Learning to sample from intractable distributions over discrete sets without relying on corresponding training data is a central problem in a wide range of fields, including Combinatorial Optimization. Currently, popular deep learning-based approaches rely primarily on generative models that yield exact sample likelihoods. This work introduces a method that lifts this restriction and opens the possibility to employ highly expressive latent variable models like diffusion models. Our approach is conceptually based on a loss that upper bounds the reverse Kullback-Leibler divergence and evades the requirement of exact sample likelihoods. We experimentally validate our approach in data-free Combinatorial Optimization and demonstrate that our method achieves a new state-of-the-art on a wide range of benchmark problems.
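A quick numerical illustration (toy distributions of my own construction, not the paper's model or benchmarks): on a tiny discrete space one can verify the chain-rule inequality that makes a joint-space KL a valid surrogate loss when the marginal sample likelihood is unavailable.

```python
# Toy check: KL between joint distributions upper-bounds KL between their
# marginals, which is why a joint-space loss can replace the intractable
# marginal likelihood of a latent-variable sampler. All numbers illustrative.
import itertools
import math

def normalize(w):
    s = sum(w.values())
    return {k: v / s for k, v in w.items()}

# Tiny discrete space: samples x and latents z each live in {0,1}^2.
states = list(itertools.product([0, 1], repeat=2))

# Arbitrary positive joint weights standing in for a latent-variable sampler
# q_theta(x, z) and an extended target p(x, z) = p(x) r(z | x).
q = normalize({(x, z): math.exp(-(sum(x) + 0.5 * sum(z) + 0.3 * x[0] * z[1]))
               for x in states for z in states})
p = normalize({(x, z): math.exp(-(2.0 * sum(x) + sum(z)))
               for x in states for z in states})

def marginal(d):
    # Sum out the latent z, keeping only the sample x.
    m = {}
    for (x, z), v in d.items():
        m[x] = m.get(x, 0.0) + v
    return m

def kl(a, b):
    # KL divergence between two finite distributions with shared support.
    return sum(v * math.log(v / b[k]) for k, v in a.items() if v > 0)

kl_joint = kl(q, p)
kl_marg = kl(marginal(q), marginal(p))
assert kl_marg <= kl_joint + 1e-12  # chain rule of KL: marginal <= joint
print(f"KL(q(x)||p(x)) = {kl_marg:.4f} <= KL(q(x,z)||p(x,z)) = {kl_joint:.4f}")
```

Minimizing the joint KL therefore drives down the marginal KL as well, without ever evaluating the marginal likelihood q_theta(x).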