A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization

📅 2024-06-03
🏛️ International Conference on Machine Learning
📈 Citations: 28
Influential: 4
🤖 AI Summary
In unsupervised neural combinatorial optimization, efficiently sampling from discrete solution spaces remains challenging, as existing methods rely on exact likelihood computation, rendering them incompatible with highly expressive latent-variable models such as diffusion models.

Method: This work introduces diffusion models to data-free combinatorial optimization for the first time, proposing a novel training objective based on an upper bound of the reverse KL divergence that eliminates the need for exact likelihood evaluation. We further design discrete-structure-aware embedding representations and a tailored denoising process, enabling direct learning of high-quality solution distributions without supervision.

Results: Our approach achieves significant improvements over state-of-the-art methods across multiple benchmark combinatorial optimization tasks, demonstrating superior solution quality, sampling efficiency, and cross-problem generalization, all under a fully unsupervised setting.

📝 Abstract
Learning to sample from intractable distributions over discrete sets without relying on corresponding training data is a central problem in a wide range of fields, including Combinatorial Optimization. Currently, popular deep learning-based approaches rely primarily on generative models that yield exact sample likelihoods. This work introduces a method that lifts this restriction and opens the possibility to employ highly expressive latent variable models like diffusion models. Our approach is conceptually based on a loss that upper bounds the reverse Kullback-Leibler divergence and evades the requirement of exact sample likelihoods. We experimentally validate our approach in data-free Combinatorial Optimization and demonstrate that our method achieves a new state-of-the-art on a wide range of benchmark problems.
Problem

Research questions and friction points this paper is trying to address.

Learning to sample from intractable discrete distributions without training data
Introducing diffusion models for unsupervised combinatorial optimization tasks
Overcoming the limitation of requiring exact sample likelihoods in generative models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces diffusion models to data-free combinatorial optimization
Employs expressive latent-variable models without requiring exact sample likelihoods
Trains by minimizing an upper bound on the reverse KL divergence, enabling fully unsupervised learning
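The bound underlying the last point rests on a standard fact: the KL divergence between two joint distributions upper-bounds the KL between their marginals (the chain rule of KL). So minimizing a tractable KL over the model's full joint, e.g. over whole denoising trajectories, drives down the intractable reverse KL over solutions without ever evaluating a marginal likelihood. A minimal numeric sketch of that inequality, with all distributions invented purely for illustration (they are not from the paper):

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions, in nats."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Toy setup: solution x has 3 states, latent (trajectory) z has 2 states.
# q(x, z): the model's joint; its marginal q(x) is what we care about but
# is intractable for real latent-variable models like diffusion models.
q_joint = np.array([[0.20, 0.10],
                    [0.15, 0.25],
                    [0.05, 0.25]])
q_x = q_joint.sum(axis=1)

# Target distribution over solutions, e.g. a Boltzmann distribution
# exp(-E(x)) / Z defined by the optimization problem's energy E.
p_x = np.array([0.5, 0.3, 0.2])

# Extending the target with ANY reference conditional p(z | x) gives a
# joint whose KL against q(x, z) upper-bounds the marginal reverse KL.
p_z_given_x = np.array([[0.6, 0.4],
                        [0.5, 0.5],
                        [0.3, 0.7]])
p_joint = p_x[:, None] * p_z_given_x

kl_marginal = kl(q_x, p_x)                        # intractable in practice
kl_joint = kl(q_joint.ravel(), p_joint.ravel())   # tractable surrogate loss

print(kl_marginal, kl_joint)
assert kl_joint >= kl_marginal  # chain rule of KL: joint >= marginal
```

In the paper's setting, q is the learned reverse (denoising) process and the target joint combines the Boltzmann distribution over solutions with the fixed forward noising process; the sketch only shows why minimizing the joint KL is a sound surrogate.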