Latent Guided Sampling for Combinatorial Optimization

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural combinatorial optimization methods suffer from poor generalization, weak out-of-distribution (OOD) performance, and brittle inference when tackling NP-hard problems. Method: We propose a label-free, pretraining-free framework for latent-space modeling and inference. Our approach introduces instance-conditioned latent-space modeling and a novel Latent Guided Sampling (LGS) mechanism, integrating Markov Chain Monte Carlo (MCMC), stochastic approximation, and conditional encoding. Contribution/Results: Theoretically, we establish for the first time that the LGS iterations form a time-inhomogeneous Markov chain and provide rigorous convergence guarantees. Empirically, on standard routing benchmarks, our method matches or exceeds the best reinforcement learning-based approaches while significantly improving OOD generalization and solution stability, without requiring ground-truth labels or pretrained models.

📝 Abstract
Combinatorial Optimization problems are widespread in domains such as logistics, manufacturing, and drug discovery, yet their NP-hard nature makes them computationally challenging. Recent Neural Combinatorial Optimization methods leverage deep learning to learn solution strategies, trained via Supervised or Reinforcement Learning (RL). While promising, these approaches often rely on task-specific augmentations, perform poorly on out-of-distribution instances, and lack robust inference mechanisms. Moreover, existing latent space models either require labeled data or rely on pre-trained policies. In this work, we propose LGS-Net, a novel latent space model that conditions on problem instances, and introduce an efficient inference method, Latent Guided Sampling (LGS), based on Markov Chain Monte Carlo and Stochastic Approximation. We show that the iterations of our method form a time-inhomogeneous Markov Chain and provide rigorous theoretical convergence guarantees. Empirical results on benchmark routing tasks show that our method achieves state-of-the-art performance among RL-based approaches.
Problem

Research questions and friction points this paper is trying to address.

Solving NP-hard combinatorial optimization challenges efficiently
Improving neural methods for out-of-distribution generalization
Developing label-free latent space models for optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent space model conditioning on problem instances
Efficient inference via Markov Chain Monte Carlo
Stochastic Approximation for robust convergence
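The two inference ingredients above, MCMC sampling in a latent space combined with a stochastic-approximation (decreasing) step-size schedule, can be illustrated with a minimal toy sketch. This is not the paper's LGS algorithm: the function name, the hand-written `energy` objective, and the starting point `z0` are all illustrative stand-ins, and the real LGS-Net conditions the latent distribution on the problem instance through a learned encoder rather than a fixed closed-form energy.

```python
import math
import random

def latent_guided_sampling(energy, z0, n_iter=2000, seed=0):
    """Toy Metropolis-style sampler over a latent vector with a
    Robbins-Monro (stochastic-approximation) step-size schedule.
    Because the proposal scale changes with t, the resulting chain
    is time-inhomogeneous, mirroring the property analyzed in the
    paper. `energy` and `z0` are illustrative stand-ins."""
    rng = random.Random(seed)
    z = list(z0)
    best, best_e = list(z), energy(z)
    for t in range(1, n_iter + 1):
        step = 1.0 / math.sqrt(t)  # decreasing step size (stochastic approximation)
        prop = [zi + step * rng.gauss(0.0, 1.0) for zi in z]
        # Metropolis acceptance: always accept lower-energy proposals,
        # accept worse ones with probability exp(-(energy increase)).
        if rng.random() < math.exp(min(0.0, energy(z) - energy(prop))):
            z = prop
        if energy(z) < best_e:
            best, best_e = list(z), energy(z)
    return best, best_e

# Example: guide samples toward the minimum of a toy quadratic energy.
best, best_e = latent_guided_sampling(lambda z: sum(x * x for x in z), [3.0, 3.0])
```

On this toy objective the sampler steadily drives the best-found energy down from its starting value; in the paper's setting the same machinery searches the learned latent space for high-quality solutions to the routing instance.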