🤖 AI Summary
To address the high computational cost and low solving efficiency that multi-step sampling imposes on diffusion models for combinatorial optimization (CO), this paper proposes Fast T2T, a single-step solving paradigm within the training-to-testing (T2T) framework. The method integrates diffusion modeling with optimization-consistency learning and gradient-driven latent-space refinement. Specifically, it introduces (1) an optimization-consistency training protocol that enables the model to map noisy inputs directly to high-quality solutions, bypassing iterative denoising; and (2) a consistency-based gradient search mechanism at test time that efficiently refines single-step solutions in the latent space. Evaluated on the Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS), the approach achieves superior solution quality over state-of-the-art diffusion-based methods using only one generation step plus one gradient step, accelerates inference by over an order of magnitude, and outperforms LKH under constrained time budgets.
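As a rough illustration of the optimization-consistency training protocol described above, the PyTorch-style sketch below pulls one-step denoised predictions from two different noise levels toward the same optimal reference solution. The function name, the linear keep-probability noise schedule, and the binary cross-entropy loss are all assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def consistency_training_step(model, instance, x_star, T=1000):
    """One optimization-consistency training step (a sketch with assumed
    names, not the authors' exact loss). `model(instance, x_t, t)` is
    assumed to map a noisy solution x_t at noise level t directly to
    per-variable solution probabilities in [0, 1]; `x_star` is the
    reference optimal solution as a {0, 1} long tensor over edges (TSP)
    or nodes (MIS).
    """
    def corrupt(x, t):
        # Bernoulli corruption toward the uniform distribution: each bit
        # is kept with probability 1 - t/T (a stand-in noise schedule).
        keep_prob = 1.0 - t / T
        resample = torch.rand_like(x, dtype=torch.float) >= keep_prob
        noise = torch.randint_like(x, 2)
        return torch.where(resample, noise, x)

    # Two noise levels; both one-step predictions are pulled toward the
    # same optimal solution, which also makes them consistent with each
    # other across time steps and trajectories.
    t1, t2 = torch.randint(1, T + 1, (2,)).tolist()
    p1 = model(instance, corrupt(x_star, t1), t1)
    p2 = model(instance, corrupt(x_star, t2), t2)

    loss = (F.binary_cross_entropy(p1, x_star.float())
            + F.binary_cross_entropy(p2, x_star.float()))
    return loss
```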
📝 Abstract
Diffusion models have recently advanced Combinatorial Optimization (CO) as a powerful backbone for neural solvers. However, their iterative sampling process, which requires denoising across multiple noise levels, incurs substantial overhead. We propose to learn direct mappings from different noise levels to the optimal solution of a given instance, enabling high-quality generation in a minimal number of shots. This is achieved through an optimization consistency training protocol that, for a given instance, minimizes the difference between the optimal solution and samples originating from varying generative trajectories and time steps. The resulting model enables fast single-step solution generation while retaining the option of multi-step sampling to trade computation for solution quality, offering a more effective and efficient backbone for neural solvers. In addition, within the training-to-testing (T2T) framework, to bridge the gap between training on historical instances and solving new ones, we introduce a novel consistency-based gradient search scheme at test time, enabling more effective exploration of the solution space learned during training. This is achieved by updating the latent solution probabilities under objective-gradient guidance while alternating noise injection and denoising steps. We refer to this model as Fast T2T. Extensive experiments on two popular tasks, the Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS), demonstrate the superiority of Fast T2T in both solution quality and efficiency; it even outperforms LKH under limited time budgets. Notably, Fast T2T with merely one-step generation and one-step gradient search mostly outperforms state-of-the-art diffusion-based counterparts that require hundreds of steps, while running tens of times faster.
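The test-time consistency-based gradient search can likewise be sketched: alternate objective-gradient updates on the latent solution probabilities with noise re-injection and single-step denoising. All names and hyperparameters here (`objective`, `t_init`, `t_mid`, `lr`, the number of alternation steps) are hypothetical stand-ins, since the abstract does not specify them.

```python
import torch

def consistency_gradient_search(model, instance, objective, num_vars,
                                steps=1, t_init=1000, t_mid=200, lr=0.1):
    """Test-time consistency-based gradient search (a sketch with assumed
    names and hyperparameters). Alternates (i) a gradient update of the
    latent solution probabilities under the objective's gradient, (ii)
    noise re-injection, and (iii) a single-step consistency denoise.
    `objective(instance, p)` is assumed to be a differentiable surrogate
    of the CO objective (e.g. expected tour length for TSP, or a penalized
    independent-set size for MIS) returning a scalar to minimize.
    """
    # One-shot generation: start from pure noise and denoise in one step.
    x_T = torch.randint(0, 2, (num_vars,))
    p = model(instance, x_T, t_init)

    for _ in range(steps):
        # (i) Refine latent probabilities along the objective gradient.
        p = p.detach().requires_grad_(True)
        (grad,) = torch.autograd.grad(objective(instance, p), p)
        p = (p - lr * grad).clamp(1e-4, 1.0 - 1e-4).detach()

        # (ii) Re-inject noise by sampling a corrupted solution from p,
        # (iii) then denoise again with a single consistency step.
        x_t = torch.bernoulli(p).long()
        p = model(instance, x_t, t_mid)

    # Decode a discrete solution, e.g. by thresholding (practical solvers
    # use greedy/feasibility-aware decoding for TSP and MIS).
    return (p > 0.5).long()
```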