An Efficient Diffusion-based Non-Autoregressive Solver for Traveling Salesman Problem

📅 2025-01-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper introduces DEITSP, an efficient non-autoregressive diffusion model tailored to the Traveling Salesman Problem (TSP). Departing from sequential generation paradigms, DEITSP pioneers a single-step controllable discrete-noise diffusion mechanism, integrated with self-consistency enhancement and alternating denoising/noising iterations. It employs a dual-modality graph Transformer to jointly encode node and edge representations, and introduces a progressive noise-scheduling framework to jointly optimize solution quality and inference efficiency. Evaluated on both real-world and large-scale TSP benchmarks, DEITSP consistently outperforms existing neural solvers in solution quality, inference latency, and generalization to unseen problem sizes. By unifying discrete diffusion with combinatorial structure modeling, DEITSP establishes a scalable, robust diffusion-based paradigm for combinatorial optimization.

πŸ“ Abstract
Recent advances in neural models have shown considerable promise in solving Traveling Salesman Problems (TSPs) without relying on much hand-crafted engineering. However, while non-autoregressive (NAR) approaches benefit from faster inference through parallelism, they typically deliver solutions of inferior quality compared to autoregressive ones. To enhance the solution quality while maintaining fast inference, we propose DEITSP, a diffusion model with efficient iterations tailored for TSP that operates in a NAR manner. Firstly, we introduce a one-step diffusion model that integrates the controlled discrete noise addition process with self-consistency enhancement, enabling optimal solution prediction through simultaneous denoising of multiple solutions. Secondly, we design a dual-modality graph transformer to bolster the extraction and fusion of features from node and edge modalities, while further accelerating the inference with fewer layers. Thirdly, we develop an efficient iterative strategy that alternates between adding and removing noise to improve exploration compared to previous diffusion methods. Additionally, we devise a scheduling framework to progressively refine the solution space by adjusting noise levels, facilitating a smooth search for optimal solutions. Extensive experiments on real-world and large-scale TSP instances demonstrate that DEITSP performs favorably against existing neural approaches in terms of solution quality, inference latency, and generalization ability. Our code is available at https://github.com/DEITSP/DEITSP.
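The abstract's iterative strategy, alternating denoising and re-noising over an edge heatmap under a progressively decreasing noise schedule, might be sketched roughly as follows. Everything here is illustrative: `toy_denoiser` stands in for the learned dual-modality graph Transformer, and the greedy decoder, linear schedule, and noising rule are simplified assumptions, not the paper's actual implementation.

```python
import numpy as np

def toy_denoiser(x_t, dist):
    # Stand-in for the learned model: blend the noisy heatmap with a
    # distance-based prior that favors short edges (illustration only).
    scores = 1.0 / (dist + 1e-9)
    np.fill_diagonal(scores, 0.0)
    return 0.5 * x_t + 0.5 * scores / scores.max()

def add_noise(x0, noise_level, rng):
    # Controlled noising: resample a noise_level fraction of heatmap entries.
    mask = rng.random(x0.shape) < noise_level
    return np.where(mask, rng.random(x0.shape), x0)

def greedy_decode(heatmap):
    # Decode a tour by repeatedly following the highest-scoring unvisited edge.
    n = heatmap.shape[0]
    tour, visited = [0], {0}
    for _ in range(n - 1):
        candidates = np.argsort(heatmap[tour[-1]])[::-1]
        nxt = int(next(c for c in candidates if c not in visited))
        tour.append(nxt)
        visited.add(nxt)
    return tour

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def deitsp_style_search(coords, steps=5, seed=0):
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    n = len(coords)
    x = rng.random((n, n))                   # start from pure noise
    best_tour, best_len = None, np.inf
    schedule = np.linspace(0.8, 0.1, steps)  # progressively lower noise levels
    for noise_level in schedule:
        x = toy_denoiser(x, dist)            # denoise step
        tour = greedy_decode(x)
        length = tour_length(tour, dist)
        if length < best_len:                # keep the best decoded solution
            best_tour, best_len = tour, length
        x = add_noise(x, noise_level, rng)   # re-noise to keep exploring
    return best_tour, best_len
```

The point of the alternation is that each re-noising step perturbs the heatmap just enough to escape the current decoded tour, while the shrinking schedule concentrates later iterations around increasingly refined solutions.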
Problem

Research questions and friction points this paper is trying to address.

Traveling Salesman Problem
Computational Speed
Solution Quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel Computing
Iterative Optimization
Enhanced Solution Quality
👥 Authors
Mingzhao Wang
Jilin University, Changchun, China
You Zhou
Jilin University, Changchun, China
Zhiguang Cao
Singapore Management University
Learning to Optimize · Neural Combinatorial Optimization · Computational Intelligence
Yubin Xiao
Jilin University
Neural Combinatorial Optimization
Xuan Wu
Jilin University, Changchun, China
Wei Pang
Heriot-Watt University, Edinburgh, United Kingdom
Yuan Jiang
Nanyang Technological University
Large Language Models · Reinforcement Learning · Combinatorial Optimization · Operations Research
Hui Yang
Jilin University, Changchun, China
Peng Zhao
Jilin University, Changchun, China
Yuanshu Li
Jilin University, Changchun, China