Rethinking Neural Combinatorial Optimization for Vehicle Routing Problems with Different Constraint Tightness Degrees

📅 2025-05-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing neural combinatorial optimization (NCO) methods are trained and evaluated under fixed constraint values, exhibiting poor generalization to varying constraint tightness and severe overfitting. Method: Using the capacitated vehicle routing problem (CVRP) and its time-window variant (CVRPTW) as benchmarks, we systematically characterize, for the first time, the degradation mechanism of NCO models across the constraint-tightness dimension. We propose a tightness-aware training paradigm featuring: (i) explicit tightness encoding, (ii) a mixture-of-experts (MoE) collaborative decoding architecture, (iii) a tightness-adaptive loss function, and (iv) constraint-aware data augmentation. Contribution/Results: Experiments demonstrate that our approach significantly improves robustness across a wide range of capacity constraints—achieving an average 3.2% improvement in solution quality over state-of-the-art methods, extending the covered constraint-value span by 5×, and enabling, for the first time, strong cross-tightness generalization in NCO models.

Technology Category

Application Category

📝 Abstract
Recent neural combinatorial optimization (NCO) methods have shown promising problem-solving ability without requiring domain-specific expertise. Most existing NCO methods use training and testing data with a fixed constraint value and lack research on the effect of constraint tightness on the performance of NCO methods. This paper takes the capacity-constrained vehicle routing problem (CVRP) as an example to empirically analyze the NCO performance under different tightness degrees of the capacity constraint. Our analysis reveals that existing NCO methods overfit the capacity constraint, and they can only perform satisfactorily on a small range of the constraint values but poorly on other values. To tackle this drawback of existing NCO methods, we develop an efficient training scheme that explicitly considers varying degrees of constraint tightness and proposes a multi-expert module to learn a generally adaptable solving strategy. Experimental results show that the proposed method can effectively overcome the overfitting issue, demonstrating superior performances on the CVRP and CVRP with time windows (CVRPTW) with various constraint tightness degrees.
Problem

Research questions and friction points this paper is trying to address.

Analyzing NCO performance under varying capacity constraint tightness
Addressing overfitting of NCO methods to fixed constraint values
Developing adaptable training for diverse constraint tightness scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training scheme considers varying constraint tightness
Multi-expert module for adaptable solving strategy
Overcomes overfitting in neural combinatorial optimization
🔎 Similar Papers
No similar papers found.
Fu Luo
Fu Luo
Southern University of Science and Technology
Neural combinatorial optimization
Yaoxin Wu
Yaoxin Wu
Eindhoven University of Technology
Deep learningCombinatorial optimizationInteger programmingMulti-objective optimization
Z
Zhi Zheng
School of Computing, National University of Singapore, Singapore
Z
Zhenkun Wang
Guangdong Provincial Key Laboratory of Fully Actuated System Control Theory and Technology, School of Automation and Intelligent Manufacturing, Southern University of Science and Technology, Shenzhen, China