On Distributional Dependent Performance of Classical and Neural Routing Solvers

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural combinatorial optimization (NCO) methods for routing problems suffer from limited generalization, lagging behind specialized metaheuristics in solution quality. Method: We propose a novel paradigm for generating large-scale, structured problem instances based on fixed backbone node distributions, enabling systematic modeling of intrinsic training/test data distribution characteristics. Neural solvers are trained on this controlled distribution and rigorously benchmarked against classical operations research methods—including LKH and ant colony optimization (ACO)—under distributionally consistent evaluation protocols. Contribution/Results: We demonstrate that distribution consistency significantly enhances out-of-distribution generalization: the average optimality gap narrows by 30–50% on unseen instances. Our key contribution is the first rigorous identification of data distribution as a critical determinant of NCO performance, coupled with the development of the first reproducible, structured benchmark generation framework specifically designed for routing problems. This work provides both theoretical insights and practical guidelines for the reliable deployment of neural solvers in combinatorial optimization.
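The summary reports results as optimality gaps relative to a strong reference solver such as LKH. As a minimal sketch (the function name and the example numbers are illustrative, not from the paper), the gap metric is simply the relative excess tour cost:

```python
def optimality_gap(cost, reference_cost):
    """Relative gap of a solver's tour cost to a reference (e.g. LKH) cost."""
    return (cost - reference_cost) / reference_cost

# e.g. a neural solver tour of length 7.9 vs. a reference tour of length 7.5
gap = optimality_gap(7.9, 7.5)  # ~0.053, i.e. a 5.3% gap
```

A "30–50% narrowing" of the average gap then means this quantity, averaged over test instances, shrinks by that fraction under distributionally consistent training.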

📝 Abstract
Neural Combinatorial Optimization aims to learn to solve a class of combinatorial problems through data-driven methods, notably by employing neural networks to learn the underlying distribution of problem instances. While neural methods so far struggle to outperform highly engineered, problem-specific meta-heuristics, this work explores a novel approach to formulating the distribution of problem instances to learn from and, more importantly, to planting a structure in the sampled problem instances. Applied to routing problems, we generate large problem instances that represent custom base problem instance distributions from which training instances are sampled. The test instances for evaluating methods on the routing task consist of unseen problems sampled from the same underlying large problem instance. We evaluate representative NCO methods and specialized Operations Research meta-heuristics on this novel task and demonstrate that the performance gap between neural routing solvers and highly specialized meta-heuristics decreases when learning from sub-samples drawn from a fixed base node distribution.
Problem

Research questions and friction points this paper is trying to address.

Learning to solve combinatorial problems via neural networks
Exploring distribution formulation for routing problem instances
Reducing performance gap between neural and meta-heuristic solvers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns from custom base problem distributions
Structures sampled problem instances for training
Uses sub-samples from fixed base node distribution
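The sampling scheme above can be sketched in a few lines: build one large "base" node set with planted structure, then draw training and test instances by sub-sampling it. This is an illustrative sketch under assumed details (clustered Gaussian backbone, uniform sub-sampling); the paper's actual generator may differ.

```python
import numpy as np

def make_base_instance(n_base=10_000, n_clusters=50, seed=0):
    """Large base node set with planted cluster structure (assumed form)."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0.0, 1.0, size=(n_clusters, 2))
    assignment = rng.integers(0, n_clusters, size=n_base)
    nodes = centers[assignment] + rng.normal(scale=0.02, size=(n_base, 2))
    return np.clip(nodes, 0.0, 1.0)

def sample_sub_instance(base_nodes, size, rng):
    """Draw one routing instance by sub-sampling the fixed base node set."""
    idx = rng.choice(len(base_nodes), size=size, replace=False)
    return base_nodes[idx]

base = make_base_instance()
rng = np.random.default_rng(42)
train_instances = [sample_sub_instance(base, 100, rng) for _ in range(1000)]
test_instance = sample_sub_instance(base, 100, rng)  # unseen, same base
```

Because train and test instances share the same base distribution, the evaluation protocol is distributionally consistent by construction.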
Daniela Thyssens
Information Systems and ML Lab, University of Hildesheim, Hildesheim, Germany
Tim Dernedde
Information Systems and ML Lab, University of Hildesheim, Hildesheim, Germany
Wilson Sentanoe
University of Hildesheim, Hildesheim, Germany
Lars Schmidt-Thieme
University of Hildesheim, Germany
machine learning