🤖 AI Summary
For large-scale Capacitated Vehicle Routing Problems with Time Windows (CVRPTW), the Elementary Shortest Path Problem with Resource Constraints (ESPPRC)—the pricing subproblem in column generation—becomes computationally intractable as the underlying graph grows. This paper proposes an unsupervised Graph Neural Network (GNN)-based graph reduction method that learns arc retention probabilities end-to-end, automatically identifying and preserving high-potential arcs to construct a computationally tractable reduced pricing subgraph. To our knowledge, this is the first work to employ unsupervised GNNs for ESPPRC graph reduction; the approach eliminates hand-crafted heuristics and integrates local search to accelerate column generation convergence. Experiments demonstrate that, under a fixed computational budget, the approach improves objective values by over 9% on larger instances, significantly speeds up convergence, and generalizes well across diverse CVRPTW instance classes.
📝 Abstract
Column Generation (CG) is a popular method dedicated to enhancing computational efficiency in large-scale Combinatorial Optimization (CO) problems. It reduces the number of decision variables in a problem by solving a pricing problem (PP). For many CO problems, the pricing problem is an Elementary Shortest Path Problem with Resource Constraints (ESPPRC). Large ESPPRC instances are difficult to solve to near-optimality. Consequently, we use a Graph Neural Network (GNN) to reduce the size of the ESPPRC such that it becomes computationally tractable with standard solving techniques. Our GNN is trained by Unsupervised Learning and outputs a distribution over the arcs to be retained in the reduced PP. The reduced PP is solved by a local search that finds columns with large reduced costs and speeds up convergence. We apply our method to a set of Capacitated Vehicle Routing Problems with Time Windows and show significant improvements in convergence compared to simple reduction techniques from the literature. For a fixed computational budget, we improve the objective values by over 9% for larger instances. We also analyze the performance of our CG algorithm and test the generalization of our method to instance classes different from those in the training data.
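To make the reduction step concrete, here is a minimal sketch of pruning a pricing graph with learned arc-retention scores. The GNN itself is out of scope; the dictionary `arc_scores` stands in for its per-arc output probabilities, and the function name `reduce_pricing_graph` and the `keep_ratio` parameter are illustrative assumptions, not taken from the paper.

```python
def reduce_pricing_graph(arcs, arc_scores, keep_ratio=0.3):
    """Keep the top `keep_ratio` fraction of arcs by retention score.

    As a safety net, every tail node also keeps its highest-scoring
    outgoing arc, so the labeling/local-search step on the reduced
    graph can still extend paths from every node.
    """
    ranked = sorted(arcs, key=lambda a: arc_scores[a], reverse=True)
    n_keep = max(1, int(len(arcs) * keep_ratio))
    kept = set(ranked[:n_keep])

    # Retain the best outgoing arc per tail node, even if it ranked low.
    best_out = {}
    for arc in arcs:
        tail = arc[0]
        if tail not in best_out or arc_scores[arc] > arc_scores[best_out[tail]]:
            best_out[tail] = arc
    kept.update(best_out.values())
    return kept


# Toy instance: depot and two customers; scores mimic a trained GNN's output.
arcs = [("depot", "A"), ("depot", "B"), ("A", "B"),
        ("B", "A"), ("A", "depot"), ("B", "depot")]
arc_scores = {("depot", "A"): 0.9, ("A", "B"): 0.8, ("B", "depot"): 0.7,
              ("A", "depot"): 0.3, ("depot", "B"): 0.2, ("B", "A"): 0.1}

reduced = reduce_pricing_graph(arcs, arc_scores, keep_ratio=0.5)
```

With `keep_ratio=0.5` the reduced graph retains the three highest-scoring arcs, which here already cover every tail node, so the fallback adds nothing; the standard solver or local search then runs on this much smaller arc set.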