🤖 AI Summary
To address computational redundancy in iterative heuristics for the Vehicle Routing Problem (VRP), caused by the many stable subpaths that persist unchanged across search iterations, this paper proposes the "First-Segment-Then-Aggregate" (FSTA) acceleration framework. We formally define FSTA decomposition and introduce Learning-to-Segment (L2Seg), a learning-based segmentation model that jointly leverages non-autoregressive and autoregressive modeling to accurately distinguish stable from unstable path segments within VRP solutions. We design three neural network variants, each with tailored training and inference strategies, ensuring compatibility with classical, learning-augmented, and hybrid solvers, and supporting diverse VRP variants including CVRP and VRPTW. Experiments demonstrate that L2Seg accelerates state-of-the-art iterative solvers by up to 7×, while maintaining strong generalizability, extensibility, and plug-and-play usability.
📝 Abstract
Iterative search heuristics are widely recognized as state-of-the-art for solving Vehicle Routing Problems (VRPs). In this work, we identify and exploit a critical observation: within these solvers, a large portion of the solution remains stable, i.e., unchanged across search iterations, causing redundant computations, especially for large-scale VRPs with long subtours. To address this, we pioneer the formal study of the First-Segment-Then-Aggregate (FSTA) decomposition technique to accelerate iterative solvers. Specifically, FSTA preserves stable solution segments during the search, aggregates the nodes within each segment into fixed hypernodes, and focuses the search only on unstable portions. Yet, a key challenge lies in identifying which segments should be aggregated by FSTA. To this end, we introduce Learning-to-Segment (L2Seg), a novel neural framework to intelligently differentiate potentially stable and unstable portions for FSTA decomposition. We present three L2Seg variants: non-autoregressive (globally comprehensive but locally indiscriminate), autoregressive (locally refined but globally deficient), and their synergy, each with bespoke training and inference strategies. Empirical results on CVRP and VRPTW suggest that L2Seg accelerates state-of-the-art iterative solvers by up to 7×. Additionally, we provide an in-depth analysis showing that the synergy of the non-autoregressive (NAR) and autoregressive (AR) variants achieves the best performance by combining their complementary strengths. Notably, L2Seg is a flexible framework that is compatible with traditional, learning-based, and hybrid solvers, while supporting a broad class of VRPs.
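The aggregation step of FSTA described above can be sketched in a few lines: given one route and a per-node stability mask (in the paper, predicted by L2Seg), maximal runs of stable nodes collapse into frozen hypernodes so that the downstream search only manipulates the unstable portions. This is a minimal illustrative sketch, not the paper's implementation; the function names `aggregate_route` and `expand_route` and the tuple-as-hypernode representation are hypothetical.

```python
# Illustrative FSTA-style segment aggregation (hypothetical sketch,
# not the paper's code): collapse stable runs into hypernodes.

def aggregate_route(route, stable):
    """Collapse maximal runs of stable nodes into hypernode tuples.

    route  : list of node ids forming one subtour
    stable : parallel list of booleans; True = node is in a stable segment
    Returns a reduced route where each stable run becomes a single tuple
    (hypernode), so a search over it only touches unstable nodes.
    """
    reduced, run = [], []
    for node, is_stable in zip(route, stable):
        if is_stable:
            run.append(node)
        else:
            if run:
                reduced.append(tuple(run))  # freeze the stable segment
                run = []
            reduced.append(node)  # unstable node stays individually searchable
    if run:
        reduced.append(tuple(run))
    return reduced


def expand_route(reduced):
    """Invert the aggregation once the search over the reduced route is done."""
    out = []
    for item in reduced:
        if isinstance(item, tuple):
            out.extend(item)
        else:
            out.append(item)
    return out
```

For example, a route `[1, 2, 3, 4, 5, 6]` with stability mask `[True, True, False, True, True, False]` reduces to `[(1, 2), 3, (4, 5), 6]`, shrinking the effective problem from six nodes to four search positions; `expand_route` recovers the original node sequence afterwards.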