🤖 AI Summary
Neural Large Neighborhood Search (LNS) for Integer Linear Programming (ILP) suffers from premature convergence to local optima and low sample efficiency. To address these limitations, this paper proposes a stochastic process-based framework. Methodologically, we formulate LNS as a Markov Decision Process (MDP) and introduce a local-gradient-guided stochastic sampling mechanism to enhance exploration. We further design a backtracking-based hindsight relabeling strategy to generate high-quality self-supervised training signals. Finally, we integrate neural prediction, dynamic sampling, and the LNS paradigm into a unified architecture to improve solution quality and generalization. Evaluated on multi-scale ILP benchmarks, our approach consistently outperforms existing neural LNS solvers: it achieves an average 12.7% improvement in solution quality and a 2.3× increase in training sample efficiency.
📝 Abstract
Large Neighborhood Search (LNS) is a common heuristic in combinatorial optimization that iteratively searches a large neighborhood of the current solution for a better one. Recently, neural network-based LNS solvers have achieved great success in solving Integer Linear Programs (ILPs) by learning to greedily predict the locally optimal solution for the next neighborhood proposal. However, this greedy approach raises two key concerns: (1) to what extent the greedy proposal suffers from local optima, and (2) how its sample efficiency can be effectively improved in the long run. To address these questions, this paper first formulates LNS as a stochastic process, and then introduces SPL-LNS, a sampling-enhanced neural LNS solver that leverages locally-informed proposals to escape local optima. We also develop a novel hindsight relabeling method to efficiently train SPL-LNS on self-generated data. Experimental results demonstrate that SPL-LNS substantially surpasses prior neural LNS solvers on various ILP problems of different sizes.
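The core idea of replacing a greedy neighborhood proposal with a locally-informed stochastic one can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`locally_informed_choice`, `lns_step`) and the softmax-over-scores form are illustrative assumptions; the actual SPL-LNS proposal distribution, neural scorer, and repair step are defined in the paper. The sketch only shows the structural difference from greedy LNS: instead of taking the argmax-scored neighborhood, we sample one with probability proportional to `exp(score / T)`, which preserves a bias toward promising moves while allowing escapes from local optima.

```python
import math
import random


def locally_informed_choice(scores, temperature=1.0, rng=random):
    """Sample an index with probability proportional to exp(score / T).

    Greedy LNS would return argmax(scores); sampling keeps the search
    stochastic so it can leave local optima. As T -> 0 this recovers
    the greedy choice; as T -> infinity it becomes uniform.
    """
    # Subtract the max score before exponentiating for numerical stability.
    m = max(scores)
    weights = [math.exp((s - m) / temperature) for s in scores]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(scores) - 1  # guard against floating-point rounding


def lns_step(solution, propose_neighborhoods, score, repair,
             temperature=1.0, rng=random):
    """One sampling-enhanced LNS iteration (conceptual sketch).

    propose_neighborhoods, score, and repair stand in for the neural
    destroy-proposal, its predicted improvement, and the ILP sub-solve.
    """
    candidates = propose_neighborhoods(solution)
    scores = [score(solution, nb) for nb in candidates]
    chosen = candidates[locally_informed_choice(scores, temperature, rng)]
    return repair(solution, chosen)
```

With a low temperature the sampler behaves almost greedily; raising the temperature spreads probability mass across near-optimal neighborhoods, which is the exploration knob the greedy baseline lacks.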