SPL-LNS: Sampling-Enhanced Large Neighborhood Search for Solving Integer Linear Programs

๐Ÿ“… 2025-08-22
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Neural Large Neighborhood Search (LNS) for Integer Linear Programming (ILP) suffers from premature convergence to local optima and low sample efficiency. To address these limitations, this paper proposes a stochastic processโ€“based framework. Methodologically, we formulate LNS as a Markov Decision Process (MDP) and introduce a local-gradient-guided stochastic sampling mechanism to enhance exploration. We further design a backtracking-based hindsight relabeling strategy to generate high-quality self-supervised training signals. Finally, we integrate neural prediction, dynamic sampling, and the LNS paradigm into a unified architecture to improve solution quality and generalization. Evaluated on multi-scale ILP benchmarks, our approach consistently outperforms existing neural LNS solvers: it achieves an average 12.7% improvement in solution quality and a 2.3ร— increase in training sample efficiency.

๐Ÿ“ Abstract
Large Neighborhood Search (LNS) is a common heuristic in combinatorial optimization that iteratively searches over a large neighborhood of the current solution for a better one. Recently, neural network-based LNS solvers have achieved great success in solving Integer Linear Programs (ILPs) by learning to greedily predict the locally optimal solution for the next neighborhood proposal. However, this greedy approach raises two key concerns: (1) to what extent this greedy proposal suffers from local optima, and (2) how we can effectively improve its sample efficiency in the long run. To address these questions, this paper first formulates LNS as a stochastic process, and then introduces SPL-LNS, a sampling-enhanced neural LNS solver that leverages locally-informed proposals to escape local optima. We also develop a novel hindsight relabeling method to efficiently train SPL-LNS on self-generated data. Experimental results demonstrate that SPL-LNS substantially surpasses prior neural LNS solvers for various ILP problems of different sizes.
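To make the contrast with greedy proposals concrete, here is a minimal sketch of a sampling-based destroy step: instead of always destroying the top-k variables ranked by a neural score, variables are sampled without replacement from a softmax over scores. The function name, the softmax weighting, and the temperature parameter are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

def sample_destroy_set(scores, k, temperature=1.0):
    """Sample k variable indices to destroy, weighted by predicted scores.

    A greedy proposal would take the top-k scores; sampling from a
    softmax over the scores instead (temperature is assumed here, not
    taken from the paper) injects stochasticity that can help the
    search escape local optima.
    """
    # Softmax weights over the per-variable scores.
    weights = [math.exp(s / temperature) for s in scores]
    pool = list(range(len(scores)))
    chosen = []
    # Sequential weighted sampling without replacement.
    for _ in range(min(k, len(pool))):
        total = sum(weights[i] for i in pool)
        r = random.random() * total
        acc = 0.0
        for i in pool:
            acc += weights[i]
            if acc >= r:
                chosen.append(i)
                pool.remove(i)
                break
    return chosen
```

With `temperature -> 0` this recovers near-greedy behavior, while larger temperatures spread probability mass across lower-scored variables, trading exploitation for exploration.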
Problem

Research questions and friction points this paper is trying to address.

Addresses greedy LNS local optima in ILP solving
Improves sample efficiency for neural LNS solvers
Develops sampling-enhanced method to escape local minima
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sampling-enhanced neural LNS solver
Locally-informed proposals escape local optima
Hindsight relabeling for efficient self-training
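The hindsight relabeling idea above can be sketched as follows: each intermediate state in a search trajectory is relabeled with the best solution found at that step or later, turning a single (possibly mediocre) run into higher-quality self-supervised targets. The data layout and function name are assumptions for illustration, not the paper's code.

```python
def hindsight_relabel(trajectory):
    """Relabel a minimization trajectory with hindsight targets.

    trajectory: list of (state, solution, objective) tuples in search
    order. Returns (state, target_solution) pairs where each state is
    paired with the best solution reached at that step or any later one.
    """
    labeled = []
    best_sol, best_obj = None, float("inf")
    # Walk backwards so "best so far" means "best from here onward".
    for state, solution, objective in reversed(trajectory):
        if objective < best_obj:
            best_sol, best_obj = solution, objective
        labeled.append((state, best_sol))
    labeled.reverse()
    return labeled
```

The effect is that early states, whose own solutions were poor, inherit the stronger solutions discovered later in the same run as training targets.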
๐Ÿ”Ž Similar Papers
No similar papers found.