Large Neighborhood Search meets Iterative Neural Constraint Heuristics

📅 2026-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a systematic integration of iterative neural heuristics, such as ConsFormer, into the Large Neighborhood Search (LNS) framework for constraint satisfaction problems. By decomposing the procedure into destroy and repair operators, including novel prediction-guided destroy operators and either greedy or sampling-based neural repair, the approach explicitly steers neighborhood selection. The method substantially outperforms the vanilla neural solver on Sudoku, Graph Coloring, and MaxCut, and improves its competitiveness with classical and neural baselines. Experiments further reveal a consistent cross-task pattern: stochastic destroy operators paired with greedy repair perform best, suggesting LNS as a design framework for iterative neural approaches.

📝 Abstract
Neural networks are being increasingly used as heuristics for constraint satisfaction. These neural methods are often recurrent, learning to iteratively refine candidate assignments. In this work, we make explicit the connection between such iterative neural heuristics and Large Neighborhood Search (LNS), and adapt an existing neural constraint satisfaction method, ConsFormer, into an LNS procedure. We decompose the resulting neural LNS into two standard components: the destroy and repair operators. On the destroy side, we instantiate several classical heuristics and introduce novel prediction-guided operators that exploit the model's internal scores to select neighborhoods. On the repair side, we utilize ConsFormer as a neural repair operator and compare the original sampling-based decoder to a greedy decoder that selects the most likely assignments. Through an empirical study on Sudoku, Graph Coloring, and MaxCut, we find that adapting the neural heuristic to an LNS procedure yields substantial gains over its vanilla settings and improves its competitiveness with classical and neural baselines. We further observe consistent design patterns across tasks: stochastic destroy operators outperform greedy ones, while greedy repair is more effective than sampling-based repair for finding a single high-quality feasible assignment. These findings highlight LNS as a useful lens and design framework for structuring and improving iterative neural approaches.
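The destroy/repair decomposition the abstract describes can be sketched generically. The following is an illustrative toy only, assuming a graph-coloring objective: the random destroy operator and the conflict-counting greedy repair below are our own stand-ins for the paper's prediction-guided operators and ConsFormer-based neural repair, which are not reproduced here.

```python
import random

def conflicts(assign, edges):
    """Count edges whose endpoints share a color (the quantity LNS minimizes)."""
    return sum(1 for u, v in edges if assign[u] == assign[v])

def destroy_random(assign, k, rng):
    """Stochastic destroy: pick k variables to re-assign. A stand-in for the
    prediction-guided operators in the paper, which score variables instead."""
    return rng.sample(list(assign), k)

def repair_greedy(assign, freed, edges, colors):
    """Greedy repair: give each freed node the color with the fewest conflicts
    against its neighbors' current (possibly stale) colors. Analogous in spirit
    to the greedy decoder discussed in the abstract."""
    adj = {u: [] for u in assign}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    for u in freed:
        assign[u] = min(colors, key=lambda c: sum(assign[w] == c for w in adj[u]))
    return assign

def lns(edges, n, colors, steps=300, k=2, seed=0):
    """Standard LNS loop: destroy, repair, accept if no worse than the best."""
    rng = random.Random(seed)
    best = {u: rng.choice(colors) for u in range(n)}
    for _ in range(steps):
        cand = dict(best)
        freed = destroy_random(cand, k, rng)
        cand = repair_greedy(cand, freed, edges, colors)
        if conflicts(cand, edges) <= conflicts(best, edges):
            best = cand
    return best
```

Accepting equal-cost candidates lets the search wander plateaus, which is one reason stochastic destroy can beat greedy destroy in practice: it keeps proposing different neighborhoods from the same incumbent.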
Problem

Research questions and friction points this paper is trying to address.

Large Neighborhood Search
Neural Heuristics
Constraint Satisfaction
Iterative Refinement
ConsFormer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Neighborhood Search
Iterative Neural Heuristics
ConsFormer
Destroy-Repair Framework
Neural Constraint Satisfaction
Yudong W. Xu
Department of Mechanical & Industrial Engineering, University of Toronto
Wenhao Li
Department of Mechanical & Industrial Engineering, University of Toronto
Scott Sanner
University of Toronto
Artificial Intelligence, Machine Learning, Information Retrieval
Elias B. Khalil
Assistant Professor, University of Toronto
discrete optimization, machine learning, integer programming