Learning with Local Search MCMC Layers

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural combinatorial optimization lacks theoretical guarantees and differentiability when it relies on inexact local search solvers for NP-hard problems. Method: We propose a differentiable combinatorial optimization layer that models the neighborhood structure of local search as an MCMC proposal distribution, exploiting the connection between simulated annealing and Metropolis-Hastings to obtain differentiable layers with theoretical guarantees. Our approach combines probabilistic modeling over combinatorial spaces, neighborhood-driven sampling, and end-to-end joint training. Contribution/Results: The method maintains solution quality while significantly reducing the computational burden of learning. Evaluation on a large-scale dynamic vehicle routing problem with time windows demonstrates its effectiveness, robustness, and scalability, yielding an approach to neural combinatorial optimization that is rigorous in theory and practical in deployment.

📝 Abstract
Integrating combinatorial optimization layers into neural networks has recently attracted significant research interest. However, many existing approaches lack theoretical guarantees or fail to perform adequately when relying on inexact solvers. This is a critical limitation, as many operations research problems are NP-hard, often necessitating the use of neighborhood-based local search heuristics. These heuristics iteratively generate and evaluate candidate solutions based on an acceptance rule. In this paper, we introduce a theoretically principled approach for learning with such inexact combinatorial solvers. Inspired by the connection between simulated annealing and Metropolis-Hastings, we propose to transform problem-specific neighborhood systems used in local search heuristics into proposal distributions, implementing MCMC on the combinatorial space of feasible solutions. This allows us to construct differentiable combinatorial layers and associated loss functions. Replacing an exact solver with a local search strongly reduces the computational burden of learning in many applications. We demonstrate our approach on a large-scale dynamic vehicle routing problem with time windows.
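The abstract's core construction, turning a local-search neighborhood system into an MCMC proposal over feasible solutions, can be sketched as follows. This is an illustrative Metropolis-Hastings sampler, not the paper's exact implementation; the linear score, uniform proposal, and temperature parameter are placeholder assumptions.

```python
import math
import random

def local_search_mcmc(theta, y0, neighbors, num_steps=1000, temperature=1.0):
    """Metropolis-Hastings over feasible solutions, targeting the Gibbs
    distribution p(y) proportional to exp(<theta, y> / temperature).

    `neighbors(y)` returns the feasible solutions reachable from y by one
    local-search move (e.g. a relocate or 2-opt move in vehicle routing);
    it doubles as the proposal: a candidate is drawn uniformly from it.
    """
    y = y0
    for _ in range(num_steps):
        nbhd = neighbors(y)
        candidate = random.choice(nbhd)
        # Score improvement <theta, y'> - <theta, y> (solutions as 0/1 vectors).
        delta = sum(t * (a - b) for t, a, b in zip(theta, candidate, y))
        # Hastings correction for unequal neighborhood sizes.
        log_accept = delta / temperature + math.log(len(nbhd) / len(neighbors(candidate)))
        if log_accept >= 0 or random.random() < math.exp(log_accept):
            y = candidate
    return y
```

At high temperature the chain explores broadly; annealing the temperature toward zero recovers simulated-annealing-style local search, which is the connection between the two mechanisms that the abstract exploits.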
Problem

Research questions and friction points this paper is trying to address.

Integrating inexact combinatorial solvers into neural networks
Theoretical guarantees for learning with local search heuristics
Reducing computational burden in large-scale optimization problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transforms local search heuristics into MCMC proposals
Constructs differentiable combinatorial layers and loss functions
Reduces computational burden via local search MCMC layers
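As a rough illustration of how such a layer might be trained end to end (a sketch under assumptions, not the paper's exact loss): for a Fenchel-Young-style loss over the Gibbs distribution that the chain targets, the gradient with respect to the scores reduces to the expected sampled solution minus the target solution, so MCMC samples yield a stochastic gradient without backpropagating through the solver. Here `sample_solution` is a hypothetical routine assumed to draw approximate samples from that distribution, e.g. the final state of a local-search MCMC chain.

```python
def fenchel_young_grad(theta, y_target, sample_solution, num_samples=64):
    """Stochastic gradient E_{p_theta}[Y] - y_target of a Fenchel-Young-style
    loss, where p_theta is a Gibbs distribution over feasible solutions.

    `sample_solution(theta)` is assumed to return one (approximate) sample
    from p_theta as a 0/1 vector of the same length as y_target.
    """
    samples = [sample_solution(theta) for _ in range(num_samples)]
    dim = len(y_target)
    # Monte-Carlo estimate of the mean solution under p_theta.
    mean_y = [sum(s[i] for s in samples) / num_samples for i in range(dim)]
    return [m - t for m, t in zip(mean_y, y_target)]
```

The gradient vanishes exactly when the sampler's mean solution matches the target, which is why cheap, inexact local-search samples can still drive learning.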