Recurrent State Encoders for Efficient Neural Combinatorial Optimization

📅 2025-09-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Neural combinatorial optimization (NCO) suffers from high inference latency and poor reusability of historical computations in constructive approaches. Method: This paper proposes a Recurrent State Encoder (RSE), the first to incorporate recurrent structure into NCO state encoding. RSE leverages embeddings from prior decoding steps to enable cross-step computation sharing, reducing network depth while preserving or improving solution quality. Integrated with an autoregressive decoder within a large neighborhood search framework, RSE is evaluated on TSP, CVRP, and OP. Contribution/Results: On multiple benchmarks, RSE matches or outperforms non-recurrent baselines in solution quality while reducing average inference latency by 32%–47%. This significantly enhances deployment efficiency and scalability of NCO models in practical search algorithms.

Technology Category

Application Category

📝 Abstract
The primary paradigm in Neural Combinatorial Optimization (NCO) are construction methods, where a neural network is trained to sequentially add one solution component at a time until a complete solution is constructed. We observe that the typical changes to the state between two steps are small, since usually only the node that gets added to the solution is removed from the state. An efficient model should be able to reuse computation done in prior steps. To that end, we propose to train a recurrent encoder that computes the state embeddings not only based on the state but also the embeddings of the step before. We show that the recurrent encoder can achieve equivalent or better performance than a non-recurrent encoder even if it consists of $3 imes$ fewer layers, thus significantly improving on latency. We demonstrate our findings on three different problems: the Traveling Salesman Problem (TSP), the Capacitated Vehicle Routing Problem (CVRP), and the Orienteering Problem (OP) and integrate the models into a large neighborhood search algorithm, to showcase the practical relevance of our findings.
Problem

Research questions and friction points this paper is trying to address.

Improving efficiency of neural combinatorial optimization methods
Reducing computational redundancy in sequential solution construction
Enhancing state embedding reuse with recurrent encoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recurrent encoder reuses prior computation for efficiency
Fewer layers achieve equivalent or better performance
Integrates with large neighborhood search algorithm
🔎 Similar Papers
No similar papers found.
T
Tim Dernedde
Information Systems and Machine Learning Lab (ISMLL), Institute of Computer Science, University of Hildesheim
D
Daniela Thyssens
Information Systems and Machine Learning Lab (ISMLL), Institute of Computer Science, University of Hildesheim
Lars Schmidt-Thieme
Lars Schmidt-Thieme
University of Hildesheim, Germany
machine learning