Multi-Action Self-Improvement for Neural Combinatorial Optimization

📅 2025-10-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing neural combinatorial optimization (NCO) self-improvement methods suffer from low sample efficiency—requiring extensive sampling to generate a single expert trajectory—and neglect multi-agent permutation symmetry (e.g., vehicle interchangeability in the Vehicle Routing Problem), hindering generalization and cooperative learning. This work proposes the first self-improvement framework for NCO operating directly in the joint action space. It introduces a set-prediction-based loss function that explicitly enforces agent-permutation invariance, and adopts a proxy-task assignment architecture to generate multi-agent actions in parallel within a single step. Evaluated on multiple standard combinatorial optimization benchmarks, our method achieves superior solution quality, reduces inference latency by 30–50%, improves training efficiency by 2.1×, and—critically—unifies high sample efficiency with strong cooperative capability for the first time.

Technology Category

Application Category

📝 Abstract
Self-improvement has emerged as a state-of-the-art paradigm in Neural Combinatorial Optimization (NCO), where models iteratively refine their policies by generating and imitating high-quality solutions. Despite strong empirical performance, existing methods face key limitations. Training is computationally expensive, as policy updates require sampling numerous candidate solutions per instance to extract a single expert trajectory. More fundamentally, these approaches fail to exploit the structure of combinatorial problems involving the coordination of multiple agents, such as vehicles in min-max routing or machines in scheduling. By supervising on single-action trajectories, they fail to exploit agent-permutation symmetries, where distinct sequences of actions yield identical solutions, hindering generalization and the ability to learn coordinated behavior. We address these challenges by extending self-improvement to operate over joint multi-agent actions. Our model architecture predicts complete agent-task assignments jointly at each decision step. To explicitly leverage symmetries, we employ a set-prediction loss, which supervises the policy on multiple expert assignments for any given state. This approach enhances sample efficiency and the model's ability to learn coordinated behavior. Furthermore, by generating multi-agent actions in parallel, it drastically accelerates the solution generation phase of the self-improvement loop. Empirically, we validate our method on several combinatorial problems, demonstrating consistent improvements in the quality of the final solution and a reduced generation latency compared to standard self-improvement.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost of policy updates in neural combinatorial optimization
Exploiting multi-agent coordination in combinatorial problem structures
Addressing agent-permutation symmetries to improve generalization capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint multi-agent action prediction architecture
Set-prediction loss leveraging agent-permutation symmetries
Parallel multi-action generation accelerating solution process
🔎 Similar Papers
No similar papers found.
Laurin Luttmann
Laurin Luttmann
Researcher, Leuphana University Lüneburg
Machine Learning
L
Lin Xie
Brandenburg University of Technology, Germany