DRAGON: LLM-Driven Decomposition and Reconstruction Agents for Large-Scale Combinatorial Optimization

📅 2026-01-10
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes DRAGON, a novel framework addressing the limited scalability, weak generalization, and performance degradation of large language models (LLMs) in large-scale combinatorial optimization. DRAGON introduces, for the first time, a feedback-driven language agent that automatically decomposes complex problems into local subproblems, leverages LLMs for targeted solving, and iteratively refines solutions through a synergistic integration of metaheuristics and symbolic reasoning. Its core innovations include adaptive experiential memory, an interpretable collaborative reasoning mechanism, and a closed-loop optimization process that alternates between decomposition and reconstruction. Experimental results demonstrate that DRAGON consistently produces feasible solutions on standard benchmarks such as TSPLIB, CVRPLIB, and Weibull-5k, and achieves a near-optimal gap of 0.16% on a knapsack problem with over three million variables—significantly outperforming existing LLM-based approaches.

📝 Abstract
Large Language Models (LLMs) have recently shown promise in addressing combinatorial optimization problems (COPs) through prompt-based strategies. However, their scalability and generalization remain limited, and their effectiveness diminishes as problem size increases, particularly in routing problems involving more than 30 nodes. We propose DRAGON, which stands for Decomposition and Reconstruction Agents Guided OptimizatioN, a novel framework that combines the strengths of metaheuristic design and LLM reasoning. Starting from an initial global solution, DRAGON autonomously identifies regions with high optimization potential and strategically decomposes large-scale COPs into manageable subproblems. Each subproblem is then reformulated as a concise, localized optimization task and solved through targeted LLM prompting guided by accumulated experiences. Finally, the locally optimized solutions are systematically reintegrated into the original global context to yield a significantly improved overall outcome. By continuously interacting with the optimization environment and leveraging an adaptive experience memory, the agents iteratively learn from feedback, effectively coupling symbolic reasoning with heuristic search. Empirical results show that, unlike existing LLM-based solvers limited to small-scale instances, DRAGON consistently produces feasible solutions on TSPLIB, CVRPLIB, and Weibull-5k bin packing benchmarks, and achieves near-optimal results (0.16% gap) on knapsack problems with over 3M variables. This work shows the potential of feedback-driven language agents as a new paradigm for generalizable and interpretable large-scale optimization.
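The decompose–solve–reconstruct loop described in the abstract can be illustrated with a minimal, self-contained sketch on a small TSP instance. This is not the paper's implementation: the LLM-driven subproblem solver is replaced by an exact brute-force reordering of each small window, and all names (`solve_subproblem`, `dragon_style_refine`, the window/sweep parameters) are illustrative assumptions, not DRAGON's API.

```python
# Hypothetical sketch of a decomposition-and-reconstruction loop for TSP.
# The LLM solver from the paper is stood in for by exact brute force on each
# small subproblem (window of the tour); everything here is illustrative.
import itertools
import math

def tour_length(points, tour):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def solve_subproblem(points, segment, prev_city, next_city):
    """Stand-in for the LLM call: exactly reorder a small segment so the
    path prev_city -> segment -> next_city is as short as possible."""
    best, best_cost = list(segment), float("inf")
    for perm in itertools.permutations(segment):
        path = (prev_city,) + perm + (next_city,)
        cost = sum(math.dist(points[a], points[b]) for a, b in zip(path, path[1:]))
        if cost < best_cost:
            best, best_cost = list(perm), cost
    return best

def dragon_style_refine(points, tour, window=5, sweeps=3):
    """Closed loop: decompose the tour into windows, solve each window
    locally, reconstruct the global tour, and repeat until a full sweep
    yields no change."""
    tour = list(tour)
    for _ in range(sweeps):
        improved = False
        for start in range(0, len(tour) - window):
            seg = tour[start:start + window]
            prev_city = tour[start - 1]                 # wraps at start=0
            next_city = tour[(start + window) % len(tour)]
            new_seg = solve_subproblem(points, seg, prev_city, next_city)
            if new_seg != seg:
                tour[start:start + window] = new_seg    # reconstruction step
                improved = True
        if not improved:
            break
    return tour

points = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]  # a 2x3 unit grid
bad_tour = [0, 2, 4, 1, 3, 5]                               # deliberately scrambled
good_tour = dragon_style_refine(points, bad_tour)
assert tour_length(points, good_tour) <= tour_length(points, bad_tour)
print(round(tour_length(points, good_tour), 2))
```

The key structural idea mirrored here is that the global solution is never re-solved from scratch: only a local window is reformulated as a compact subproblem (with its boundary cities fixed), solved, and spliced back into the global tour, with sweeps repeating until no window improves.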
Problem

Research questions and friction points this paper is trying to address.

combinatorial optimization
large-scale optimization
LLM scalability
generalization
routing problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposition and Reconstruction
LLM-driven Optimization
Combinatorial Optimization
Adaptive Experience Memory
Feedback-driven Agents