SolverLLM: Leveraging Test-Time Scaling for Optimization Problem via LLM-Guided Search

📅 2025-10-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM-based optimization solvers either generalize poorly because they rely heavily on prompt engineering, or incur high training costs through supervised fine-tuning. This paper introduces SolverLLM, a training-free, large language model (LLM)-guided search framework for general-purpose optimization. At test time, SolverLLM translates a natural-language problem into a formal mathematical model and then into executable solver code, steering generation with a modified Monte Carlo Tree Search (MCTS). The search augments classical MCTS with (1) dynamic expansion, which adaptively grows the set of candidate formulation actions; (2) prompt backpropagation, which feeds outcome-driven feedback back into the prompts that guide exploration; and (3) uncertainty backpropagation, which weights rewards by their reliability when updating the tree. Evaluated on six heterogeneous benchmark datasets, SolverLLM consistently outperforms both prompt-engineering and supervised-learning baselines, demonstrating strong cross-task generalization with zero training overhead.

📝 Abstract
Large Language Models (LLMs) offer promising capabilities for tackling complex reasoning tasks, including optimization problems. However, existing methods either rely on prompt engineering, which leads to poor generalization across problem types, or require costly supervised training. We introduce SolverLLM, a training-free framework that leverages test-time scaling to solve diverse optimization problems. Rather than solving directly, SolverLLM generates mathematical formulations and translates them into solver-ready code, guided by a novel Monte Carlo Tree Search (MCTS) strategy. To enhance the search process, we modify classical MCTS with (1) dynamic expansion for adaptive formulation generation, (2) prompt backpropagation to guide exploration via outcome-driven feedback, and (3) uncertainty backpropagation to incorporate reward reliability into decision-making. Experiments on six standard benchmark datasets demonstrate that SolverLLM outperforms both prompt-based and learning-based baselines, achieving strong generalization without additional training.
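The three MCTS modifications described in the abstract can be illustrated with a toy search loop. This is a minimal sketch, not the paper's implementation: the node fields, the reliability-weighted value update, and the stubbed `propose_children` (standing in for LLM-driven dynamic expansion) and `evaluate` (standing in for running solver-ready code) are all assumptions for illustration; prompt backpropagation is omitted since it requires an actual LLM in the loop.

```python
import math
import random

class Node:
    """One candidate formulation state in the search tree."""
    def __init__(self, formulation, parent=None):
        self.formulation = formulation
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0  # running reliability-weighted mean reward

def uct(node, c=1.4):
    # Standard UCT selection score; unvisited children are explored first.
    if node.visits == 0:
        return float("inf")
    return node.value + c * math.sqrt(math.log(node.parent.visits) / node.visits)

def propose_children(node, k=2):
    # Stand-in for dynamic expansion: in SolverLLM an LLM proposes new
    # formulation refinements at the selected leaf; here we fabricate labels.
    return [Node(f"{node.formulation}.{i}", parent=node) for i in range(k)]

def evaluate(node):
    # Stand-in for executing solver-ready code: returns (reward, reliability).
    # A deterministic pseudo-score keeps the sketch reproducible.
    rng = random.Random(sum(map(ord, node.formulation)))
    return rng.random(), rng.uniform(0.5, 1.0)

def backpropagate(node, reward, reliability):
    # Uncertainty backpropagation: low-reliability rewards shift the
    # running value estimate less than trusted ones.
    while node is not None:
        node.visits += 1
        node.value += reliability * (reward - node.value) / node.visits
        node = node.parent

def mcts(root, iterations=50, seed=0):
    rng = random.Random(seed)
    for _ in range(iterations):
        node = root
        while node.children:                      # selection
            node = max(node.children, key=uct)
        node.children = propose_children(node)    # dynamic expansion
        leaf = rng.choice(node.children)
        reward, reliability = evaluate(leaf)      # simulation
        backpropagate(leaf, reward, reliability)  # weighted backup
    return max(root.children, key=lambda n: n.visits)

root = Node("lp_model")
best = mcts(root)
print(best.formulation, round(best.value, 3))
```

The key design point is the backup rule: instead of accumulating raw rewards as in classical MCTS, each reward's contribution to a node's value is scaled by its reliability, so noisy evaluations perturb the tree statistics less.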
Problem

Research questions and friction points this paper is trying to address.

Prompt-engineering approaches to LLM-based optimization generalize poorly across problem types
Supervised fine-tuning improves coverage but incurs high training costs
Solving optimization problems directly with an LLM skips the formal modeling that solvers require
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free framework that leverages test-time scaling
Generates mathematical formulations and translates them into solver-ready code
Modified MCTS with dynamic expansion, prompt backpropagation, and uncertainty backpropagation