🤖 AI Summary
Existing LLM-based optimization solvers rely on iterative repair, suffering from low reliability and high latency. This paper proposes a one-shot, batch-generation framework augmented with statistical validation for automated solver synthesis. First, a large language model generates diverse solver components in parallel, including modeling, solving, and post-processing modules. Then, a lightweight statistical model quantifies both the performance and the epistemic uncertainty of each component, enabling reliable, non-iterative integration and selection of the best candidate. The core innovation lies in the tight coupling of generative AI and statistical inference, replacing opaque retry mechanisms with interpretable uncertainty quantification. Evaluated across multiple complex optimization tasks, the framework increases the optimal-solution rate from a baseline of 5% to 92% while reducing end-to-end latency by an order of magnitude.
📝 Abstract
LLM-based solvers have emerged as a promising means of automating problem modeling and solving. However, they remain unreliable and often depend on iterative repair loops that result in significant latency. We introduce OptiHive, an LLM-based framework that produces high-quality solvers for optimization problems from natural-language descriptions without iterative self-correction. OptiHive uses a single batched LLM query to generate diverse components (solvers, problem instances, and validation tests) and filters out erroneous components to ensure fully interpretable outputs. Taking into account the imperfection of the generated components, we employ a statistical model to infer their true performance, enabling principled uncertainty quantification and solver selection. On tasks ranging from traditional optimization problems to challenging variants of the Multi-Depot Vehicle Routing Problem, OptiHive significantly outperforms baselines, increasing the optimality rate from 5% to 92% on the most complex problems.
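The abstract describes scoring imperfect generated components and selecting a solver under uncertainty. The paper's exact statistical model is not given here; the sketch below illustrates the general idea with a simple stand-in: each candidate solver's pass/fail outcomes on (possibly imperfect) generated tests feed a Beta posterior over its true pass rate, and selection uses an uncertainty-penalized lower confidence bound. The function names, the Beta model, and the example outcomes are all illustrative assumptions, not OptiHive's actual method.

```python
import math

def beta_lcb(successes, failures, z=1.0):
    """Lower confidence bound on a solver's true pass rate,
    from a Beta(1+successes, 1+failures) posterior (normal approximation).
    This Beta model is an illustrative assumption, not the paper's model."""
    a, b = 1 + successes, 1 + failures
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean - z * math.sqrt(var)

def select_solver(results):
    """results: {solver_name: list of bool outcomes on generated tests}.
    Pick the solver whose uncertainty-penalized score is highest."""
    return max(
        results,
        key=lambda s: beta_lcb(sum(results[s]), len(results[s]) - sum(results[s])),
    )

# Hypothetical outcomes: solver_B has a perfect record but little evidence,
# so its wide posterior yields a lower confidence bound than solver_A's.
outcomes = {
    "solver_A": [True] * 9 + [False],  # 9/10 passes, tight posterior
    "solver_B": [True] * 3,            # 3/3 passes, high uncertainty
}
print(select_solver(outcomes))  # prints "solver_A"
```

The point of the penalty term is the non-iterative selection the summary emphasizes: instead of retrying until a candidate happens to pass, all candidates are scored once and the winner is the one whose performance estimate is both high and well supported by evidence.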