Solver-Informed RL: Grounding Large Language Models for Authentic Optimization Modeling

📅 2025-05-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from hallucination when automatically translating natural language to optimization models, leading to syntactically invalid, infeasible, or non-executable formulations. Method: We propose a solver-driven reinforcement learning framework that integrates classical optimization solvers (e.g., Gurobi, CPLEX) as external oracles to provide verifiable rewards for syntactic correctness, feasibility, and solution quality. We further introduce an instance-augmented self-consistency data synthesis method to jointly enhance factual accuracy and executability. Technically, the framework unifies proximal policy optimization (PPO), linear programming parsing, supervised fine-tuning, and reward modeling. Contribution/Results: Our approach achieves state-of-the-art performance across multiple public benchmarks, significantly improving correctness rate, runtime success rate, and optimal-solution matching rate, enabling end-to-end deployment of NL-to-optimization translation systems.
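The staged, solver-verified reward described above can be sketched as a small scoring function. The tier structure and weights here are illustrative assumptions, not the paper's exact reward design: a model that fails to parse is penalized, a feasible executable model earns partial credit, and matching a known optimal objective earns full credit.

```python
def solver_reward(status: str, objective, reference_objective=None,
                  syntax_ok: bool = True, tol: float = 1e-6) -> float:
    """Score one generated optimization model from solver feedback.

    status / objective would come from running a solver (e.g., Gurobi,
    CPLEX) on the generated LP file; the weights below are hypothetical.
    """
    if not syntax_ok:          # LP file failed to parse: penalize
        return -1.0
    if status != "OPTIMAL":    # infeasible, unbounded, or runtime error
        return 0.0
    reward = 0.5               # executable and feasible
    if reference_objective is not None and objective is not None:
        # Full credit when the objective matches the known optimum
        # within a relative tolerance.
        if abs(objective - reference_objective) <= tol * max(1.0, abs(reference_objective)):
            reward = 1.0
    return reward
```

Because every component of this signal is computed by the solver rather than by another LLM, the reward is verifiable, which is the property the RL framework relies on.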

๐Ÿ“ Abstract
Optimization modeling is fundamental to decision-making across diverse domains. Despite progress in automating optimization formulation from natural language descriptions, Large Language Models (LLMs) often struggle to generate formally correct and usable models due to hallucinations, posing a challenge for reliable automation. Inspired by the success of Reinforcement Learning (RL) in enhancing Large Reasoning Models, we present Solver-Informed Reinforcement Learning (SIRL). This novel framework leverages external optimization solvers as verifiable reward mechanisms to significantly improve the authenticity of LLMs for optimization modeling. Acting as precise verifiers, these solvers automatically assess the executable code and the instance-level mathematical model represented by the associated LP file, yielding precise and comprehensive feedback signals -- including syntax, feasibility, and solution quality -- that directly inform the RL process. This automated verification process, powered by classic optimization solvers, also underpins our instance-enhanced self-consistency method to synthesize high-quality training data. Extensive experiments on diverse public benchmarks demonstrate that SIRL achieves state-of-the-art performance, substantially outperforming existing methods in generating accurate and executable optimization models.
Problem

Research questions and friction points this paper is trying to address.

Improving LLM-generated optimization models' formal correctness and usability
Reducing hallucinations in automated optimization formulation via RL
Enhancing authenticity of optimization models using solver-verified rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

SIRL uses solver-verified rewards for LLMs
Automated feedback on syntax and feasibility
Instance-enhanced self-consistency improves training data
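The instance-enhanced self-consistency idea listed above can be sketched as a plurality vote over solver-verified objective values: sample several candidate formulations of the same instance, discard those that are not executable and feasible, and keep a candidate whose objective agrees with the majority. The interface and voting rule here are assumptions for illustration; the paper's exact synthesis procedure may differ.

```python
from collections import Counter

def select_by_consistency(candidates, round_to=6):
    """Pick one candidate model via solver-checked self-consistency.

    candidates: list of (model_text, status, objective) triples, one per
    sampled formulation of the same problem instance. Status/objective
    are assumed to come from running a solver on each candidate.
    """
    # Keep only candidates the solver verified as executable and feasible.
    feasible = [(model, round(obj, round_to)) for model, status, obj in candidates
                if status == "OPTIMAL" and obj is not None]
    if not feasible:
        return None
    # Vote on the (rounded) objective value; ties go to the first seen.
    votes = Counter(obj for _, obj in feasible)
    winning_obj, _ = votes.most_common(1)[0]
    # Return the first candidate that attains the winning objective.
    return next(model for model, obj in feasible if obj == winning_obj)
```

Filtering on solver verdicts before voting is what distinguishes this from plain self-consistency: agreement is measured on verified solutions, so hallucinated but fluent formulations cannot win the vote.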