Execution-Verified Reinforcement Learning for Optimization Modeling

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing automated optimization modeling approaches, which either rely on high-latency closed-source large language models or require costly process-level supervision for fine-tuning, hindering generalization across solvers. To overcome these challenges, we propose EVOM, a novel framework that, for the first time, constructs a closed-loop reinforcement learning system using only scalar rewards derived from solver execution outcomes—eliminating the need for process supervision. EVOM integrates GRPO and DAPO algorithms, a sandboxed execution environment, and a solver interaction verification mechanism, enabling zero-shot cross-solver transfer and low-cost adaptation. Experimental results demonstrate that EVOM matches or surpasses supervised fine-tuning performance across multiple benchmark datasets and solvers, significantly enhancing both modeling efficiency and generalization capability.
📝 Abstract
Automating optimization modeling with LLMs is a promising path toward scalable decision intelligence, but existing approaches either rely on agentic pipelines built on closed-source LLMs with high inference latency, or fine-tune smaller LLMs using costly process supervision that often overfits to a single solver API. Inspired by reinforcement learning with verifiable rewards, we propose Execution-Verified Optimization Modeling (EVOM), an execution-verified learning framework that treats a mathematical programming solver as a deterministic, interactive verifier. Given a natural-language problem and a target solver, EVOM generates solver-specific code, executes it in a sandboxed harness, and converts execution outcomes into scalar rewards, optimized with GRPO and DAPO in a closed-loop generate-execute-feedback-update process. This outcome-only formulation removes the need for process-level supervision, and enables cross-solver generalization by switching the verification environment rather than reconstructing solver-specific datasets. Experiments on NL4OPT, MAMO, IndustryOR, and OptiBench across Gurobi, OR-Tools, and COPT show that EVOM matches or outperforms process-supervised SFT, supports zero-shot solver transfer, and achieves effective low-cost solver adaptation by continuing training under the target solver backend.
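The abstract's central mechanism, executing generated solver code in a sandbox and converting the outcome into a scalar reward, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `execution_reward`, the specific reward values, and the convention that the script prints its objective value on the last stdout line are all assumptions made for the example.

```python
import os
import subprocess
import sys
import tempfile


def execution_reward(generated_code: str, expected_objective: float,
                     tol: float = 1e-4, timeout: float = 30.0) -> float:
    """Run generated solver code in a subprocess sandbox and map the
    execution outcome to a scalar reward (illustrative reward scheme)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_code)
        path = f.name
    try:
        try:
            proc = subprocess.run([sys.executable, path],
                                  capture_output=True, text=True,
                                  timeout=timeout)
        except subprocess.TimeoutExpired:
            return 0.0  # non-terminating code earns no reward
        if proc.returncode != 0:
            return 0.0  # crash or solver error
        try:
            # assumed convention: the script prints its objective value last
            objective = float(proc.stdout.strip().splitlines()[-1])
        except (ValueError, IndexError):
            return 0.1  # ran cleanly but produced no parsable objective
        # full reward only when the objective matches the reference value
        return 1.0 if abs(objective - expected_objective) <= tol else 0.2
    finally:
        os.unlink(path)
```

Because the reward depends only on the execution outcome, swapping Gurobi for OR-Tools or COPT means changing the sandbox environment, not the supervision signal, which is the source of the cross-solver transfer the abstract describes.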
Problem

Research questions and friction points this paper is trying to address.

optimization modeling
large language models
solver generalization
process supervision
decision intelligence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Execution-Verified Reinforcement Learning
Optimization Modeling
Cross-Solver Generalization
Process-Free Supervision
LLM-based Decision Intelligence
Runda Guan
School of Computer Science and Engineering, Nanjing University of Science and Technology
Xiangqing Shen
School of Intelligence Science and Technology, Nanjing University
Jiajun Zhang
Institute of Automation, Chinese Academy of Sciences
Natural Language Processing · Large Language Models · Multimodal Information Processing
Yifan Zhang
Institute of Automation, Chinese Academy of Sciences
Jian Cheng
Institute of Automation, Chinese Academy of Sciences
Rui Xia
Nanjing University of Science and Technology
Natural Language Processing · Text Mining · Sentiment Analysis · Affective Computing