🤖 AI Summary
This work addresses the limitations of existing automated optimization modeling approaches, which either rely on high-latency closed-source large language models or fine-tune smaller models with costly process-level supervision that hinders generalization across solvers. To overcome these challenges, the authors propose EVOM, a framework that, for the first time, constructs a closed-loop reinforcement learning system using only scalar rewards derived from solver execution outcomes, eliminating the need for process supervision. EVOM combines the GRPO and DAPO algorithms, a sandboxed execution environment, and a solver-interaction verification mechanism, enabling zero-shot cross-solver transfer and low-cost adaptation. Experiments show that EVOM matches or surpasses supervised fine-tuning across multiple benchmark datasets and solvers, improving both modeling efficiency and generalization.
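To make the "scalar rewards only" idea concrete, here is an illustrative sketch (not the paper's code) of the group-relative advantage computation used by GRPO: a group of candidate solutions is sampled per problem, each is scored with a scalar solver reward, and advantages are normalized within the group. The function name and epsilon value are assumptions for illustration.

```python
# GRPO-style group-relative advantages: A_i = (r_i - mean(r)) / (std(r) + eps).
# Rewards come from solver execution outcomes; no per-step supervision is used.
from statistics import mean, stdev

def group_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Normalize a group of scalar rewards into relative advantages."""
    mu = mean(rewards)
    sd = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sd + eps) for r in rewards]
```

For example, a group where two of four sampled programs solve the problem (`[1.0, 0.0, 0.0, 1.0]`) yields positive advantages for the successes and negative advantages for the failures, which is all the policy update needs.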
📝 Abstract
Automating optimization modeling with LLMs is a promising path toward scalable decision intelligence, but existing approaches either rely on agentic pipelines built on closed-source LLMs with high inference latency, or fine-tune smaller LLMs with costly process supervision that often overfits to a single solver API. Inspired by reinforcement learning with verifiable rewards, we propose Execution-Verified Optimization Modeling (EVOM), an execution-verified learning framework that treats a mathematical programming solver as a deterministic, interactive verifier. Given a natural-language problem and a target solver, EVOM generates solver-specific code, executes it in a sandboxed harness, converts the execution outcome into a scalar reward, and optimizes the policy with GRPO and DAPO in a closed generate-execute-feedback-update loop. This outcome-only formulation removes the need for process-level supervision and enables cross-solver generalization: switching solvers only requires switching the verification environment, not reconstructing solver-specific datasets. Experiments on NL4OPT, MAMO, IndustryOR, and OptiBench across Gurobi, OR-Tools, and COPT show that EVOM matches or outperforms process-supervised SFT, supports zero-shot solver transfer, and achieves effective low-cost solver adaptation by continuing training under the target solver backend.
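The execute-and-verify step described above can be sketched as follows. This is a hypothetical minimal harness, not the paper's implementation: it runs generated modeling code in a subprocess with a timeout and maps the outcome to a scalar reward. The reward values, tolerance, and the convention that the program prints its objective value on the last line are all assumptions for illustration.

```python
# Outcome-only reward sketch: execute generated solver code in a sandboxed
# subprocess and convert the result into a scalar reward.
#   1.0  -> runs and the printed objective matches the reference answer
#   0.1  -> runs and prints a number, but the objective is wrong
#   0.0  -> crashes, times out, or prints nothing parseable
import os
import subprocess
import sys
import tempfile

def outcome_reward(code: str, expected_obj: float,
                   tol: float = 1e-4, timeout: float = 10.0) -> float:
    """Execute `code` in a fresh interpreter and score the outcome."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=timeout)
        if proc.returncode != 0:          # solver code crashed
            return 0.0
        try:
            obj = float(proc.stdout.strip().splitlines()[-1])
        except (ValueError, IndexError):  # no parseable objective printed
            return 0.0
        return 1.0 if abs(obj - expected_obj) <= tol else 0.1
    except subprocess.TimeoutExpired:     # runaway solve
        return 0.0
    finally:
        os.unlink(path)
```

Because only the execution outcome is scored, the same harness verifies Gurobi, OR-Tools, or COPT code unchanged; switching the target solver means switching which backend the generated program imports, not relabeling training data.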