🤖 AI Summary
This work addresses the “feasibility–correctness gap” in large language models (LLMs) generating optimization code—where outputs are syntactically executable but semantically incorrect—by introducing a structured four-stage chain-of-thought reasoning framework and an unsupervised behavioral verification mechanism. The reasoning framework emulates expert modeling practice through phased deliberation, while the verification mechanism detects semantic errors via solver parameter perturbation without requiring ground-truth labels; execution failures are additionally recovered through Irreducible Inconsistent Subsystem (IIS) diagnosis. Evaluated across five state-of-the-art LLMs and three benchmarks, the approach consistently improves performance, boosting the top model’s correctness rate from 22.6% to 31.1% and raising its execution success rate from 72.1% to 100%. The study also releases the RetailOpt-190 dataset to support future research.
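The IIS diagnosis mentioned above can be illustrated with the classic deletion filter: given an infeasible constraint set, repeatedly try removing each constraint and keep the removal only if the remainder stays infeasible, leaving an irreducible conflicting core. The sketch below is a hypothetical illustration using `scipy.optimize.linprog`, not the paper's implementation (production solvers such as Gurobi expose this directly):

```python
# Hypothetical sketch of Irreducible Inconsistent Subsystem (IIS) extraction
# via the classic deletion filter -- an illustration, not the paper's code.
from scipy.optimize import linprog

def feasible(constraints):
    """Check feasibility of {row . x <= rhs} over free x, using a zero objective."""
    if not constraints:
        return True
    A = [row for row, _ in constraints]
    rhs = [r for _, r in constraints]
    res = linprog(c=[0.0] * len(A[0]), A_ub=A, b_ub=rhs,
                  bounds=[(None, None)] * len(A[0]))
    return res.status == 0  # status 0 = optimal (feasible), 2 = infeasible

def deletion_filter(constraints):
    """Shrink an infeasible constraint set to an irreducible conflicting core."""
    iis = list(constraints)
    i = 0
    while i < len(iis):
        trial = iis[:i] + iis[i + 1:]
        if not feasible(trial):
            iis = trial  # still infeasible without it: constraint is redundant
        else:
            i += 1       # needed to produce the conflict: keep it
    return iis

# x <= 1 and x >= 2 (encoded as -x <= -2) conflict; x <= 5 is redundant.
cons = [([1.0], 1.0), ([-1.0], -2.0), ([1.0], 5.0)]
core = deletion_filter(cons)
print(len(core))  # the irreducible conflict contains 2 constraints
```

The returned core points the repair step at exactly the constraints that cannot hold together, which is what makes IIS output actionable feedback for regenerating code.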
📝 Abstract
Large language models (LLMs) can translate natural language into optimization code, but silent failures pose a critical risk: code that executes and returns solver-feasible solutions may encode semantically incorrect formulations, creating a feasibility-correctness gap of up to 90 percentage points on compositional problems. We introduce ReLoop, which addresses silent failures from two complementary directions. Structured generation decomposes code production into a four-stage reasoning chain (understand, formalize, synthesize, verify) that mirrors expert modeling practice, with explicit variable-type reasoning and self-verification to prevent formulation errors at their source. Behavioral verification detects errors that survive generation by testing whether the formulation responds correctly to solver-based parameter perturbation, without requiring ground truth -- an external semantic signal that bypasses the self-consistency problem inherent in LLM-based code review. The two mechanisms are complementary: structured generation dominates on complex compositional problems, while behavioral verification becomes the largest single contributor on problems with localized formulation defects. Together with execution recovery via Irreducible Inconsistent Subsystem (IIS) enhanced diagnostics, ReLoop raises correctness from 22.6% to 31.1% and execution from 72.1% to 100.0% on the strongest model, with consistent gains across five models spanning three paradigms (foundation, SFT, RL) and three benchmarks. We additionally release RetailOpt-190, a benchmark of 190 compositional retail optimization scenarios targeting the multi-constraint interactions where LLMs most frequently fail.
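The core intuition behind behavioral verification can be sketched in a few lines: perturb a model parameter in a direction with a known qualitative effect (e.g., relaxing a binding capacity cannot hurt the optimum) and flag formulations whose objective responds the wrong way. The toy LP, the `solve` and `behavioral_check` helpers, and the monotonicity check below are illustrative assumptions built on `scipy.optimize.linprog`, not the paper's implementation:

```python
# Hypothetical sketch of behavioral verification via parameter perturbation,
# using scipy's linprog on a toy production LP (not the paper's implementation).
from scipy.optimize import linprog

def solve(capacity):
    # Toy LP: maximize 3x + 2y subject to x + y <= capacity, x, y >= 0.
    # linprog minimizes, so negate the objective and the result.
    res = linprog(c=[-3.0, -2.0], A_ub=[[1.0, 1.0]], b_ub=[capacity],
                  bounds=[(0, None)] * 2)
    return -res.fun  # optimal profit

def behavioral_check(solve_fn, base=10.0, delta=1.0):
    """Relaxing a binding capacity must not decrease the optimal profit.
    A formulation whose objective moves the wrong way under this known
    perturbation is flagged as semantically suspect -- no ground-truth
    solution is needed, only the expected direction of the response."""
    return solve_fn(base + delta) >= solve_fn(base) - 1e-9

print(behavioral_check(solve))  # prints True: the formulation passes
```

Note that this check is a one-sided detector: a wrong formulation can still respond in the expected direction, so passing the perturbation test raises confidence rather than proving correctness.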