🤖 AI Summary
For natural-language-to-optimization-modeling approaches, this work addresses the lack of evaluation benchmarks aligned with real-world, industrial-scale optimization models, a gap that hinders reliable performance assessment on large-scale practical problems. To bridge it, the authors propose a structure-aware inverse construction method that recovers compact model structures from real mixed-integer linear programming (MILP) instances in MIPLIB 2017 and generates semantically precise natural-language descriptions. The result is MIPLIB-NL, the first industrial-grade benchmark supporting model-data separation, comprising 223 one-to-one reconstructed instances validated through expert review and human-in-the-loop iteration. Evaluations on the benchmark reveal significant performance degradation of current systems on authentic industrial-scale problems and uncover failure modes invisible to toy-scale benchmarks, establishing a reliable standard for evaluating automated optimization modeling.
📝 Abstract
Optimization modeling underpins decision-making in logistics, manufacturing, energy, and finance, yet translating natural-language requirements into correct optimization formulations and solver-executable code remains labor-intensive. Although large language models (LLMs) have been explored for this task, evaluation is still dominated by toy-sized or synthetic benchmarks, masking the difficulty of industrial problems with $10^{3}$--$10^{6}$ (or more) variables and constraints. A key bottleneck is the lack of benchmarks that align natural-language specifications with reference formulations/solver code grounded in real optimization models. To fill this gap, we introduce MIPLIB-NL, built via a structure-aware reverse construction methodology from real mixed-integer linear programs in MIPLIB~2017. Our pipeline (i) recovers compact, reusable model structure from flat solver formulations, (ii) reverse-generates natural-language specifications explicitly tied to this recovered structure under a unified model--data separation format, and (iii) performs iterative semantic validation through expert review and human--LLM interaction with independent reconstruction checks. This yields 223 one-to-one reconstructions that preserve the mathematical content of the original instances while enabling realistic natural-language-to-optimization evaluation. Experiments show substantial performance degradation on MIPLIB-NL for systems that perform strongly on existing benchmarks, exposing failure modes invisible at toy scale.
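To make step (i) concrete, here is a toy sketch (not the paper's actual pipeline; all variable names and data are hypothetical) of the core idea behind recovering compact structure from a flat formulation: a solver-level MILP file lists every constraint explicitly, but index-shifted copies of one symbolic constraint family share the same coefficient pattern, so grouping constraints by a name-agnostic signature exposes the repeated families that a model--data-separated formulation would state once.

```python
from collections import defaultdict

# Hypothetical flat MILP, written out constraint by constraint the way a
# solver-level file (e.g. MPS) presents it. Each constraint is a tuple of
# (coefficients by variable name, sense, right-hand side).
flat_constraints = [
    ({"x1": 1.0, "y1": -5.0}, "<=", 0.0),            # x1 <= 5*y1  (linking)
    ({"x2": 1.0, "y2": -5.0}, "<=", 0.0),            # x2 <= 5*y2  (linking)
    ({"x3": 1.0, "y3": -5.0}, "<=", 0.0),            # x3 <= 5*y3  (linking)
    ({"x1": 1.0, "x2": 1.0, "x3": 1.0}, ">=", 7.0),  # demand cover
]

def structural_signature(coeffs, sense):
    """Signature that drops variable names/indices and keeps only the
    multiset of coefficients plus the constraint sense, so index-shifted
    copies of one symbolic constraint family collapse together."""
    return (tuple(sorted(coeffs.values())), sense)

# Group flat constraints into candidate symbolic families; each family
# corresponds to one indexed constraint in a compact formulation, with the
# per-member right-hand sides and index sets becoming "data".
families = defaultdict(list)
for coeffs, sense, rhs in flat_constraints:
    families[structural_signature(coeffs, sense)].append((coeffs, rhs))

for sig, members in families.items():
    print(f"family {sig}: {len(members)} constraint(s)")
```

Real instances need far more care (scaled coefficients, sparsity patterns, block structure), but the grouping-by-invariant idea is the same: the recovered families are what a natural-language specification can then describe symbolically.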