OptiMUS-0.3: Using Large Language Models to Model and Solve Optimization Problems at Scale

📅 2024-07-29
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Many optimization problems in manufacturing, logistics, and healthcare are still solved with manual heuristics because of the high modeling barrier to Mixed-Integer Linear Programming (MILP). Method: This paper proposes an end-to-end framework that automates MILP modeling and solving from natural language descriptions. It introduces a modular large language model (LLM) architecture integrating natural language understanding, program synthesis, code debugging, solution-quality verification, and feedback-driven iterative refinement. It also releases NLP4LP, a new benchmark of long, complex linear programming problems specified in natural language. Contribution/Results: Experiments show that the framework outperforms state-of-the-art methods by more than 12% on easy instances and more than 8% on hard instances, including those in NLP4LP, advancing automated modeling and efficient solving of large-scale real-world optimization problems.

📝 Abstract
Optimization problems are pervasive in sectors from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-the-art solvers because the expertise required to formulate and solve these problems limits the widespread adoption of optimization tools and techniques. We introduce a Large Language Model (LLM)-based system designed to formulate and solve (mixed integer) linear programming problems from their natural language descriptions. Our system is capable of developing mathematical models, writing and debugging solver code, evaluating the generated solutions, and improving the efficiency and correctness of its model and code based on these evaluations. OptiMUS-0.3 utilizes a modular structure to process problems, allowing it to handle problems with long descriptions and complex data without long prompts. Experiments demonstrate that OptiMUS-0.3 outperforms existing state-of-the-art methods on easy datasets by more than 12% and on hard datasets (including a new dataset, NLP4LP, released with this paper that features long and complex problems) by more than 8%.
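The modular loop the abstract describes (formulate a model, generate solver code, evaluate the solution, refine) can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the function names are invented, a brute-force integer search stands in for a real MILP solver, and no LLM calls are made.

```python
from itertools import product

def formulate(description):
    """Stand-in for an LLM 'formulator' agent. Returns a fixed toy MILP:
    maximize 3x + 2y  subject to  x + y <= 4,  x, y integer in [0, 4]."""
    return {"c": [3, 2], "A": [[1, 1]], "b": [4], "ub": 4}

def feasible(model, x):
    """Check every constraint row A·x <= b."""
    return all(sum(a * v for a, v in zip(row, x)) <= rhs
               for row, rhs in zip(model["A"], model["b"]))

def solve(model):
    """Stand-in for generated solver code: brute-force over the integer box."""
    best_obj, best_x = None, None
    for x in product(range(model["ub"] + 1), repeat=len(model["c"])):
        if feasible(model, x):
            obj = sum(c * v for c, v in zip(model["c"], x))
            if best_obj is None or obj > best_obj:
                best_obj, best_x = obj, x
    return best_obj, best_x

def evaluate(model, x):
    """Stand-in for the 'evaluator' step: verify the returned solution."""
    return x is not None and feasible(model, x)

description = "Maximize profit 3x + 2y with at most 4 total units produced."
model = formulate(description)
obj, x = solve(model)
assert evaluate(model, x)
print(obj, x)  # → 12 (4, 0)
```

In the actual system each of these stages is an LLM-driven module (and the solver is a real MILP solver), with the evaluation feeding back into model and code revisions; the sketch only mirrors that control flow.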
Problem

Research questions and friction points this paper is trying to address.

Automate optimization problem formulation
Enhance solver code efficiency
Handle complex natural language descriptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based optimization system
Modular problem processing structure
Automates model and code generation
Ali AhmadiTeshnizi
School of Management Science and Engineering, Stanford University, Stanford, California 94305
Wenzhi Gao
PhD student, Stanford University
Optimization, Mathematical programming
Herman Brunborg
Institute for Computational and Mathematical Engineering, Stanford University, Stanford, California 94305
Shayan Talaei
Student at Stanford University
Test-time Scaling, Reasoning, Text-to-SQL, Distributed Optimization
Madeleine Udell
Assistant Professor, Management Science and Engineering, Stanford University
Optimization, Machine Learning, Data Science