Fast and Interpretable Mixed-Integer Linear Program Solving by Learning Model Reduction

πŸ“… 2024-12-31
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the low computational efficiency of large-scale mixed-integer linear programming (MILP) solvers and the poor scalability of end-to-end learning approaches, this paper proposes a novel paradigm centered on β€œlearning equivalence-preserving model reduction.” It introduces a preference-driven model reduction learning framework that models relative performance preferences among all reduced MILP formulations, employs an attention mechanism to capture pairwise preference relations, and incorporates a SetCover-based pruning strategy to keep the label space tractable. Experiments on real-world MILP instances show that the method improves solution accuracy by nearly 20% over state-of-the-art model reduction techniques and achieves two-to-four orders-of-magnitude speedups over the commercial solver Gurobi, significantly enhancing both accuracy and efficiency in solving large-scale MILP problems.

πŸ“ Abstract
By exploiting the correlation between the structure and the solution of Mixed-Integer Linear Programming (MILP), Machine Learning (ML) has become a promising method for solving large-scale MILP problems. Existing ML-based MILP solvers mainly focus on end-to-end solution learning, which suffers from the scalability issue due to the high dimensionality of the solution space. Instead of directly learning the optimal solution, this paper aims to learn a reduced and equivalent model of the original MILP as an intermediate step. The reduced model often corresponds to interpretable operations and is much simpler, enabling us to solve large-scale MILP problems much faster than existing commercial solvers. However, current approaches rely only on the optimal reduced model, overlooking the significant preference information of all reduced models. To address this issue, this paper proposes a preference-based model reduction learning method, which considers the relative performance (i.e., objective cost and constraint feasibility) of all reduced models on each MILP instance as preferences. We also introduce an attention mechanism to capture and represent preference information, which helps improve the performance of model reduction learning tasks. Moreover, we propose a SetCover based pruning method to control the number of reduced models (i.e., labels), thereby simplifying the learning process. Evaluation on real-world MILP problems shows that 1) compared to the state-of-the-art model reduction ML methods, our method obtains nearly 20% improvement on solution accuracy, and 2) compared to the commercial solver Gurobi, two to four orders of magnitude speedups are achieved.
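The abstract's SetCover-based pruning step can be pictured with a standard greedy set-cover routine: select a small subset of candidate reduced models (labels) such that every training instance is still "covered" by at least one selected model that solves it acceptably. The sketch below is an illustration only, not the paper's implementation; the function name, the coverage representation, and the toy data are all assumptions.

```python
# Hypothetical sketch of SetCover-style label pruning (not the paper's code).
# coverage[m] = set of instance ids that reduced model m solves acceptably;
# greedy set cover repeatedly picks the model covering the most uncovered
# instances, shrinking the label space while preserving instance coverage.

def greedy_setcover_pruning(coverage: dict, instances: set) -> list:
    uncovered = set(instances)
    selected = []
    while uncovered:
        # pick the reduced model covering the most still-uncovered instances
        best = max(coverage, key=lambda m: len(coverage[m] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:  # remaining instances cannot be covered by any model
            break
        selected.append(best)
        uncovered -= gained
    return selected

# toy example: 5 instances, 4 candidate reduced models
cov = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5}, 3: {1}}
print(greedy_setcover_pruning(cov, {1, 2, 3, 4, 5}))  # β†’ [0, 2]
```

Greedy set cover gives a logarithmic approximation to the minimum cover, which is typically good enough when the goal is only to bound the number of labels the learner must discriminate among.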
Problem

Research questions and friction points this paper is trying to address.

Machine Learning
Mixed Integer Linear Programming
Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

MILP Simplification
Preference Analysis
Enhanced Solving Efficiency
Yixuan Li
School of Computer Science and Engineering, Southeast University
Can Chen
School of Computer Science and Engineering, Southeast University
Jiajun Li
School of Computer Science and Engineering, Southeast University
Jiahui Duan
University of Notre Dame
Xiongwei Han
AI&OR Principal Researcher at Noah's Ark Lab, Huawei
Intelligence Modeling Β· LLMs for OR
Tao Zhong
Noah’s Ark Lab, Huawei Technologies
Vincent Chau
Southeast University
Weiwei Wu
Computer Science, Southeast University
Wanyuan Wang
Southeast University
Artificial Intelligence Β· Multiagent Systems Β· Game Theory