🤖 AI Summary
To address the low computational efficiency of large-scale mixed-integer linear programming (MILP) solvers and the poor scalability of end-to-end learning approaches, this paper proposes a novel paradigm centered on "learning equivalence-preserving model reduction." We introduce a preference-driven model reduction learning framework: it models relative performance preferences among all reduced MILP formulations, employs an attention mechanism to capture pairwise preference relations, and incorporates a SetCover-based pruning strategy to efficiently control the size of the label space. Experiments on real-world MILP instances demonstrate that our method improves solution accuracy by nearly 20% over state-of-the-art model reduction techniques and achieves a two-to-four orders-of-magnitude speedup compared to the commercial solver Gurobi. The approach thus significantly enhances both accuracy and efficiency in solving large-scale MILP problems.
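The "relative performance preferences among reduced formulations" can be cast as a standard pairwise ranking objective. Below is a minimal, hypothetical sketch of a Bradley-Terry style preference loss over scored reduced-model candidates; the function name, the scoring interface, and the dictionary-based representation are illustrative assumptions, not the paper's actual attention-based architecture.

```python
import math

def pairwise_preference_loss(scores, preferences):
    """Bradley-Terry style pairwise preference loss (illustrative sketch).

    scores: dict mapping a reduced-model label to the learned score for it.
    preferences: list of (winner, loser) pairs, meaning the winner's reduced
        model performed better (lower cost / more feasible) on an instance.
    Returns the mean negative log-likelihood that each winner outranks its loser.
    """
    total = 0.0
    for winner, loser in preferences:
        margin = scores[winner] - scores[loser]
        total += math.log(1.0 + math.exp(-margin))  # -log sigmoid(margin)
    return total / len(preferences)
```

A model that correctly scores preferred reductions higher drives this loss below log 2 (the random-guessing baseline); in the paper's setting the scores would come from the attention network rather than a fixed dictionary.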
📝 Abstract
By exploiting the correlation between the structure and the solution of Mixed-Integer Linear Programming (MILP), Machine Learning (ML) has become a promising method for solving large-scale MILP problems. Existing ML-based MILP solvers mainly focus on end-to-end solution learning, which suffers from scalability issues due to the high dimensionality of the solution space. Instead of directly learning the optimal solution, this paper aims to learn a reduced and equivalent model of the original MILP as an intermediate step. The reduced model often corresponds to interpretable operations and is much simpler, enabling us to solve large-scale MILP problems much faster than existing commercial solvers. However, current approaches rely only on the optimal reduced model, overlooking the significant preference information carried by all reduced models. To address this issue, this paper proposes a preference-based model reduction learning method, which treats the relative performance (i.e., objective cost and constraint feasibility) of all reduced models on each MILP instance as preferences. We also introduce an attention mechanism to capture and represent preference information, which helps improve the performance of the model reduction learning task. Moreover, we propose a SetCover-based pruning method to control the number of reduced models (i.e., labels), thereby simplifying the learning process. Evaluation on real-world MILP problems shows that 1) compared to state-of-the-art model reduction ML methods, our method achieves nearly 20% improvement in solution accuracy, and 2) compared to the commercial solver Gurobi, speedups of two to four orders of magnitude are achieved.
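The SetCover-based pruning step can be illustrated with the classic greedy set-cover approximation. The sketch below is an assumption about how such pruning could work: each candidate reduced model is associated with the set of training instances on which it performs acceptably, and a small label subset that still covers every instance is selected greedily. The function name and the coverage representation are hypothetical, not taken from the paper.

```python
def greedy_setcover_prune(covers):
    """Greedy set-cover pruning of candidate reduced models (illustrative sketch).

    covers: dict mapping a reduced-model label to the set of instance ids on
        which that reduced model performs acceptably.
    Returns a small subset of labels whose coverage sets together span all
    instances, approximating the smallest such label set.
    """
    uncovered = set().union(*covers.values())
    chosen = []
    while uncovered:
        # Pick the label covering the most still-uncovered instances.
        best = max(covers, key=lambda label: len(covers[label] & uncovered))
        gain = covers[best] & uncovered
        if not gain:
            break  # no remaining label covers anything new
        chosen.append(best)
        uncovered -= gain
    return chosen
```

The greedy rule gives the standard (1 + ln n) approximation to the optimal cover, which is why it is a natural way to keep the label space small without losing coverage of the training instances.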