Dual Interior-Point Optimization Learning

📅 2024-02-04
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the slow convergence of conventional solvers on large-scale constrained optimization problems and their inability to meet real-time requirements, this paper proposes a machine learning surrogate method that learns dual feasible solutions, thereby overcoming the lack of theoretical guarantees inherent in existing primal-only surrogates. The approach tightly integrates linear programming structure, convex optimization theory, and dual modeling. Key contributions: (1) the first smooth, self-supervised dual loss function, enabling end-to-end differentiable training; and (2) an analytical dual completion strategy, free of implicit layers, that rigorously ensures dual feasibility and enables millisecond-scale inference. Evaluated on large-scale linear optimization benchmarks, the method achieves a <1% optimality gap while running several orders of magnitude faster than commercial solvers, significantly outperforming unstructured surrogate models.

📝 Abstract
In many practical applications of constrained optimization, scale and solving time limits make traditional optimization solvers prohibitively slow. Thus, the research question of how to design optimization proxies -- machine learning models that produce high-quality solutions -- has recently received significant attention. Orthogonal to this research thread, which focuses on learning primal solutions, this paper studies how to learn dual feasible solutions that complement primal approaches and provide quality guarantees. The paper makes two distinct contributions. First, to train dual linear optimization proxies, the paper proposes a smoothed self-supervised loss function that augments the objective function with a dual penalty term. Second, the paper proposes a novel dual completion strategy that guarantees dual feasibility by solving a convex optimization problem. Moreover, the paper derives closed-form solutions to this completion optimization for several classes of dual penalties, eliminating the need for computationally heavy implicit layers. Numerical results are presented on large linear optimization problems and demonstrate the effectiveness of the proposed approach. The proposed dual completion outperforms methods for learning optimization proxies which do not exploit the structure of the dual problem. Compared to commercial optimization solvers, the learned dual proxies achieve optimality gaps below 1% and speedups of several orders of magnitude.
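
To make the completion idea concrete: for an LP of the form min c^T x s.t. Ax = b, l <= x <= u, the dual is max b^T y + l^T z_l - u^T z_u s.t. A^T y + z_l - z_u = c with z_l, z_u >= 0. Any predicted y can then be completed in closed form by splitting the residual r = c - A^T y into its positive and negative parts, with no implicit layer. Below is a minimal PyTorch sketch under this assumed bounded standard form; the paper's exact formulation, penalty classes, and naming may differ.

```python
import torch

def complete_dual(y, A, c):
    """Closed-form dual completion for the LP  min c^T x  s.t.  Ax = b,
    l <= x <= u, whose dual is
        max  b^T y + l^T z_l - u^T z_u
        s.t. A^T y + z_l - z_u = c,   z_l >= 0,  z_u >= 0.
    Splitting the residual r = c - A^T y into its positive and negative
    parts makes (y, z_l, z_u) dual feasible for ANY predicted y."""
    r = c - A.T @ y                  # residual of the dual equality
    z_l = torch.clamp(r, min=0.0)    # duals of the lower bounds
    z_u = torch.clamp(-r, min=0.0)   # duals of the upper bounds
    return y, z_l, z_u

def dual_bound(y, z_l, z_u, b, l, u):
    """Dual objective of a feasible point: a valid lower bound on the
    primal optimum of the minimization LP."""
    return b @ y + l @ z_l - u @ z_u
```

Because feasibility holds by construction, the returned dual objective is always a certified bound on the primal optimum, which is what gives the learned proxy its quality guarantee.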
Problem

Research questions and friction points this paper is trying to address.

Designing dual feasible optimization proxies
Learning dual solutions with quality guarantees
Improving speed and optimality in constrained optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Smoothed self-supervised loss function (see the sketch after this list)
Novel dual completion strategy
Closed-form solutions for dual penalties
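
The exact clamp-based split in the earlier sketch is non-differentiable at r = 0, which is what motivates a smoothed training loss. The sketch below is a hypothetical stand-in for the paper's dual-penalty loss: it replaces the split with a softplus pair, and the identity softplus(r) - softplus(-r) = r means the smoothed point still satisfies the dual equality exactly, with the bound tightening toward the clamp-based completion as beta grows. At inference time, the exact completion restores the tightest bound while keeping feasibility.

```python
import torch
import torch.nn.functional as F

def smoothed_dual_loss(y, A, b, c, l, u, beta=10.0):
    """Self-supervised: needs no precomputed optimal duals as labels.
    softplus replaces the nonsmooth clamp split; since
    softplus(r) - softplus(-r) = r for any beta, the dual equality
    A^T y + z_l - z_u = c still holds exactly, and the smoothed bound
    approaches the clamp-based one as beta -> infinity."""
    r = c - A.T @ y
    z_l = F.softplus(r, beta=beta)
    z_u = F.softplus(-r, beta=beta)
    dual_obj = b @ y + l @ z_l - u @ z_u
    return -dual_obj  # maximize the dual bound by minimizing its negative
```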
Michael Klamkin
Georgia Institute of Technology, AI4OPT
machine learning, constrained optimization

Mathieu Tanneau
Georgia Institute of Technology

P. V. Hentenryck
NSF AI Institute for Advances in Optimization, Georgia Institute of Technology, Atlanta, USA