Differentiation Through Black-Box Quadratic Programming Solvers

πŸ“… 2024-10-08
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
Existing differentiable quadratic programming (QP) methods rely on solver-specific implementations, which hinders integration into neural networks and bilevel optimization pipelines and restricts solver choice. This paper introduces dQP, the first explicit differentiation framework based on the active set: it makes arbitrary black-box QP solvers end-to-end differentiable (15+ mainstream solvers are supported) without modifying solver source code, since only the optimal solution and the active constraint set are needed for the backward pass. dQP combines convex optimization theory, implicit differentiation, and automatic differentiation, and handles both small dense and large sparse QP problems. Experiments show that dQP consistently outperforms prior differentiable QP methods across diverse benchmarks, and a novel bilevel geometric optimization task showcases applicability beyond standard QP settings.
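
The mechanism behind this is compact enough to sketch. Once the active set at the optimum is known, the inequality-constrained QP locally reduces to an equality-constrained one, and reverse-mode derivatives follow from a single linear solve against the reduced KKT matrix. The NumPy sketch below illustrates the idea; the function name `qp_active_set_backward` and the regularity assumptions in its docstring (LICQ, strict complementarity) are our illustrative choices, not code from the paper.

```python
import numpy as np

def qp_active_set_backward(Q, A_act, x_star, grad_x):
    """Reverse-mode sensitivities of a QP solution via its active set.

    Near the optimum, min 0.5 x^T Q x + q^T x  s.t.  G x <= h behaves like
    the equality-constrained problem A_act x = b_act, where A_act stacks the
    constraint rows that are tight at x_star. Given grad_x = dL/dx*, one
    symmetric KKT solve yields dL/dq, dL/db_act, and dL/dQ.
    Assumes LICQ and strict complementarity, so K below is invertible.
    """
    n, m = Q.shape[0], A_act.shape[0]
    K = np.block([[Q, A_act.T],
                  [A_act, np.zeros((m, m))]])      # reduced KKT matrix
    # Adjoint solve; K is symmetric, so no transpose is needed.
    sol = np.linalg.solve(K, np.concatenate([grad_x, np.zeros(m)]))
    u, v = sol[:n], sol[n:]
    grad_q = -u    # from stationarity  Q x + q + A_act^T lam = 0
    grad_b = v     # from feasibility   A_act x - b_act = 0
    grad_Q = -0.5 * (np.outer(u, x_star) + np.outer(x_star, u))  # symmetrized
    return grad_q, grad_b, grad_Q
```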

πŸ“ Abstract
In recent years, many deep learning approaches have incorporated layers that solve optimization problems (e.g., linear, quadratic, and semidefinite programs). Integrating these optimization problems as differentiable layers requires computing the derivatives of the optimization problem's solution with respect to its objective and constraints. This has so far prevented the use of state-of-the-art black-box numerical solvers within neural networks, as they lack a differentiable interface. To address this issue for one of the most common convex optimization problems -- quadratic programming (QP) -- we introduce dQP, a modular framework that enables plug-and-play differentiation for any QP solver, allowing seamless integration into neural networks and bi-level optimization tasks. Our solution is based on the core theoretical insight that knowledge of the active constraint set at the QP optimum allows for explicit differentiation. This insight reveals a unique relationship between the computation of the solution and its derivative, enabling efficient differentiation of any solver using only the primal solution. Our implementation, which will be made publicly available, interfaces with an existing framework that supports over 15 state-of-the-art QP solvers, providing each with a fully differentiable backbone for immediate use as a differentiable layer in learning setups. To demonstrate the scalability and effectiveness of dQP, we evaluate it on a large benchmark dataset of QPs with varying structures. We compare dQP with existing differentiable QP methods, demonstrating its advantages across a range of problems, from challenging small and dense problems to large-scale sparse ones, including a novel bi-level geometry optimization problem.
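
To make the differentiable-layer claim concrete, here is a hedged PyTorch sketch of how such a layer could be assembled on top of the qpsolvers front end (one existing library exposing 15+ backends behind a single `solve_qp` call). The class name `QPLayer`, the `1e-6` activity tolerance, the OSQP backend choice, and the decision to propagate gradients only to `Q` and `q` are our illustrative assumptions, not dQP's published interface.

```python
import numpy as np
import torch
from qpsolvers import solve_qp  # front end over 15+ QP backends

class QPLayer(torch.autograd.Function):
    """Illustrative differentiable layer around a black-box QP solver.

    Forward: solve min_x 0.5 x^T Q x + q^T x  s.t.  G x <= h with any
    backend. Backward: implicit differentiation through the KKT system
    of the constraints active at the optimum, using the primal solution.
    """

    @staticmethod
    def forward(ctx, Q, q, G, h):
        # Black-box solve; returns None on failure (handling omitted).
        x = solve_qp(*(t.detach().numpy() for t in (Q, q, G, h)),
                     solver="osqp")
        # Active set from the primal solution; the tolerance is an
        # assumption and must be matched to the solver's accuracy.
        active = np.abs(G.detach().numpy() @ x - h.detach().numpy()) < 1e-6
        x_t = torch.from_numpy(x).to(Q.dtype)
        ctx.save_for_backward(Q, G[torch.from_numpy(active)], x_t)
        return x_t

    @staticmethod
    def backward(ctx, grad_x):
        Q, A_act, x = ctx.saved_tensors
        n, m = Q.shape[0], A_act.shape[0]
        # Reduced KKT matrix of the equality-constrained local problem.
        K = torch.zeros(n + m, n + m, dtype=Q.dtype)
        K[:n, :n], K[:n, n:], K[n:, :n] = Q, A_act.T, A_act
        rhs = torch.cat([grad_x, torch.zeros(m, dtype=Q.dtype)])
        u, _ = torch.linalg.solve(K, rhs).split([n, m])
        grad_Q = -0.5 * (torch.outer(u, x) + torch.outer(x, u))
        # Gradients for G and h follow the same pattern (h needs the second
        # block of the solve, G additionally needs recovered duals); omitted.
        return grad_Q, -u, None, None
```

In use, `x_star = QPLayer.apply(Q, q, G, h)` drops into an ordinary PyTorch model, and swapping the `solver` string to any other installed backend leaves the backward pass untouched.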
Problem

Research questions and friction points this paper is trying to address.

Differentiating QP solutions without solver limitations
Enabling plug-and-play QP solver differentiation
Improving scalability for large sparse QP problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular framework for QP solver differentiation
Decouples solution and differentiation via active set
Integrates with 15+ solvers at minimal overhead (see the sketch below)
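
The plug-and-play bullet above can be read literally: because the backward pass consumes only the primal solution and the detected active set, the forward solver is interchangeable. A minimal sketch, reusing the `qp_active_set_backward` helper from the AI summary above (the backend names, tolerance, and toy projection problem are our choices):

```python
import numpy as np
from qpsolvers import solve_qp  # single front end over many QP backends

# Toy problem: project the point p onto the box -1 <= x <= 1.
n = 3
Q, p = np.eye(n), np.array([2.0, -1.0, 0.5])
q = -p                                   # 0.5||x - p||^2 up to a constant
G = np.vstack([np.eye(n), -np.eye(n)])
h = np.ones(2 * n)

for backend in ("osqp", "scs", "quadprog"):   # any installed backend works
    x_star = solve_qp(Q, q, G, h, solver=backend)
    # Detect the active set from the primal solution alone; in practice the
    # threshold must be matched to the solver's accuracy settings.
    active = np.abs(G @ x_star - h) < 1e-6
    # Backward pass is identical regardless of backend; here the loss is
    # L(x) = 0.5||x||^2, so dL/dx = x_star.
    grad_q, grad_b, grad_Q = qp_active_set_backward(Q, G[active],
                                                    x_star, x_star)
```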
Connor W. Magoon
University of North Carolina at Chapel Hill
Fengyu Yang
University of North Carolina at Chapel Hill
Noam Aigerman
Associate Professor at University of Montreal
Computer Graphics Β· Geometry Processing Β· Deep Learning Β· Optimization
Shahar Z. Kovalsky
University of North Carolina at Chapel Hill