AI Summary
This paper proposes FlexQP, a primal-feasible quadratic programming optimizer for nonlinear programming problems whose constraints may be feasible (in which case it seeks the optimal solution) or infeasible (in which case it minimizes constraint violations sparsely). FlexQP ensures strict primal feasibility via exact constraint relaxation and incorporates a deep unrolling architecture for data-driven acceleration, yielding Deep FlexQP. Its key contributions are: (1) the first unified framework integrating exact constraint relaxation with sparse constraint-violation minimization within a feasible-domain optimization paradigm; (2) dimension-agnostic policy learning and warm-start initialization; and (3) PAC-Bayes generalization guarantees. Experiments on portfolio optimization, classification, and regression demonstrate faster convergence, reduced runtime, and substantial improvements over state-of-the-art accelerated QP solvers. Moreover, Deep FlexQP scales effectively to high-dimensional and long-sequence settings.
Abstract
We propose an always-feasible quadratic programming (QP) optimizer, FlexQP, which is based on an exact relaxation of the QP constraints. If the original constraints are feasible, then the optimizer finds the optimal solution to the original QP. On the other hand, if the constraints are infeasible, the optimizer identifies a solution that minimizes the constraint violation in a sparse manner. FlexQP scales favorably with respect to the problem dimension, is robust to both feasible and infeasible QPs with minimal assumptions on the problem data, and can be effectively warm-started. We subsequently apply deep unfolding to improve our optimizer through data-driven techniques, leading to an accelerated Deep FlexQP. By learning dimension-agnostic feedback policies for the parameters from a small number of training examples, Deep FlexQP generalizes to problems with larger dimensions and can optimize for many more iterations than it was initially trained for. Our approach outperforms two recently proposed state-of-the-art accelerated QP approaches on a suite of benchmark systems including portfolio optimization, classification, and regression problems. We provide guarantees on the expected performance of our deep QP optimizer through probably approximately correct (PAC) Bayes generalization bounds. These certificates are used to design an accelerated sequential quadratic programming solver that solves nonlinear optimal control and predictive safety filter problems faster than traditional approaches. Overall, our approach is very robust and greatly outperforms existing non-learning and learning-based optimizers in terms of both runtime and convergence to the optimal solution across multiple classes of NLPs.
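To give a feel for the exact-relaxation idea described above, here is a minimal toy sketch (not the paper's FlexQP algorithm): each inequality constraint is softened with a nonnegative slack penalized by an L1 term. When the constraints are feasible and the penalty weight `rho` is large enough, the relaxed minimizer coincides with the original one; when they are infeasible, the L1 penalty drives the violation to be sparse, concentrating it on as few constraints as possible. The problem instance and the brute-force grid search below are illustrative choices, not from the paper.

```python
# Toy illustration of an exact L1 relaxation (not FlexQP itself).
# Minimize x^2 subject to x >= 2 and x <= 1 (mutually infeasible).
# Relaxed objective: x^2 + rho * (sum of constraint violations).
# For large enough rho the violation is sparse: only one of the two
# incompatible constraints is broken at the minimizer.

def penalized_objective(x, rho=10.0):
    v1 = max(0.0, 2.0 - x)   # violation of x >= 2
    v2 = max(0.0, x - 1.0)   # violation of x <= 1
    return x * x + rho * (v1 + v2)

# Brute-force the 1-D relaxed problem on a fine grid
# (a real solver would solve the relaxed QP directly).
grid = [-3.0 + 0.01 * k for k in range(601)]
x_best = min(grid, key=penalized_objective)

violations = (max(0.0, 2.0 - x_best), max(0.0, x_best - 1.0))
print(x_best, violations)
```

At the minimizer x = 1, the constraint x <= 1 holds exactly while x >= 2 absorbs the entire infeasibility, illustrating the sparse violation behavior the abstract attributes to FlexQP in the infeasible case.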