🤖 AI Summary
Addressing the challenges of difficult hyperparameter tuning and slow convergence in convex quadratic programming (QP) solvers, this paper introduces, for the first time, a reinforcement learning–based approach to the stabilized interior-point method (IPM). The authors propose a double-loop online adaptive hyperparameter control framework: an outer loop employs a policy network to dynamically adjust key IPM parameters, including damping factors and step-size scaling, while the inner loop performs standard numerical optimization iterations. The method requires no problem-specific prior knowledge and generalizes across diverse QP problem classes and dimensions. Empirical evaluation demonstrates that, after lightweight training, the learned policy significantly reduces the time needed to reach high-accuracy solutions, achieving average speedups of 1.8–3.2× over baseline solvers on QP instances of varying scales, while outperforming conventional heuristics and Bayesian optimization in robustness and adaptability.
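The double-loop structure described above can be sketched in a toy setting. The snippet below is a minimal illustration, not the paper's implementation: it solves a small non-negatively constrained QP with a log-barrier interior-point scheme, where an outer loop queries a `policy` function (a stand-in for the trained policy network, with fixed illustrative weights) to pick the damping factor, step-size scaling, and barrier-shrink rate, and an inner loop runs plain damped Newton iterations. All names and parameter ranges here are assumptions for the sketch.

```python
import numpy as np


def barrier_newton_step(Q, c, x, mu, damping):
    """One damped Newton step on the log-barrier subproblem
    min 1/2 x'Qx + c'x - mu * sum(log x)."""
    grad = Q @ x + c - mu / x
    hess = Q + np.diag(mu / x**2) + damping * np.eye(len(x))
    return np.linalg.solve(hess, -grad)


def policy(features, W, b):
    """Stand-in for the learned policy network: a squashed linear layer
    mapping solver-state features to (damping, step_scale, mu_shrink).
    The paper trains this mapping with RL; the weights here are fixed
    illustrative values."""
    z = W @ features + b
    damping   = 1e-8 + 1e-2 / (1.0 + np.exp(-z[0]))  # in (1e-8, ~1e-2)
    step      = 0.5  + 0.45 / (1.0 + np.exp(-z[1]))  # in (0.5, 0.95)
    mu_shrink = 0.1  + 0.4  / (1.0 + np.exp(-z[2]))  # in (0.1, 0.5)
    return damping, step, mu_shrink


def solve_qp_nonneg(Q, c, outer_iters=30, inner_iters=5):
    """min 1/2 x'Qx + c'x  s.t. x >= 0, with IPM control parameters
    chosen each outer iteration by the (mock) policy."""
    n = len(c)
    x, mu = np.ones(n), 1.0
    rng = np.random.default_rng(0)
    W, b = 0.1 * rng.standard_normal((3, 2)), np.zeros(3)
    for _ in range(outer_iters):
        # Outer loop: observe solver state, let the policy set the knobs.
        feats = np.array([np.log10(mu + 1e-16),
                          np.log10(np.linalg.norm(Q @ x + c) + 1e-16)])
        damping, step, mu_shrink = policy(feats, W, b)
        for _ in range(inner_iters):
            # Inner loop: standard numerical iterations.
            dx = barrier_newton_step(Q, c, x, mu, damping)
            # Fraction-to-the-boundary rule keeps x strictly positive.
            neg = dx < 0
            alpha = min(1.0, 0.99 * np.min(-x[neg] / dx[neg])) if neg.any() else 1.0
            x = x + step * alpha * dx
        mu *= mu_shrink  # tighten the barrier for the next outer pass
    return x
```

For `Q = diag(2, 2)` and `c = (-2, 1)`, the constrained minimizer is `x = (1, 0)`, and the sketch converges to it as `mu` is driven toward zero. In the paper's setting, the policy weights would instead be trained so that the chosen parameters minimize time-to-accuracy across a distribution of QP instances.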
📝 Abstract
Quadratic programming is a workhorse of modern nonlinear optimization, control, and data science. Although regularized methods offer convergence guarantees under minimal assumptions on the problem data, they can exhibit the slow tail-convergence typical of first-order schemes, thus requiring many iterations to reach high-accuracy solutions. Moreover, hyperparameter tuning significantly impacts solver performance, but how to find an appropriate parameter configuration remains an open research question. To address these issues, we explore how data-driven approaches can accelerate the solution process. Aiming at high-accuracy solutions, we focus on a stabilized interior-point solver and carefully handle its two-loop flow and control parameters. We show that reinforcement learning can make a significant contribution to facilitating solver tuning and to speeding up the optimization process. Numerical experiments demonstrate that, after lightweight training, the learned policy generalizes well to different problem classes with varying dimensions and to various solver configurations.