🤖 AI Summary
To address the computational bottleneck of inefficient Hessian-vector products in hypergradient computation for bilevel optimization, this paper introduces, for the first time, Lanczos iteration and Krylov subspace projection into this domain. We propose a novel method that dynamically constructs a low-dimensional subspace to efficiently approximate hypergradients. Our approach reduces the original high-dimensional Hessian system inversion to solving a small-scale tridiagonal linear system, achieving a theoretical convergence rate of O(ε⁻¹) and establishing a provably convergent optimization framework. Experiments on synthetic data and two deep learning tasks—hyperparameter optimization and meta-learning—demonstrate that our method accelerates hypergradient computation by 2–5× over baseline approaches while significantly reducing memory overhead. The proposed framework thus delivers superior efficiency, numerical stability, and practical applicability.
📝 Abstract
Bilevel optimization, with broad applications in machine learning, has an intricate hierarchical structure. Gradient-based methods have emerged as a common approach to large-scale bilevel problems. However, the computation of the hyper-gradient, which involves a Hessian inverse vector product, confines the efficiency and is regarded as a bottleneck. To circumvent the inverse, we construct a sequence of low-dimensional approximate Krylov subspaces with the aid of the Lanczos process. As a result, the constructed subspace is able to dynamically and incrementally approximate the Hessian inverse vector product with less effort and thus leads to a favorable estimate of the hyper-gradient. Moreover, we propose a provable subspace-based framework for bilevel problems where one central step is to solve a small-size tridiagonal linear system. To the best of our knowledge, this is the first time that subspace techniques are incorporated into bilevel optimization. This successful trial not only enjoys $mathcal{O}(epsilon^{-1})$ convergence rate but also demonstrates efficiency in a synthetic problem and two deep learning tasks.