🤖 AI Summary
This work addresses the performance limitations of hypergradient computation in bilevel optimization, which often arise from neglecting curvature information. Building on the implicit function theorem, the study integrates Kronecker-factored Approximate Curvature (KFAC) into hypergradient estimation, approximating inverse Hessian-vector products so that essential curvature is retained at low computational and memory overhead. The proposed method achieves a better performance-efficiency trade-off than existing approaches such as conjugate gradient and Neumann-series methods, consistently outperforms gradient unrolling across tasks including meta-learning and AI safety, and scales effectively to large models such as BERT.
📝 Abstract
Bilevel optimization (BO) applies to many machine learning problems. Scaling BO, however, requires repeatedly computing hypergradients, which involves solving inverse Hessian-vector products (IHVPs). In practice, these operations are often approximated using crude surrogates such as one-step gradient unrolling or identity/short Neumann expansions, which discard curvature information. We build on implicit-function-theorem-based algorithms and propose to incorporate Kronecker-factored approximate curvature (KFAC), yielding curvature-aware hypergradients with a better performance-efficiency trade-off than Conjugate Gradient (CG) or Neumann methods, while consistently outperforming unrolling. We evaluate this approach across diverse tasks, including meta-learning and AI safety problems. On models up to BERT, we show that curvature information is valuable at scale, and KFAC can provide it with only modest memory and runtime overhead. Our implementation is available at https://github.com/liaodisen/NeuralBo.
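The key computational trick behind KFAC-style IHVPs can be illustrated in a few lines. The sketch below is not the paper's implementation; it only demonstrates the Kronecker identity that makes such approximations cheap: if a curvature block factors as H = A ⊗ B (as KFAC assumes per layer), then H⁻¹v can be computed with two small solves instead of inverting the full (nm)×(nm) matrix. The factors `A` and `B` here are hypothetical random SPD matrices standing in for KFAC's per-layer statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Kronecker factors standing in for KFAC's per-layer
# curvature blocks (made symmetric positive definite for invertibility).
n, m = 4, 3
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((m, m)); B = B @ B.T + m * np.eye(m)

H = np.kron(A, B)               # full curvature block, (n*m) x (n*m)
v = rng.standard_normal(n * m)  # vector for the inverse Hessian-vector product

# Exact IHVP via a dense solve on the full matrix: O((nm)^3).
exact = np.linalg.solve(H, v)

# Kronecker-factored IHVP: (A ⊗ B)^{-1} vec(V) = vec(B^{-1} V A^{-1})
# with column-major vec and symmetric A (so A^{-T} = A^{-1}).
# Only an m x m solve and an n x n inverse are needed; the full
# (nm) x (nm) matrix is never formed.
V = v.reshape((m, n), order="F")        # un-vec (column-major)
X = np.linalg.solve(B, V) @ np.linalg.inv(A)
kfac = X.reshape(-1, order="F")         # re-vec

print(np.allclose(exact, kfac))  # the two routes agree
```

The same identity is what lets Kronecker-factored methods keep curvature information at roughly the memory cost of storing the small factors, rather than the full Hessian block.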