🤖 AI Summary
This work proposes Natural Hypergradient Descent (NHGD) to address the high computational cost of hypergradient estimation in bilevel optimization, which stems from the need to invert the Hessian of the inner-level problem. NHGD exploits the statistical structure of the inner problem by replacing the Hessian with the empirical Fisher information matrix, and introduces a parallelized approximation framework that updates the Fisher inverse concurrently with the inner optimization. Theoretical analysis shows that NHGD matches the sample complexity and high-probability error bounds of existing methods while substantially reducing computational overhead. Empirical results confirm its superior scalability and practical performance on large-scale bilevel learning tasks.
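The core substitution can be illustrated with a minimal NumPy sketch. The standard implicit hypergradient has the form ∇Φ(x) = ∇ₓf − ∇²ₓᵧg · [∇²ᵧᵧg]⁻¹ · ∇ᵧf; below, the inner Hessian ∇²ᵧᵧg is replaced by the empirical Fisher matrix built from per-sample inner gradients. All arrays here are randomly generated stand-ins (the damping term and dimensions are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_outer, d_inner, n = 3, 5, 200  # illustrative dimensions

# Hypothetical per-sample inner-level gradients at the inner optimum y*(x);
# in practice these would come from the stochastic inner solver.
per_sample_grads = rng.normal(size=(n, d_inner))

# Empirical Fisher: average outer product of per-sample gradients,
# used as a surrogate for the inner Hessian ∇²_{yy} g.
fisher = per_sample_grads.T @ per_sample_grads / n
fisher += 1e-3 * np.eye(d_inner)  # small damping for stability (assumption)

# Remaining pieces of the hypergradient formula (toy values here):
grad_x_f = rng.normal(size=d_outer)                 # ∇_x f
cross_hessian = rng.normal(size=(d_outer, d_inner)) # ∇²_{xy} g
grad_y_f = rng.normal(size=d_inner)                 # ∇_y f

# Hypergradient with the Fisher replacing the inner Hessian:
# ∇Φ(x) ≈ ∇_x f − ∇²_{xy} g · F⁻¹ · ∇_y f
v = np.linalg.solve(fisher, grad_y_f)
hypergrad = grad_x_f - cross_hessian @ v
```

The point of the substitution is that the Fisher is assembled from gradients the inner solver computes anyway, whereas the exact Hessian requires second-order information.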
📝 Abstract
In this work, we propose Natural Hypergradient Descent (NHGD), a new method for solving bilevel optimization problems. To address the computational bottleneck in hypergradient estimation, namely the need to compute or approximate the Hessian inverse, we exploit the statistical structure of the inner optimization problem and use the empirical Fisher information matrix as an asymptotically consistent surrogate for the Hessian. This design enables a parallel optimize-and-approximate framework in which the Hessian-inverse approximation is updated synchronously with the stochastic inner optimization, reusing gradient information at negligible additional cost. Our main theoretical contribution establishes high-probability error bounds and sample complexity guarantees for NHGD that match those of state-of-the-art optimize-then-approximate methods, while significantly reducing computational overhead. Empirical evaluations on representative bilevel learning tasks further demonstrate the practical advantages of NHGD, highlighting its scalability and effectiveness in large-scale machine learning settings.
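One way the "updated synchronously ... reusing gradient information at negligible additional cost" step can work is a rank-1 (Sherman-Morrison) update of the inverse Fisher after each stochastic inner gradient, which costs O(d²) per step instead of an O(d³) re-inversion. The sketch below is an illustrative instance of this idea under an exponential-moving-average Fisher estimate; the function name, decay rate `beta`, and update rule are assumptions, not necessarily the exact scheme used in the paper:

```python
import numpy as np

def sherman_morrison_update(F_inv, g, beta=0.01):
    """Given F_inv = inv(F), return inv((1 - beta) * F + beta * g g^T).

    This lets the inverse-Fisher estimate track a running empirical
    Fisher alongside the inner SGD loop at O(d^2) per step, reusing
    the stochastic gradient g that the inner solver already computed.
    """
    alpha = 1.0 - beta
    A_inv = F_inv / alpha            # inv(alpha * F)
    Ag = A_inv @ g
    denom = 1.0 + beta * (g @ Ag)
    return A_inv - beta * np.outer(Ag, Ag) / denom

# Toy usage: track the inverse of an EMA Fisher over random gradients.
rng = np.random.default_rng(1)
d, beta = 4, 0.01
F = np.eye(d)        # running Fisher estimate (direct, for comparison)
F_inv = np.eye(d)    # its inverse, maintained incrementally
for _ in range(50):
    g = rng.normal(size=d)
    F = (1.0 - beta) * F + beta * np.outer(g, g)
    F_inv = sherman_morrison_update(F_inv, g, beta)
```

After the loop, `F_inv` agrees with `np.linalg.inv(F)` up to floating-point error, which is the property that makes the optimize-and-approximate phases safe to run in lockstep.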