On Finding Small Hyper-Gradients in Bilevel Optimization: Hardness Results and Improved Analysis

📅 2023-01-02
🏛️ Annual Conference on Computational Learning Theory
📈 Citations: 7
Influential: 3
🤖 AI Summary
This work investigates the theoretical hardness and algorithmic efficiency of finding stationary points of the hyper-objective in nonconvex–convex and nonconvex–nonconvex bilevel optimization, under the Polyak–Łojasiewicz (PL) condition—rather than strong convexity—on the lower-level objective. We first establish an impossibility result: for zero-respecting algorithms, computing a hyper-stationary point is fundamentally intractable in the nonconvex–convex setting. Under the PL condition, we break the reliance on strong convexity and derive tighter hypergradient convergence complexity bounds. We propose a novel analytical framework unifying implicit function differentiation, hypergradient estimation, and first-order optimization. This yields complexity guarantees of $\tilde{\mathcal{O}}(\varepsilon^{-2})$, $\tilde{\mathcal{O}}(\varepsilon^{-4})$, and $\tilde{\mathcal{O}}(\varepsilon^{-6})$ for deterministic, partially stochastic, and fully stochastic settings, respectively—substantially improving upon existing nonconvex bilevel optimization methods.
📝 Abstract
Bilevel optimization reveals the inner structure of otherwise oblique optimization problems, such as hyperparameter tuning, neural architecture search, and meta-learning. A common goal in bilevel optimization is to minimize a hyper-objective that implicitly depends on the solution set of the lower-level function. Although this hyper-objective approach is widely used, its theoretical properties have not been thoroughly investigated in cases where the lower-level functions lack strong convexity. In this work, we first provide hardness results to show that the goal of finding stationary points of the hyper-objective for nonconvex-convex bilevel optimization can be intractable for zero-respecting algorithms. Then we study a class of tractable nonconvex-nonconvex bilevel problems when the lower-level function satisfies the Polyak-Łojasiewicz (PL) condition. We show a simple first-order algorithm can achieve better complexity bounds of $\tilde{\mathcal{O}}(\epsilon^{-2})$, $\tilde{\mathcal{O}}(\epsilon^{-4})$ and $\tilde{\mathcal{O}}(\epsilon^{-6})$ in the deterministic, partially stochastic, and fully stochastic settings, respectively.
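For context, the hypergradient referenced above is classically obtained via implicit function differentiation. The identity below is the standard form under the stronger assumption that the lower-level solution $y^*(x)$ is unique and the lower-level Hessian is invertible (e.g. strong convexity); under the weaker PL condition studied in the paper, the solution set need not be a singleton, which is part of what the analysis addresses.

```latex
% Standard implicit-differentiation hypergradient for
% \varphi(x) = f(x, y^*(x)), assuming y^*(x) is unique and
% \nabla^2_{yy} g(x, y^*(x)) is invertible:
\nabla \varphi(x) = \nabla_x f\bigl(x, y^*(x)\bigr)
  - \nabla^2_{xy} g\bigl(x, y^*(x)\bigr)
    \bigl[\nabla^2_{yy} g\bigl(x, y^*(x)\bigr)\bigr]^{-1}
    \nabla_y f\bigl(x, y^*(x)\bigr)
```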
Problem

Research questions and friction points this paper is trying to address.

Analyzing hyper-objective optimization complexity without strong convexity
Providing hardness results for nonconvex-convex bilevel optimization
Developing efficient algorithms for PL-condition nonconvex-nonconvex problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardness analysis for nonconvex-convex bilevel optimization
Polyak-Łojasiewicz condition enables tractable nonconvex-nonconvex problems
First-order algorithm achieves improved complexity bounds
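To make the double-loop structure of first-order bilevel methods concrete, here is a minimal sketch on a toy quadratic problem. This is an illustration of the generic hypergradient-descent template, not the paper's algorithm: the problem, step sizes, and iteration counts are all hypothetical choices, and the lower level here is strongly convex (hence PL), so the implicit-differentiation hypergradient is exact.

```python
import numpy as np

# Toy bilevel problem (illustrative only; not the paper's algorithm):
#   lower level: g(x, y) = 0.5 * ||y - A x||^2   (strongly convex in y, so PL holds)
#   upper level: f(x, y) = 0.5 * ||y - b||^2
# The hyper-objective is phi(x) = f(x, y*(x)) with y*(x) = A x.
A = np.array([[2.0, 0.5], [0.3, 1.5]])
b = np.array([1.0, -1.0])

def inner_solve(x, steps=50, lr=0.5):
    """Approximate y*(x) by gradient descent on the lower-level objective."""
    y = np.zeros_like(b)
    for _ in range(steps):
        y -= lr * (y - A @ x)  # grad_y g(x, y) = y - A x
    return y

def hypergradient(x, y):
    """Implicit-differentiation hypergradient:
    grad phi = grad_x f - grad_xy g @ inv(grad_yy g) @ grad_y f.
    Here grad_x f = 0, grad_xy g = -A^T, grad_yy g = I, grad_y f = y - b.
    """
    return A.T @ (y - b)

x = np.zeros(2)
for _ in range(200):  # outer first-order loop on the hyper-objective
    y = inner_solve(x)
    x -= 0.1 * hypergradient(x, y)

print(0.5 * np.linalg.norm(A @ x - b) ** 2)  # hyper-objective value, near zero
```

The deterministic $\tilde{\mathcal{O}}(\varepsilon^{-2})$ rate in the summary counts exactly these first-order oracle calls; the stochastic settings replace the exact inner and outer gradients with noisy estimates.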
Le-Yu Chen
IIIS, Tsinghua University; Shanghai Qizhi Institute
Jing Xu
IIIS, Tsinghua University
J. Zhang
IIIS, Tsinghua University; Shanghai AI Lab; Shanghai Qizhi Institute