🤖 AI Summary
This work investigates the theoretical hardness and algorithmic efficiency of finding stationary points of the hyperobjective in nonconvex–convex and nonconvex–nonconvex bilevel optimization, under the Polyak–Łojasiewicz (PL) condition—rather than strong convexity—on the lower-level objective. We first establish an impossibility result: for zero-respecting algorithms, computing a hyperstationary point is fundamentally intractable in the nonconvex–convex setting. Under the PL condition, we break the reliance on strong convexity and derive tighter hypergradient convergence complexity bounds. We propose a novel analytical framework unifying implicit function differentiation, hypergradient estimation, and first-order optimization. This yields complexity guarantees of $\tilde{\mathcal{O}}(\varepsilon^{-2})$, $\tilde{\mathcal{O}}(\varepsilon^{-4})$, and $\tilde{\mathcal{O}}(\varepsilon^{-6})$ for deterministic, partially stochastic, and fully stochastic settings, respectively—substantially improving upon existing nonconvex bilevel optimization methods.
📝 Abstract
Bilevel optimization reveals the inner structure of otherwise oblique optimization problems, such as hyperparameter tuning, neural architecture search, and meta-learning. A common goal in bilevel optimization is to minimize a hyper-objective that implicitly depends on the solution set of the lower-level function. Although this hyper-objective approach is widely used, its theoretical properties have not been thoroughly investigated in cases where the lower-level functions lack strong convexity. In this work, we first provide hardness results to show that the goal of finding stationary points of the hyper-objective for nonconvex-convex bilevel optimization can be intractable for zero-respecting algorithms. Then we study a class of tractable nonconvex-nonconvex bilevel problems when the lower-level function satisfies the Polyak-Łojasiewicz (PL) condition. We show a simple first-order algorithm can achieve better complexity bounds of $\tilde{\mathcal{O}}(\epsilon^{-2})$, $\tilde{\mathcal{O}}(\epsilon^{-4})$ and $\tilde{\mathcal{O}}(\epsilon^{-6})$ in the deterministic, partially stochastic, and fully stochastic settings, respectively.
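For readers unfamiliar with the two central objects above, the following sketch gives their standard textbook forms; the notation ($f$ for the upper level, $g$ for the lower level, $\mu$ for the PL constant) is the conventional one in the bilevel literature and is not quoted verbatim from this paper:

```latex
% Hyper-objective: upper-level value evaluated on the lower-level
% solution set (optimistic formulation, a common convention)
\varphi(x) \;=\; \min_{y \in Y^*(x)} f(x, y),
\qquad
Y^*(x) \;=\; \operatorname*{arg\,min}_{y} \, g(x, y).

% Polyak-Lojasiewicz (PL) condition on the lower level, with mu > 0:
% the squared gradient norm dominates the optimality gap. This holds
% for some nonconvex g, so it is strictly weaker than strong convexity.
\frac{1}{2}\, \bigl\| \nabla_y g(x, y) \bigr\|^2
\;\ge\; \mu \Bigl( g(x, y) - \min_{y'} g(x, y') \Bigr)
\quad \text{for all } y.
```

Under the PL condition the lower-level minimizer need not be unique, which is why the hyper-objective is defined over the whole solution set $Y^*(x)$ rather than a single implicit function $y^*(x)$.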