🤖 AI Summary
Conventional window-smoothed regret analysis fails in online bilevel optimization (OBO) because both the upper- and lower-level objectives shift over time. Method: We propose the first window-free stochastic bilevel regret framework, built on a novel unbiased search direction and synchronized updates of the upper- and lower-level variables. Our approach integrates zeroth- and first-order stochastic optimization techniques, using linear system solvers and zeroth-order surrogates to estimate higher-order information (Hessians, Jacobians, and gradients), which extends applicability to zeroth-order black-box settings. Contribution/Results: We establish the first sublinear stochastic bilevel regret bound and significantly reduce the query complexity of hypergradient estimation. Empirical evaluation on online parametric loss tuning and black-box adversarial attacks demonstrates superior dynamic responsiveness, optimization efficiency, and stability over existing methods.
📝 Abstract
Online bilevel optimization (OBO) is a powerful framework for machine learning problems where both outer and inner objectives evolve over time, requiring dynamic updates. Current OBO approaches rely on deterministic *window-smoothed* regret minimization, which may not accurately reflect system performance when functions change rapidly. In this work, we introduce a novel search direction and show that both first- and zeroth-order (ZO) stochastic OBO algorithms leveraging this direction achieve sublinear *stochastic bilevel regret without window smoothing*. Beyond these guarantees, our framework enhances efficiency by: (i) reducing oracle dependence in hypergradient estimation, (ii) updating inner and outer variables alongside the linear system solution, and (iii) employing ZO-based estimation of Hessians, Jacobians, and gradients.
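To give a feel for the ZO-based estimation mentioned above, here is a minimal sketch of a standard two-point zeroth-order gradient estimator using random Gaussian directions. This is a generic illustration of the technique class, not the paper's exact scheme; the function names and sample counts are illustrative assumptions.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_samples=2000, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages directional finite differences along random Gaussian
    directions u: (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u.
    This is an unbiased estimator of the gradient of a Gaussian-smoothed
    surrogate of f, so only function evaluations (no gradients) are needed.
    Illustrative sketch only; not the paper's specific estimator.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_samples

# Sanity check on f(x) = ||x||^2, whose true gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
est = zo_gradient(lambda v: v @ v, x, rng=0)
```

The same finite-difference idea extends to Hessian-vector and Jacobian-vector products, which is what makes fully black-box hypergradient estimation possible in the ZO setting.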