🤖 AI Summary
Existing dynamic regret bounds for non-stationary online learning under strongly curved losses (e.g., squared or logistic loss) are overly loose, largely because of suboptimal dimension dependence, previously as high as $d^{10/3}$.
Method: This paper systematically introduces the *mixability* property into non-stationary online learning for the first time, proposing a clean, KKT-free analytical framework. It integrates exponential weighting with fixed-share updates to develop a mixability-driven technique for dynamic regret analysis.
Contribution/Results: The paper derives a dynamic regret bound of $O(d\, T^{1/3} P_T^{2/3} \log T)$, where $d$ is the dimension, $T$ the horizon, and $P_T$ the path length of the comparators. Crucially, the linear dependence on $d$ improves upon the prior best $d^{10/3}$, breaking a longstanding bottleneck in non-stationary convex optimization. This establishes a new paradigm for non-stationary online optimization with strongly curved losses.
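The algorithmic core described above, exponential weighting combined with a fixed-share update, is standard in the experts setting and easy to sketch. The snippet below is a minimal illustration over a finite expert set (the paper itself applies this machinery to a continuous decision space via mixability); the function name, parameter values, and interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fixed_share_exp_weights(losses, eta=1.0, alpha=0.1):
    """Exponential weighting with fixed-share updates over N experts.

    losses: (T, N) array of per-round expert losses.
    eta:    learning rate for the exponential-weight update.
    alpha:  share rate; mixing toward the uniform distribution keeps
            every expert's weight bounded away from zero, which is what
            lets the method track a changing best expert (the source of
            dynamic-regret guarantees in non-stationary environments).
    Returns the (T, N) sequence of weight vectors played each round.
    """
    T, N = losses.shape
    w = np.full(N, 1.0 / N)          # start from the uniform prior
    history = np.empty((T, N))
    for t in range(T):
        history[t] = w
        # Exponential-weight update on the observed losses.
        w = w * np.exp(-eta * losses[t])
        w /= w.sum()
        # Fixed-share step: blend with uniform so no weight collapses.
        w = alpha / N + (1.0 - alpha) * w
    return history
```

With a piecewise-stationary loss sequence (one expert is best, then another), the weights shift to the new best expert within a few rounds, which is exactly the tracking behavior the fixed-share step buys over plain exponential weighting.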
📝 Abstract
Non-stationary online learning has drawn much attention in recent years. Despite considerable progress, dynamic regret minimization has primarily focused on convex functions, leaving losses with stronger curvature (e.g., squared or logistic loss) underexplored. In this work, we address this gap by showing that the regret can be substantially improved by leveraging the concept of mixability, a property that generalizes exp-concavity to effectively capture loss curvature. Let $d$ denote the dimensionality and $P_T$ the path length of the comparators, which reflects the environmental non-stationarity. We demonstrate that an exponential-weight method with fixed-share updates achieves an $\mathcal{O}(d T^{1/3} P_T^{2/3} \log T)$ dynamic regret for mixable losses, improving upon the best-known $\mathcal{O}(d^{10/3} T^{1/3} P_T^{2/3} \log T)$ result (Baby and Wang, 2021) in $d$. More importantly, this improvement arises from a simple yet powerful analytical framework that exploits mixability and avoids the Karush-Kuhn-Tucker-based analysis required by existing work.