🤖 AI Summary
This work addresses the challenge of excessive dimension dependence in online learning with bandit feedback under gradient-variation measures. Through a refined analysis of the non-consecutive gradient variation, combined with techniques from bandit convex optimization, dynamic regret analysis, and game theory, the paper achieves improved dimension dependence for both convex and strongly convex functions under two-point feedback. It also establishes the first gradient-variation bound for one-point bandit linear optimization. Furthermore, the study provides gradient-variation guarantees for dynamic regret, universal regret, and bandit games, improving on the best known results across multiple settings while simultaneously implying other favorable problem-dependent guarantees such as gradient-variance and small-loss bounds.
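For context, the gradient variation that such bounds scale with is commonly defined as follows (this is the standard definition from the full-information gradient-variation literature, not a quantity introduced by this paper):

```latex
V_T \;=\; \sum_{t=2}^{T} \sup_{\mathbf{x} \in \mathcal{X}} \bigl\| \nabla f_t(\mathbf{x}) - \nabla f_{t-1}(\mathbf{x}) \bigr\|_2^2 ,
```

where $f_1, \dots, f_T$ are the online loss functions over the feasible domain $\mathcal{X}$. With bandit feedback the learner queries different points at rounds $t-1$ and $t$, which is why the analysis instead hinges on a non-consecutive version of this quantity.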
📝 Abstract
Gradient-variation online learning has drawn increasing attention due to its deep connections to game theory, optimization, etc. It has been studied extensively in the full-information setting, but is underexplored with bandit feedback. In this work, we focus on gradient variation in Bandit Convex Optimization (BCO) with two-point feedback. By proposing a refined analysis on the non-consecutive gradient variation, a fundamental quantity in gradient variation with bandits, we improve the dimension dependence for both convex and strongly convex functions compared with the best known results (Chiang et al., 2013). Our improved analysis for the non-consecutive gradient variation also implies other favorable problem-dependent guarantees, such as gradient-variance and small-loss regrets. Beyond the two-point setup, we demonstrate the versatility of our technique by achieving the first gradient-variation bound for one-point bandit linear optimization over hyper-rectangular domains. Finally, we validate the effectiveness of our results in more challenging tasks such as dynamic/universal regret minimization and bandit games, establishing the first gradient-variation dynamic and universal regret bounds for two-point BCO and fast convergence rates in bandit games.
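To make the two-point feedback model concrete, the sketch below shows the classic two-point gradient estimator from the BCO literature (e.g., Agarwal et al., 2010): at each round the learner observes the loss at two perturbed points and forms a gradient estimate from their difference. This is standard background, not the paper's refined algorithm; the function names and parameters here are illustrative.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta, rng):
    """Classic two-point gradient estimator (standard BCO background).

    Queries f at x + delta*u and x - delta*u for a random unit vector u,
    then returns (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u.
    For smooth f this is an unbiased estimate of the gradient of a
    smoothed version of f; for linear f it is unbiased for the gradient itself.
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)          # uniform direction on the unit sphere
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Usage: for a linear loss f(x) = a @ x the estimator is unbiased for a,
# so averaging many independent samples recovers the true gradient.
rng = np.random.default_rng(0)
a = np.array([1.0, -2.0, 0.5])
f = lambda x: a @ x
x0 = np.zeros(3)
est = np.mean(
    [two_point_gradient_estimate(f, x0, delta=0.01, rng=rng) for _ in range(50_000)],
    axis=0,
)
```

The key property exploited by two-point methods is that the estimator's variance stays bounded as `delta` shrinks (the `1/delta` factor cancels in the difference), in contrast to one-point estimators whose variance blows up.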