Improved Dimension Dependence for Bandit Convex Optimization with Gradient Variations

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the excessive dimension dependence in gradient-variation online learning under bandit feedback. Through a refined analysis of the non-consecutive gradient variation, a fundamental quantity when gradient variation meets bandit feedback, the paper improves the dimension dependence for both convex and strongly convex functions under two-point feedback, compared with the best known results. It also establishes the first gradient-variation bound for one-point bandit linear optimization over hyper-rectangular domains. The refined analysis further yields other favorable problem-dependent guarantees, such as gradient-variance and small-loss regrets, and extends to more challenging settings: the first gradient-variation dynamic and universal regret bounds for two-point bandit convex optimization, and fast convergence rates in bandit games.

📝 Abstract
Gradient-variation online learning has drawn increasing attention due to its deep connections to game theory, optimization, etc. It has been studied extensively in the full-information setting, but is underexplored with bandit feedback. In this work, we focus on gradient variation in Bandit Convex Optimization (BCO) with two-point feedback. By proposing a refined analysis on the non-consecutive gradient variation, a fundamental quantity in gradient variation with bandits, we improve the dimension dependence for both convex and strongly convex functions compared with the best known results (Chiang et al., 2013). Our improved analysis for the non-consecutive gradient variation also implies other favorable problem-dependent guarantees, such as gradient-variance and small-loss regrets. Beyond the two-point setup, we demonstrate the versatility of our technique by achieving the first gradient-variation bound for one-point bandit linear optimization over hyper-rectangular domains. Finally, we validate the effectiveness of our results in more challenging tasks such as dynamic/universal regret minimization and bandit games, establishing the first gradient-variation dynamic and universal regret bounds for two-point BCO and fast convergence rates in bandit games.
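The abstract centers on BCO with two-point feedback, where the learner queries the loss at two nearby points per round to build a gradient estimate. As background, a minimal sketch of the standard spherical-sampling two-point estimator from this line of work (not the paper's exact algorithm; function names are illustrative):

```python
import math
import random

def two_point_grad_estimate(f, x, delta, rng):
    """One-sample two-point gradient estimator.

    Samples u uniformly from the unit sphere and returns
    (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u,
    an unbiased estimate of the gradient of the delta-smoothed f.
    """
    d = len(x)
    # Sample u uniformly on the unit sphere by normalizing a Gaussian vector.
    u = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(ui * ui for ui in u))
    u = [ui / norm for ui in u]
    # Query the loss at the two perturbed points (the "two-point feedback").
    xp = [xi + delta * ui for xi, ui in zip(x, u)]
    xm = [xi - delta * ui for xi, ui in zip(x, u)]
    scale = d * (f(xp) - f(xm)) / (2.0 * delta)
    return [scale * ui for ui in u]
```

For a quadratic loss the finite difference is exact, so averaging many samples recovers the true gradient; the `d` factor (which compensates for `E[u uᵀ] = I/d`) is the source of the dimension dependence that the paper works to reduce.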
Problem

Research questions and friction points this paper is trying to address.

Bandit Convex Optimization
Gradient Variation
Dimension Dependence
Regret Minimization
Bandit Feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

bandit convex optimization
gradient variation
dimension dependence
two-point feedback
dynamic regret
Hang Yu
National Key Laboratory for Novel Software Technology, Nanjing University, China, School of Artificial Intelligence, Nanjing University, China
Yu-Hu Yan
Nanjing University
Machine Learning
Peng Zhao
Nanjing University
Online Learning, Machine Learning, Artificial Intelligence