Bandit Convex Optimisation

📅 2024-02-09
🏛️ arXiv.org
📈 Citations: 10
Influential: 2
📄 PDF
🤖 AI Summary
This paper studies constrained bandit convex optimization (i.e., zero-order convex optimization), where online decisions are made using only function value feedback. To address varying regularity assumptions—such as strong convexity and smoothness—as well as diverse feasible set geometries, we propose a novel algorithmic framework integrating cutting-plane methods, interior-point techniques, and continuous exponential weights. Crucially, our approach circumvents explicit gradient estimation, thereby substantially reducing query complexity and cumulative regret. Theoretical analysis establishes tighter regret upper bounds across multiple settings—for instance, improving from $O(T^{3/4})$ to $O(T^{2/3})$—with particularly pronounced gains in high-dimensional, nonsmooth, or non-strongly-convex regimes. These results advance the theoretical efficiency and practical applicability of zero-order optimization algorithms and introduce a new analytical paradigm for gradient-free online learning under structured (e.g., convex body) constraints.

Technology Category

Application Category

📝 Abstract
Bandit convex optimisation is a fundamental framework for studying zeroth-order convex optimisation. These notes cover the many tools used for this problem, including cutting plane methods, interior point methods, continuous exponential weights, gradient descent and online Newton step. The nuances between the many assumptions and setups are explained. Although there is not much truly new here, some existing tools are applied in novel ways to obtain new algorithms. A few bounds are improved in minor ways.
Problem

Research questions and friction points this paper is trying to address.

Studying zeroth-order convex optimisation problems
Explaining nuances between assumptions and setups
Applying existing tools for new algorithm development
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cutting plane methods for convex optimisation
Online Newton step algorithm improvements
Novel application of continuous exponential weights
🔎 Similar Papers
No similar papers found.