Data-Dependent Regret Bounds for Constrained MABs

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper establishes the first data-dependent regret bound for constrained multi-armed bandits (MABs). It considers an adversarial loss setting with stochastic hard constraints. To address this, the authors propose a novel algorithm integrating online mirror descent, constraint drift compensation, and adaptive confidence intervals. They rigorously prove that the dynamic regret decomposes into two fundamental terms, "constraint-satisfaction hardness" and "unconstrained learning complexity," and derive a matching information-theoretic lower bound. The upper bound is tight, matching this lower bound, and constitutes the first provably optimal data-dependent regret bound for constrained MABs. Notably, when constraints are satisfied with high probability, the bound significantly improves upon the classical $\widetilde{\mathcal{O}}(\sqrt{T})$ guarantee. Furthermore, the framework extends to soft constraints, and the paper introduces new analytical tools for handling stochastic constraint violations.
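The paper's exact algorithm is not reproduced here, but its online-mirror-descent component follows a standard template: maintain a distribution over arms, sample an arm, form an importance-weighted loss estimate, and apply a multiplicative (negative-entropy OMD) update. The sketch below shows that generic template only; the function name, learning rate, and loss interface are illustrative assumptions, and the constraint-compensation and confidence-interval machinery from the paper is omitted.

```python
import numpy as np

def omd_exp_weights(loss_fn, n_arms, horizon, lr=0.1, rng=None):
    """Generic bandit OMD with the negative-entropy regularizer
    (exponential weights). A sketch of the unconstrained core only,
    not the paper's full constrained algorithm."""
    rng = rng or np.random.default_rng(0)
    weights = np.ones(n_arms)
    total_loss = 0.0
    for t in range(horizon):
        probs = weights / weights.sum()
        arm = rng.choice(n_arms, p=probs)
        loss = loss_fn(t, arm)            # observed loss, assumed in [0, 1]
        total_loss += loss
        est = np.zeros(n_arms)
        est[arm] = loss / probs[arm]      # unbiased importance-weighted estimate
        weights *= np.exp(-lr * est)      # multiplicative OMD update
    return total_loss, weights
```

In the constrained setting studied by the paper, the sampling distribution would additionally be restricted (with high probability) to a feasible region estimated from the stochastic constraint observations; that projection step is what this sketch leaves out.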

📝 Abstract
This paper initiates the study of data-dependent regret bounds in constrained MAB settings. These bounds depend on the sequence of losses that characterize the problem instance. Thus, they can be much smaller than classical $\widetilde{\mathcal{O}}(\sqrt{T})$ regret bounds, while being equivalent to them in the worst case. Despite this, data-dependent regret bounds have been completely overlooked in constrained MAB settings. The goal of this paper is to answer the following question: Can data-dependent regret bounds be derived in the presence of constraints? We answer this question affirmatively in constrained MABs with adversarial losses and stochastic constraints. Specifically, our main focus is on the most challenging and natural settings with hard constraints, where the learner must ensure that the constraints are always satisfied with high probability. We design an algorithm with a regret bound consisting of two data-dependent terms. The first term captures the difficulty of satisfying the constraints, while the second one encodes the complexity of learning independently of the presence of constraints. We also prove a lower bound showing that these two terms are not artifacts of our specific approach and analysis, but rather the fundamental components that inherently characterize the complexities of the problem. Finally, in designing our algorithm, we also derive some novel results in the related (and easier) soft constraints settings, which may be of independent interest.
Problem

Research questions and friction points this paper is trying to address.

Study data-dependent regret bounds in constrained MABs
Derive regret bounds with adversarial losses and stochastic constraints
Design algorithm ensuring hard constraints with high probability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-dependent regret bounds for constrained MABs
Algorithm with two data-dependent regret terms
Novel results in soft constraints settings