Truly Adapting to Adversarial Constraints in Constrained MABs

📅 2026-02-16
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the multi-armed bandit problem in non-stationary environments where losses vary arbitrarily and constraints exhibit varying degrees of adversariality. The authors propose adaptive algorithms that jointly control regret and constraint violation under both full-information and bandit feedback. The key contribution is achieving, for the first time, optimal $\widetilde{O}(\sqrt{T} + C)$ regret and $\widetilde{O}(\sqrt{T} + C)$ positive constraint violation when the constraints are stochastic and the losses adversarial, where $C$ quantifies the non-stationarity of the constraints. When the constraints are observed only via bandit feedback, the algorithm attains $\widetilde{O}(\sqrt{T} + C)$ positive violation and $\widetilde{O}(\sqrt{T} + C\sqrt{T})$ regret. Notably, the guarantees degrade smoothly as the constraints become more adversarial, yielding a unified framework that handles multiple feedback scenarios.

πŸ“ Abstract
We study the constrained variant of the \emph{multi-armed bandit} (MAB) problem, in which the learner aims not only at minimizing the total loss incurred during the learning dynamic, but also at controlling the violation of multiple \emph{unknown} constraints, under both \emph{full} and \emph{bandit feedback}. We consider a non-stationary environment that subsumes both stochastic and adversarial models and where, at each round, both losses and constraints are drawn from distributions that may change arbitrarily over time. In such a setting, it is provably not possible to guarantee both sublinear regret and sublinear violation. Accordingly, prior work has mainly focused either on settings with stochastic constraints or on relaxing the benchmark with fully adversarial constraints (\emph{e.g.}, via competitive ratios with respect to the optimum). We provide the first algorithms that achieve optimal rates of regret and \emph{positive} constraint violation when the constraints are stochastic while the losses may vary arbitrarily, and that simultaneously yield guarantees that degrade smoothly with the degree of adversariality of the constraints. Specifically, under \emph{full feedback} we propose an algorithm attaining $\widetilde{\mathcal{O}}(\sqrt{T}+C)$ regret and $\widetilde{\mathcal{O}}(\sqrt{T}+C)$ {positive} violation, where $C$ quantifies the amount of non-stationarity in the constraints. We then show how to extend these guarantees when only bandit feedback is available for the losses. Finally, when \emph{bandit feedback} is available for the constraints, we design an algorithm achieving $\widetilde{\mathcal{O}}(\sqrt{T}+C)$ {positive} violation and $\widetilde{\mathcal{O}}(\sqrt{T}+C\sqrt{T})$ regret.
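To make the interaction model concrete, the following is a minimal sketch of the constrained-MAB protocol under full feedback, using a generic Lagrangian exponential-weights learner with a projected dual update. This is not the paper's algorithm: the number of arms, step sizes, loss/cost means, and the single-constraint threshold are all hypothetical choices for illustration.

```python
import numpy as np

# Sketch of the constrained-MAB protocol (full feedback) with a Lagrangian
# exponential-weights learner. All parameters below are assumed, not from
# the paper.
rng = np.random.default_rng(0)
K, T = 3, 2000
eta, mu = 0.05, 0.05            # primal / dual step sizes (hypothetical)
w = np.zeros(K)                 # log-weights over the K arms
lam = 0.0                       # dual variable for the single constraint

loss_means = np.array([0.2, 0.5, 0.8])   # stochastic losses in [0, 1]
cost_means = np.array([0.9, 0.4, 0.1])   # constraint costs in [0, 1]
threshold = 0.5                          # constraint: expected cost <= 0.5

total_loss = 0.0
positive_violation = 0.0
for t in range(T):
    p = np.exp(w - w.max())
    p /= p.sum()
    arm = rng.choice(K, p=p)
    # Full feedback: the entire loss and cost vectors are revealed.
    loss = (rng.uniform(size=K) < loss_means).astype(float)
    cost = (rng.uniform(size=K) < cost_means).astype(float)
    total_loss += loss[arm]
    # Positive violation counts only the per-round excess over the threshold.
    positive_violation += max(0.0, cost[arm] - threshold)
    # Primal update on the Lagrangian loss + lam * (cost - threshold):
    w -= eta * (loss + lam * (cost - threshold))
    # Projected dual ascent keeps the multiplier non-negative:
    lam = max(0.0, lam + mu * (cost[arm] - threshold))
```

The dual variable grows while the played arms exceed the cost threshold, which tilts the exponential-weights distribution toward feasible arms; the paper's actual algorithms additionally adapt to the degree of adversariality $C$ in the constraints.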
Problem

Research questions and friction points this paper is trying to address.

constrained multi-armed bandit
adversarial constraints
non-stationary environment
regret minimization
constraint violation
Innovation

Methods, ideas, or system contributions that make the work stand out.

constrained multi-armed bandits
adversarial constraints
non-stationary environments
adaptive regret
positive constraint violation