Contextual Bandits with Stage-wise Constraints

📅 2024-01-15
🏛️ arXiv.org
📈 Citations: 9
Influential: 1
📄 PDF
🤖 AI Summary
This paper studies contextual bandits with stage-wise constraints, where the action chosen at each round must satisfy a constraint either with high probability or in expectation, while maximizing cumulative reward. The authors propose a novel scaling mechanism that sets the confidence-set radii for rewards and costs with different scaling factors, and develop a unified framework that handles single or multiple constraints, whether linear or non-linear in structure. Leveraging UCB-style exploration and eluder dimension analysis, they establish a $\tilde{O}(\sqrt{T})$ regret bound together with a lower bound for the constrained problem. Empirical simulations validate the theoretical guarantees, and the algorithms extend to complex, non-linear constraint settings.

📝 Abstract
We study contextual bandits in the presence of a stage-wise constraint (a constraint at each round), when the constraint must be satisfied both with high probability and in expectation. Obviously the setting where the constraint is in expectation is a relaxation of the one with high probability. We start with the linear case where both the contextual bandit problem (reward function) and the stage-wise constraint (cost function) are linear. In each of the high probability and in expectation settings, we propose an upper-confidence bound algorithm for the problem and prove a $T$-round regret bound for it. Our algorithms balance exploration and constraint satisfaction using a novel idea that scales the radii of the reward and cost confidence sets with different scaling factors. We also prove a lower bound for this constrained problem, show how our algorithms and analyses can be extended to multiple constraints, and provide simulations to validate our theoretical results. In the high probability setting, we describe the minimum requirements for the action set in order for our algorithm to be tractable. In the setting that the constraint is in expectation, we further specialize our results to multi-armed bandits and propose a computationally efficient algorithm for this setting with regret analysis. Finally, we extend our results to the case where the reward and cost functions are both non-linear. We propose an algorithm for this case and prove a regret bound for it that characterizes the function class complexity by the eluder dimension.
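The abstract's core mechanism, optimistic action selection with separately scaled confidence radii for reward and cost, can be sketched for the linear high-probability setting. The function below is an illustrative reconstruction, not the paper's exact algorithm; the names (`select_action`, `alpha_r`, `alpha_c`, `tau`) and the fallback behavior are assumptions.

```python
import numpy as np

def select_action(features, theta_hat, mu_hat, V_inv, alpha_r, alpha_c, tau):
    """Pick the most optimistic action whose cost upper bound stays within budget.

    features : (K, d) array, one feature vector per candidate action
    theta_hat: (d,) ridge estimate of the reward parameter
    mu_hat   : (d,) ridge estimate of the cost parameter
    V_inv    : (d, d) inverse of the regularized design matrix
    alpha_r, alpha_c : separate confidence-radius scalings for reward and cost
    tau      : stage-wise cost budget
    """
    # Elliptical confidence width ||phi(a)||_{V^{-1}} for each action.
    widths = np.sqrt(np.einsum("kd,de,ke->k", features, V_inv, features))
    ucb_reward = features @ theta_hat + alpha_r * widths  # optimism on reward
    ucb_cost = features @ mu_hat + alpha_c * widths       # pessimism on cost
    feasible = ucb_cost <= tau                            # high-prob. feasibility
    if not feasible.any():
        return None  # in practice, fall back to a known-safe action
    idx = np.flatnonzero(feasible)
    return idx[np.argmax(ucb_reward[idx])]
```

The point the abstract emphasizes is that `alpha_r` and `alpha_c` are chosen differently, which trades exploration on the reward side against conservatism on the cost side.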
Problem

Research questions and friction points this paper is trying to address.

Addressing contextual bandits with stage-wise constraints
Satisfying constraints both with high probability and in expectation
Extending solutions from linear to non-linear reward-cost functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Upper-confidence bound algorithms with separately scaled reward and cost confidence sets
Regret upper and lower bounds, extended to multiple constraints
Eluder-dimension analysis of function class complexity in the non-linear case
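The differentiated-scaling contribution reduces, concretely, to computing two different confidence radii. A standard self-normalized radius from linear bandit theory is sketched below; the constants and parameter values are illustrative assumptions, not necessarily the paper's exact choices.

```python
import math

def confidence_radius(t, d, sigma, lam, param_bound, delta):
    """Self-normalized confidence radius for ridge regression after t rounds
    in a d-dimensional linear bandit (illustrative constants):

        beta_t = sigma * sqrt(d * log((1 + t/lam) / delta)) + sqrt(lam) * S
    """
    return (sigma * math.sqrt(d * math.log((1.0 + t / lam) / delta))
            + math.sqrt(lam) * param_bound)

# Different noise levels and failure probabilities for the reward and cost
# estimators yield the two distinct scaling factors used to inflate the
# reward and cost confidence sets (hypothetical parameter values).
alpha_r = confidence_radius(t=100, d=5, sigma=1.0, lam=1.0, param_bound=1.0, delta=0.01)
alpha_c = confidence_radius(t=100, d=5, sigma=0.5, lam=1.0, param_bound=1.0, delta=0.001)
```

Both radii shrink the allowed optimism as `delta` decreases; using a smaller `sigma` or larger `delta` on one side is what makes the two scalings differ.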