Stochastic Multi-Armed Bandits with Limited Control Variates

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the stochastic multi-armed bandit problem in settings where auxiliary information, in the form of control variates, is available only intermittently across rounds, a regime in which existing methods struggle to exploit that information to reduce cumulative regret. The authors propose UCB-LCV, an algorithm that adapts to the partial availability of control variates by constructing an upper confidence bound that dynamically combines reward observations with control variate estimates. When no control variates are available, UCB-LCV reduces to UCB-NORMAL, a variant tailored to normally distributed rewards. The theoretical analysis extends to broader reward distributions, and empirical evaluations show that UCB-LCV significantly outperforms existing approaches under limited auxiliary information, while UCB-NORMAL also remains competitive in the standard setting.

📝 Abstract
Motivated by wireless networks, where interference or channel state estimates provide partial insight into throughput, we study a variant of the classical stochastic multi-armed bandit problem in which the learner has limited access to auxiliary information. Recent work has shown that such auxiliary information, when available as control variates, can be used to obtain tighter confidence bounds, leading to lower regret. However, existing works assume that control variates are available in every round, which may not be realistic in several real-life scenarios. To address this, we propose UCB-LCV, an upper confidence bound (UCB) based algorithm that effectively combines the estimators obtained from rewards and control variates. When there are no control variates, UCB-LCV reduces to a novel algorithm that we call UCB-NORMAL, which outperforms existing algorithms for the standard MAB setting with normally distributed rewards. Finally, we discuss variants of the proposed UCB-LCV that apply to general distributions and experimentally demonstrate that UCB-LCV outperforms existing bandit algorithms.
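The paper's exact estimator is not reproduced here, but the classical control variate adjustment it builds on can be sketched as follows: given paired observations of a reward X and a correlated control variate Z with known mean, the adjusted estimate X̄ − c(Z̄ − E[Z]) with the plug-in coefficient c = Cov(X, Z)/Var(Z) has lower variance than the sample mean alone. The function and simulation below are illustrative, not the authors' UCB-LCV implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_mean_estimate(rewards, cvs, cv_mean):
    """Control-variate-adjusted mean estimate for one arm.

    rewards: observed rewards X_i
    cvs:     paired control variate observations Z_i with known mean cv_mean
    Returns X_bar - c * (Z_bar - E[Z]), with the plug-in optimal
    coefficient c = Cov(X, Z) / Var(Z).
    """
    x_bar, z_bar = rewards.mean(), cvs.mean()
    cov = np.cov(rewards, cvs, ddof=1)   # 2x2 sample covariance matrix
    c = cov[0, 1] / cov[1, 1]
    return x_bar - c * (z_bar - cv_mean)

# Synthetic arm: Z ~ N(0, 1) is observable side information,
# and the reward X = mu + 0.9 * Z + noise is strongly correlated with it.
mu = 1.0
z = rng.normal(0.0, 1.0, 10_000)
x = mu + 0.9 * z + rng.normal(0.0, 0.3, 10_000)

plain = x.mean()                                # ordinary sample mean
adjusted = cv_mean_estimate(x, z, cv_mean=0.0)  # variance-reduced estimate
```

With a correlation this strong, the adjusted estimator's variance is roughly 1 − ρ² times that of the plain sample mean, which is what makes the resulting confidence bounds tighter when control variates are available.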
Problem

Research questions and friction points this paper is trying to address.

stochastic multi-armed bandits
limited control variates
auxiliary information
regret minimization
wireless networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic Multi-Armed Bandits
Control Variates
Upper Confidence Bound
Limited Auxiliary Information
Regret Minimization
🔎 Similar Papers
2024-10-02 · International Conference on Machine Learning · Citations: 1
2024-02-05 · arXiv.org · Citations: 1