Asymptotically-Optimal Gaussian Bandits with Side Observations

📅 2025-05-15
🏛️ International Conference on Machine Learning
📈 Citations: 4
Influential: 0
🤖 AI Summary
This paper studies the Gaussian bandit problem with general side information: pulling an arm yields a noisy observation of its own reward while simultaneously revealing, through an a priori known side-information matrix, noisy observations of the other arms' rewards; each matrix entry quantifies the fidelity of this information leakage. The model unifies several special cases, including standard bandits, full feedback, and graph-structured feedback. The paper first derives a novel information-theoretic lower bound based on linear programming and proves its asymptotic tightness. It then proposes the first adaptive sampling algorithm that matches this bound, combining Gaussian likelihood-ratio tests with an information-structure-aware exploration strategy, and shows that the algorithm achieves asymptotically optimal instance-dependent regret across all of these structural settings.

📝 Abstract
We study the problem of Gaussian bandits with general side information, as first introduced by Wu, Szepesvari, and Gyorgy. In this setting, the play of an arm reveals information about other arms, according to an arbitrary a priori known side information matrix: each element of this matrix encodes the fidelity of the information that the "row" arm reveals about the "column" arm. In the case of Gaussian noise, this model subsumes standard bandits, full-feedback, and graph-structured feedback as special cases. In this work, we first construct an LP-based asymptotic instance-dependent lower bound on the regret. The LP optimizes the cost (regret) required to reliably estimate the suboptimality gap of each arm. This LP lower bound motivates our main contribution: the first known asymptotically optimal algorithm for this general setting.
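The feedback model described in the abstract can be sketched in a few lines. This is an illustrative simulation, not the paper's code: it assumes a fidelity value `f` in `(0, 1]` corresponds to observing the arm's mean through Gaussian noise of standard deviation `1/f` (the paper's exact parameterization may differ), and `0` means no information. The names `pull`, `mu`, and `fidelity` are hypothetical.

```python
import random


def pull(mu, fidelity, arm, rng=random):
    """Simulate one round of the Gaussian side-observation model (a sketch).

    mu:        list of true mean rewards, one per arm.
    fidelity:  the a priori known side-information matrix; fidelity[arm][j]
               encodes how much information playing `arm` reveals about arm j.
    Returns a dict mapping each arm j with fidelity > 0 to a noisy
    observation of its reward.  Assumption: fidelity f is modeled as
    observation noise of standard deviation 1/f.
    """
    obs = {}
    for j, f in enumerate(fidelity[arm]):
        if f > 0:
            obs[j] = mu[j] + rng.gauss(0, 1.0 / f)
    return obs
```

Under this encoding, the identity matrix recovers the standard bandit (playing an arm observes only that arm), the all-ones matrix recovers full feedback, and a 0/1 matrix recovers graph-structured feedback.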
Problem

Research questions and friction points this paper is trying to address.

Study Gaussian bandits with general side observations
Develop LP-based regret lower bound for arm suboptimality
Propose asymptotically optimal algorithm for this setting
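The LP-based lower bound mentioned above can be sketched schematically. A plausible form, consistent with instance-dependent lower bounds for graph-feedback bandits (the paper's exact constraints and constants may differ), minimizes the regret cost of a sampling allocation subject to every suboptimal arm's gap being reliably estimable from the side observations:

$$\min_{n \ge 0} \; \sum_{a} \Delta_a \, n_a \quad \text{s.t.} \quad \sum_{b} n_b \, \frac{A_{b,a}^2}{\sigma^2} \;\ge\; \frac{c}{\Delta_a^2} \quad \text{for every suboptimal arm } a,$$

where $n_a$ is the (asymptotic, log-normalized) number of plays of arm $a$, $\Delta_a$ its suboptimality gap, $A_{b,a}$ the fidelity with which playing $b$ reveals $a$'s reward, and $c$ a constant. The objective is the regret incurred by the allocation; each constraint requires enough aggregate information about arm $a$ to distinguish it from the optimal arm.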
Innovation

Methods, ideas, or system contributions that make the work stand out.

LP-based asymptotic instance-dependent lower bound
Asymptotically optimal algorithm design
General side information matrix utilization
Alexia Atsidakou
University of Texas at Austin
O. Papadigenopoulos
Department of Computer Science, University of Texas at Austin
C. Caramanis
Department of Electrical and Computer Engineering, University of Texas at Austin
S. Sanghavi
Department of Electrical and Computer Engineering, University of Texas at Austin
S. Shakkottai
Department of Electrical and Computer Engineering, University of Texas at Austin