🤖 AI Summary
This paper studies the Gaussian bandit problem with general side information: pulling an arm yields a noisy observation of its reward while simultaneously revealing, via an a priori known side-information matrix, noisy observations of the other arms' rewards; the matrix entries quantify the fidelity of this information leakage. The model unifies several special cases, including standard bandits, full feedback, and graph-structured feedback. The authors first derive a novel information-theoretic lower bound based on linear programming and prove its asymptotic tightness. They then propose the first adaptive sampling algorithm matching this bound, integrating Gaussian likelihood ratio tests with an information-structure-aware exploration strategy, and establish that it achieves the minimal information-theoretically possible regret, asymptotically, across all considered structural settings.
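As a concrete reading of the feedback model (a minimal sketch with hypothetical parameter choices — e.g., treating each fidelity entry as an inverse noise scale, which is one plausible Gaussian parameterization, not necessarily the paper's exact one), a single pull of arm `a` might generate a vector of noisy side observations:

```python
import numpy as np

def pull(a, mu, F, rng):
    """One pull of arm `a`: a noisy reward for `a` plus side observations of
    every other arm b, with noise scaled by the fidelity F[a, b] (hypothetical
    parameterization: higher fidelity -> lower noise; F[a, b] = 0 -> no signal).
    Returns (obs, mask): obs[b] is the observation (NaN if uninformative),
    mask[b] marks which entries carry information."""
    mask = F[a] > 0
    noise = rng.standard_normal(len(mu))
    obs = np.where(mask, mu + noise / np.maximum(F[a], 1e-12), np.nan)
    return obs, mask

# Hypothetical 3-arm instance: true means and side-information matrix F,
# where F[a, b] is the fidelity of what pulling arm a reveals about arm b.
mu = np.array([1.0, 0.7, 0.5])
F = np.array([[1.0, 0.6, 0.0],
              [0.6, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
rng = np.random.default_rng(0)
obs, mask = pull(0, mu, F, rng)  # pulling arm 0 reveals arms 0 and 1, not arm 2
```

Note how the extreme cases recover the named settings: an identity matrix `F` gives standard bandits, an all-ones matrix gives full feedback, and a 0/1 adjacency matrix gives graph-structured feedback.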
📝 Abstract
We study the problem of Gaussian bandits with general side information, first introduced by Wu, Szepesvari, and Gyorgy. In this setting, playing an arm reveals information about other arms according to an arbitrary, a priori known side-information matrix: each entry of this matrix encodes the fidelity of the information that the "row" arm reveals about the "column" arm. In the case of Gaussian noise, this model subsumes standard bandits, full feedback, and graph-structured feedback as special cases. We first construct an LP-based, asymptotic, instance-dependent lower bound on the regret; the LP optimizes the cost (regret) required to reliably estimate the suboptimality gap of each arm. This lower bound motivates our main contribution: the first known asymptotically optimal algorithm for this general setting.
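The shape of such an LP can be sketched as follows — a minimal illustration, assuming that each pull of arm `a` contributes Fisher information proportional to `F[a, b]**2` about arm `b`'s mean (the paper's exact constraint set differs; the instance, means, and constants here are all hypothetical). The LP minimizes the regret rate of an allocation of pulls subject to gathering enough information to certify each suboptimal arm's gap:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-arm instance: means and side-information (fidelity) matrix F,
# where F[a, b] is the fidelity of what pulling arm a reveals about arm b.
mu = np.array([1.0, 0.7, 0.5])
F = np.array([[1.0, 0.6, 0.0],
              [0.6, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
gaps = mu.max() - mu                # suboptimality gaps Delta_a
subopt = np.flatnonzero(gaps > 0)   # arms whose gap must be estimated reliably

# LP over an allocation c >= 0 of (normalized) expected pull counts:
#   minimize   sum_a c_a * Delta_a                      (regret rate)
#   subject to sum_a c_a * F[a,b]^2 >= 2 / Delta_b^2    for each suboptimal b
# (an illustrative Gaussian information constraint, not the paper's exact one).
A_ub = -(F[:, subopt] ** 2).T       # linprog enforces A_ub @ c <= b_ub
b_ub = -2.0 / gaps[subopt] ** 2
res = linprog(c=gaps, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.fun)                      # minimal regret rate for this instance
```

In this toy instance the optimal arm (gap 0) is pulled freely to cover part of the information constraints, so the LP value reflects only the cost of certifying the remaining arms — exactly the kind of structure-dependent saving that distinguishes this bound from the classical bandit lower bound.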