🤖 AI Summary
This work addresses the multi-armed bandit problem under stochastic action availability and side observations, where selecting an arm yields additional information about other arms according to a known graph structure. The authors propose the UCB-LP-A policy, which, to the best of our knowledge, is the first to incorporate linear programming into this setting. By modeling observation dependencies via a graph and integrating this structure into an upper confidence bound (UCB) framework, the method dynamically computes an optimal sampling distribution over the available arms at each round to balance exploration and exploitation. Theoretical analysis establishes a regret upper bound for the proposed strategy, and empirical evaluations demonstrate that it significantly outperforms existing algorithms that either ignore side observations or disregard availability constraints.
📝 Abstract
We study the stochastic multi-armed bandit (MAB) problem where an underlying network structure enables side observations across related actions. We use a bipartite graph to link actions to a set of unknowns, such that selecting an action reveals observations for all the unknowns it is connected to. While previous works rely on the assumption that all actions are permanently accessible, we investigate the more practical setting of stochastic availability, where the set of feasible actions (the "activation set") varies dynamically in each round. This framework models real-world systems with both structural dependencies and volatility, such as social networks where users provide side information about their peers' preferences yet are not always online to be queried. To address this challenge, we propose UCB-LP-A, a novel policy that leverages a Linear Programming (LP) approach to optimize the exploration-exploitation trade-off under stochastic availability. Unlike standard network bandit algorithms that assume constant access, UCB-LP-A computes an optimal sampling distribution over the realizable activation sets, ensuring that the necessary observations are gathered using only the currently active arms. We derive a theoretical upper bound on the regret of our policy, characterizing the impact of both the network structure and the activation probabilities. Finally, we demonstrate through numerical simulations that UCB-LP-A significantly outperforms existing heuristics that ignore either the side information or the availability constraints.
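To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of per-round LP such a policy could solve. The paper's exact LP formulation is not reproduced here; this sketch assumes a binary arm-to-unknown observation matrix, per-arm UCB indices, a stochastic availability mask, and an illustrative exploration floor `gamma` forcing each unknown to be observed with some minimum probability. All names (`ucb_lp_sampling`, `obs_graph`, `gamma`) are placeholders for illustration, not the authors' API.

```python
import numpy as np
from scipy.optimize import linprog


def ucb_lp_sampling(ucb, obs_graph, available, gamma=0.1):
    """Sampling distribution over currently available arms (illustrative sketch).

    Maximizes the expected UCB index of the sampled arm, subject to:
      - the distribution being supported on available arms only, and
      - each unknown being observed with probability at least `gamma`
        (via the arms connected to it in the bipartite graph).

    ucb:       length-n array of UCB indices, one per arm.
    obs_graph: (n_arms, n_unknowns) 0/1 matrix; entry (a, u) = 1 iff
               playing arm a yields an observation of unknown u.
    available: length-n boolean mask, the round's activation set.
    """
    obs_graph = np.asarray(obs_graph, dtype=float)
    n_arms, n_unknowns = obs_graph.shape

    c = -np.asarray(ucb, dtype=float)          # linprog minimizes, so negate
    A_ub = -obs_graph.T                         # coverage: sum_{a~u} p_a >= gamma
    b_ub = -gamma * np.ones(n_unknowns)
    A_eq = np.ones((1, n_arms))                 # p is a probability distribution
    b_eq = [1.0]
    bounds = [(0.0, 1.0) if available[a] else (0.0, 0.0) for a in range(n_arms)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    if not res.success:
        # Infeasible (e.g. some unknown has no available observer this round):
        # fall back to uniform over the activation set.
        p = np.array(available, dtype=float)
        return p / p.sum()
    return res.x


# Toy round: 3 arms, 3 unknowns, arm 1 is inactive this round.
G = np.array([[1, 1, 0],    # arm 0 observes unknowns 0 and 1
              [0, 1, 1],    # arm 1 observes unknowns 1 and 2
              [0, 0, 1]])   # arm 2 observes unknown 2
p = ucb_lp_sampling(ucb=[0.9, 0.5, 0.4], obs_graph=G,
                    available=[True, False, True], gamma=0.1)
```

In this toy round the LP puts mass 0.9 on the high-UCB arm 0 and the minimum 0.1 on arm 2, since arm 2 is the only available observer of unknown 2; an arm is then drawn from `p` and every connected unknown is updated. This illustrates the trade-off the abstract describes: exploitation through the UCB objective, exploration enforced through the coverage constraints, both restricted to the round's activation set.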