🤖 AI Summary
This paper studies the Lagrangian Index Policy (LIP) for restless multi-armed bandits (RMABs) under the long-run average-reward criterion and compares it with the Whittle Index Policy (WIP), both heuristic policies known to be asymptotically optimal under natural conditions. While the two policies perform similarly in most instances, LIP remains near-optimal in cases where WIP performs poorly. The authors give a new proof of asymptotic optimality for homogeneous arms as the number of arms grows, based on exchangeability and de Finetti's theorem. For the model-free setting, they propose online reinforcement learning schemes for LIP, both tabular and neural-network-based, which require significantly less memory than the analogous schemes for WIP. For the restart model, which covers optimal web crawling and minimization of the weighted age of information, the Lagrangian index is derived in closed form.
📝 Abstract
We study the Lagrangian Index Policy (LIP) for restless multi-armed bandits with long-run average reward. In particular, we compare the performance of LIP with that of the Whittle Index Policy (WIP), both heuristic policies known to be asymptotically optimal under certain natural conditions. Although their performance is very similar in most cases, LIP continues to perform very well in the cases where WIP performs poorly. We then propose reinforcement learning algorithms, both tabular and NN-based, to obtain online learning schemes for LIP in the model-free setting. The proposed reinforcement learning schemes for LIP require significantly less memory than the analogous schemes for WIP. We calculate the Lagrangian index analytically for the restart model, which describes optimal web crawling and the minimization of the weighted age of information. We also give a new proof of asymptotic optimality in the case of homogeneous bandits as the number of arms goes to infinity, based on exchangeability and de Finetti's theorem.
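The tabular learning scheme mentioned above can be illustrated with a minimal sketch: average-reward (RVI) Q-learning on a single arm with a fixed Lagrange multiplier, after which the index of a state is read off as the advantage of the active action, Q(s, active) − Q(s, passive). The 4-state restart chain, rewards, multiplier value, and step sizes below are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-arm restart-type chain (hypothetical, for illustration only).
N_STATES, ACTIONS = 4, 2
# P[a][s] -> next-state distribution. Passive: state drifts upward;
# active: restart to state 0 (as in web-crawling / age-of-information models).
P = np.array([
    [[0.7, 0.3, 0.0, 0.0],
     [0.0, 0.7, 0.3, 0.0],
     [0.0, 0.0, 0.7, 0.3],
     [0.0, 0.0, 0.0, 1.0]],
    [[1.0, 0.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0]],
])
R = np.array([0.0, 1.0, 2.0, 3.0])  # reward depends only on the current state
LAM = 0.5                           # fixed Lagrange multiplier (assumed given)

# RVI Q-learning on the lambda-penalised single-arm MDP: one Q-table for a
# single multiplier value, rather than one per candidate subsidy as for WIP.
Q = np.zeros((N_STATES, ACTIONS))
alpha = 0.1                         # step size
s = 0
for t in range(100_000):
    # epsilon-greedy exploration
    a = int(rng.integers(ACTIONS)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
    s_next = int(rng.choice(N_STATES, p=P[a][s]))
    r = R[s] - LAM * a              # activation cost enters through the multiplier
    # Relative-value update: subtract the value of a reference state (state 0)
    # in place of the unknown average reward.
    Q[s][a] += alpha * (r + Q[s_next].max() - Q[0].max() - Q[s][a])
    s = s_next

# Lagrangian index of each state: advantage of being active there.
lip_index = Q[:, 1] - Q[:, 0]
print(lip_index)
```

Note the memory argument in miniature: the table has `N_STATES x 2` entries for one fixed multiplier, whereas a Whittle-index scheme must track Q-values across a grid of subsidy values.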