🤖 AI Summary
This paper addresses sequential decision-making in non-stationary multi-armed bandits (NS-MAB) with time-varying, action-independent rewards, bridging a theoretical gap in Thompson sampling (TS) under non-stationarity. We propose two sliding-window TS algorithms: BETA-SWTS, based on Beta posterior inference, and γ-SWGTS, based on Gaussian posterior inference. Crucially, we develop the first unified regret analysis framework applicable to arbitrary non-stationary environments—including both abrupt changes and gradual drifts. Our framework introduces a novel error metric that rectifies fundamental limitations of prior analyses, enabling the first explicit, tight regret bounds for both algorithms across these two dynamic regimes. Theoretical analysis proves their superiority over classical approaches such as SW-UCB. Empirical evaluations confirm their robustness and faster convergence across diverse non-stationary benchmarks.
📝 Abstract
$\textit{Restless Bandits}$ describe sequential decision-making problems in which the rewards evolve over time independently of the actions taken by the policy-maker. It has been shown that classical bandit algorithms fail when the underlying environment changes, making clear that more challenging scenarios call for specifically crafted algorithms. In this paper, extending and correcting the work by \cite{trovo2020sliding}, we analyze two Thompson-Sampling-inspired algorithms, namely $\texttt{BETA-SWTS}$ and $\texttt{$\gamma$-SWGTS}$, introduced to face the additional complexity given by the non-stationary nature of the settings; in particular, we derive a general formulation of the regret in $\textit{any}$ arbitrary restless environment for both Bernoulli and Subgaussian rewards, and, through the introduction of new quantities, we investigate which contribution lies at the deeper foundations of the error made by the algorithms. Finally, we specialize the general formulation to the regret for two of the most common non-stationary settings: the $\textit{Abruptly Changing}$ and the $\textit{Smoothly Changing}$ environments.
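To make the sliding-window idea concrete, here is a minimal sketch of a Thompson sampler with Beta posteriors in the spirit of BETA-SWTS: each arm's posterior is computed only from the rewards observed in the last $\tau$ rounds, so stale observations from before an environment change are forgotten. Names such as `SlidingWindowTS` and `tau` are illustrative choices, not identifiers from the paper, and the actual algorithm and its analysis should be taken from the paper itself.

```python
import random
from collections import deque


class SlidingWindowTS:
    """Sliding-window Thompson sampling with Beta(1, 1) priors (sketch)."""

    def __init__(self, n_arms, tau, seed=None):
        self.n_arms = n_arms
        # Only the last `tau` (arm, reward) pairs are kept; older pulls
        # fall out of the deque and stop influencing the posteriors.
        self.window = deque(maxlen=tau)
        self.rng = random.Random(seed)

    def select_arm(self):
        # Count per-arm successes and failures inside the window only.
        succ = [0] * self.n_arms
        fail = [0] * self.n_arms
        for arm, reward in self.window:
            if reward:
                succ[arm] += 1
            else:
                fail[arm] += 1
        # Sample from each arm's Beta(1 + s, 1 + f) posterior and play
        # the arm with the largest sample.
        samples = [
            self.rng.betavariate(1 + succ[a], 1 + fail[a])
            for a in range(self.n_arms)
        ]
        return max(range(self.n_arms), key=samples.__getitem__)

    def update(self, arm, reward):
        self.window.append((arm, reward))
```

Compared to stationary Thompson sampling, the only change is that posterior counts are taken over a finite window rather than the whole history, which is what lets the sampler track abrupt or smooth reward changes at the cost of some extra variance.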