Change Detection-Based Procedures for Piecewise Stationary MABs: A Modular Approach

📅 2025-01-02
🤖 AI Summary
This paper studies the piecewise-stationary multi-armed bandit (MAB) problem, where reward distributions undergo abrupt changes at unknown change-points but remain stationary between them. To address the lack of modularity and unified theoretical analysis in existing approaches, the authors propose a modular framework that decouples change-point detection (CPD) from stationary base bandit algorithms. They identify the requirements a change detector and a base bandit algorithm must satisfy for their composition to admit a unified asymptotic regret analysis, and derive composable regret upper bounds under sub-Gaussian reward assumptions. Based on this framework, they design modular CDB (Change-Detection-based Bandit) procedures that attain order-optimal regret under a condition on the separation between consecutive change-points. Simulations compare the CDB procedures favorably against existing non-stationary MAB algorithms.

📝 Abstract
Conventional Multi-Armed Bandit (MAB) algorithms are designed for stationary environments, where the reward distributions associated with the arms do not change with time. In many applications, however, the environment is more accurately modeled as being nonstationary. In this work, piecewise stationary MAB (PS-MAB) environments are investigated, in which the reward distributions associated with a subset of the arms change at some change-points and remain stationary between change-points. Our focus is on the asymptotic analysis of PS-MABs, for which practical algorithms based on change detection (CD) have been previously proposed. Our goal is to modularize the design and analysis of such CD-based Bandit (CDB) procedures. To this end, we identify the requirements for stationary bandit algorithms and change detectors in a CDB procedure that are needed for the modularization. We assume that the rewards are sub-Gaussian. Under this assumption and a condition on the separation of the change-points, we show that the analysis of CDB procedures can indeed be modularized, so that regret bounds can be obtained in a unified manner for various combinations of change detectors and bandit algorithms. Through this analysis, we develop new modular CDB procedures that are order-optimal. We compare the performance of our modular CDB procedures with various other methods in simulations.
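The abstract describes the CDB template: a stationary base bandit algorithm runs as usual, a change detector monitors each arm's reward stream, and both are restarted whenever a change is declared. The following is a minimal sketch of that composition, not the authors' exact procedure: it pairs UCB1 with a simple two-sided CUSUM-style detector, and the forced-exploration rate `alpha`, the detector's `drift`/`threshold` parameters, and the warmup length are illustrative choices, not values from the paper.

```python
import math
import random


class UCB1:
    """Stationary base bandit; restarted whenever a change is detected."""

    def __init__(self, n_arms):
        self.n_arms = n_arms
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms
        self.t = 0

    def select(self):
        self.t += 1
        for a in range(self.n_arms):
            if self.counts[a] == 0:  # play each arm once first
                return a
        return max(
            range(self.n_arms),
            key=lambda a: self.means[a]
            + math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]


class CusumDetector:
    """Two-sided CUSUM-style change detector for one arm's reward stream."""

    def __init__(self, drift=0.1, threshold=2.0, warmup=20):
        self.drift, self.threshold, self.warmup = drift, threshold, warmup
        self.reset()

    def reset(self):
        self.n = 0
        self.baseline = 0.0
        self.g_pos = 0.0
        self.g_neg = 0.0

    def step(self, reward):
        self.n += 1
        if self.n <= self.warmup:  # estimate the pre-change mean first
            self.baseline += (reward - self.baseline) / self.n
            return False
        dev = reward - self.baseline
        self.g_pos = max(0.0, self.g_pos + dev - self.drift)
        self.g_neg = max(0.0, self.g_neg - dev - self.drift)
        return self.g_pos > self.threshold or self.g_neg > self.threshold


def run_cdb(reward_fn, horizon, n_arms, alpha=0.05, seed=0):
    """Modular CDB loop: forced exploration + base bandit + per-arm detectors."""
    rng = random.Random(seed)
    bandit = UCB1(n_arms)
    detectors = [CusumDetector() for _ in range(n_arms)]
    restarts = []
    for t in range(horizon):
        # light forced exploration keeps every detector supplied with samples
        arm = rng.randrange(n_arms) if rng.random() < alpha else bandit.select()
        reward = reward_fn(t, arm, rng)
        bandit.update(arm, reward)
        if detectors[arm].step(reward):
            bandit = UCB1(n_arms)  # restart the base bandit from scratch
            for d in detectors:
                d.reset()
            restarts.append(t)
    return restarts
```

The modularity is the point of the design: `UCB1` and `CusumDetector` could each be swapped for any stationary bandit algorithm or change detector meeting the paper's requirements, and the regret analysis composes accordingly.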
Problem

Research questions and friction points this paper is trying to address.

Multi-Armed Bandits
Piecewise-Stationary Environments
Adaptive Strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular Framework
Sub-Gaussian Rewards
Order-Optimal Procedure Design
Yu-Han Huang
ECE and CSL, The Grainger College of Engineering, University of Illinois, Urbana-Champaign, Urbana, IL 61801-2332, USA
Argyrios Gerogiannis
Graduate Student, UIUC
Reinforcement Learning · Machine Learning
S. Bose
ECE and CSL, The Grainger College of Engineering, University of Illinois, Urbana-Champaign, Urbana, IL 61801-2332, USA
V. Veeravalli
ECE and CSL, The Grainger College of Engineering, University of Illinois, Urbana-Champaign, Urbana, IL 61801-2332, USA