On Instability of Minimax Optimal Optimism-Based Bandit Algorithms

📅 2025-11-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of statistical inference in multi-armed bandits (MAB), focusing on the asymptotic normality of optimistic minimax-optimal algorithms. Building upon the Lai–Wei stability criterion, we establish, for the first time, a general structural condition under which optimistic algorithms violate this criterion—revealing a fundamental tension between minimax optimality and statistical stability. We theoretically prove that several prominent minimax-optimal algorithms—including MOSS, Anytime-MOSS, KL-MOSS, ADA-UCB, and KL-UCB++—fail the Lai–Wei stability condition, resulting in sample-mean distributions that significantly deviate from normality. Numerical simulations corroborate their asymptotic non-normality. Our work systematically characterizes the intrinsic mechanism behind statistical failure under optimism and formally establishes that, under the standard MAB setting, simultaneously achieving minimax-optimal regret and asymptotic normality is likely infeasible.

📝 Abstract
Statistical inference from data generated by multi-armed bandit (MAB) algorithms is challenging due to their adaptive, non-i.i.d. nature. A classical manifestation is that sample averages of arm rewards under bandit sampling may fail to satisfy a central limit theorem. Lai and Wei's stability condition provides a sufficient, and essentially necessary, criterion for asymptotic normality in bandit problems. While the celebrated Upper Confidence Bound (UCB) algorithm satisfies this stability condition, it is not minimax optimal, raising the question of whether minimax optimality and statistical stability can be achieved simultaneously. In this paper, we analyze the stability properties of a broad class of bandit algorithms based on the optimism principle. We establish general structural conditions under which such algorithms violate the Lai-Wei stability criterion. As a consequence, we show that widely used minimax-optimal UCB-style algorithms, including MOSS, Anytime-MOSS, Vanilla-MOSS, ADA-UCB, OC-UCB, KL-MOSS, KL-UCB++, KL-UCB-SWITCH, and Anytime KL-UCB-SWITCH, are unstable. We further complement our theoretical results with numerical simulations demonstrating that, in all these cases, the sample means fail to exhibit asymptotic normality. Overall, our findings suggest a fundamental tension between stability and minimax-optimal regret, raising the question of whether it is possible to design bandit algorithms that achieve both. Understanding whether such simultaneously stable and minimax-optimal strategies exist remains an important open direction.
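To make the stability question concrete, here is a minimal, self-contained Python sketch, not taken from the paper: the index formulas are the standard UCB1 and MOSS forms from the bandit literature, and the function names, simulation parameters, and Bernoulli setup are illustrative assumptions. It samples a two-armed Bernoulli bandit with an optimistic index policy and computes the standardized sample mean of one arm, which is exactly the statistic whose asymptotic normality is at issue.

```python
import math
import random

def ucb1_index(mean, n, t):
    """Classical UCB1 index at round t for an arm pulled n times.
    UCB1 satisfies the Lai-Wei stability condition but is not minimax optimal."""
    return mean + math.sqrt(2.0 * math.log(t) / n)

def moss_index(mean, n, T, K):
    """Standard MOSS index with horizon T and K arms.
    MOSS is minimax optimal; the paper shows it violates Lai-Wei stability."""
    return mean + math.sqrt(max(0.0, math.log(T / (K * n))) / n)

def run_bandit(index_fn, arm_means, T, rng):
    """Play a Bernoulli bandit for T rounds with an optimistic index policy.
    index_fn(mean, n, t) -> index value; each arm is initialized with one pull."""
    K = len(arm_means)
    counts = [0] * K
    sums = [0.0] * K
    for t in range(1, T + 1):
        if t <= K:  # pull each arm once to initialize
            a = t - 1
        else:       # then always pull the arm with the largest index
            a = max(range(K), key=lambda i: index_fn(sums[i] / counts[i], counts[i], t))
        reward = 1.0 if rng.random() < arm_means[a] else 0.0
        counts[a] += 1
        sums[a] += reward
    return counts, sums

if __name__ == "__main__":
    # Monte Carlo: standardized sample mean of arm 0 under MOSS sampling.
    mu, T, K, reps = [0.5, 0.5], 2000, 2, 200
    rng = random.Random(0)
    zs = []
    for _ in range(reps):
        counts, sums = run_bandit(lambda m, n, t: moss_index(m, n, T, K), mu, T, rng)
        n0 = counts[0]
        # sigma = 0.5 for Bernoulli(0.5) rewards
        zs.append(math.sqrt(n0) * (sums[0] / n0 - mu[0]) / 0.5)
    print("fraction with |z| > 1.96:", sum(abs(z) > 1.96 for z in zs) / reps)
```

If the sampling scheme were Lai-Wei stable, the standardized statistic would be approximately N(0, 1), so the printed tail fraction would concentrate near 0.05 as T grows; the paper's result is that for MOSS-style indices no such normal limit need hold, which is what the simulations in the paper probe at larger scale.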
Problem

Research questions and friction points this paper is trying to address.

Analyzing the stability of minimax-optimal, optimism-based bandit algorithms
Establishing structural conditions under which optimistic algorithms violate the Lai-Wei stability criterion
Investigating the tension between statistical stability and minimax-optimal regret
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes the stability of a broad class of optimism-based bandit algorithms
Identifies general structural conditions that imply violation of the Lai-Wei stability criterion
Proves instability of minimax-optimal UCB-style algorithms, including MOSS, ADA-UCB, and KL-UCB++