Asymptotic Theory and Sequential Test for General Multi-Armed Bandit Process

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of asymptotic theory for multi-armed bandits (MAB) under non-i.i.d., non-sub-Gaussian, and inter-arm dependent reward sequences that support adaptive resource allocation and sequential hypothesis testing. The authors propose the Urn Bandit (UNB) process, which integrates the reinforcement mechanism of urn models into the MAB framework. They establish, for the first time, a general asymptotic theory for MAB processes, proving that the allocation of resources converges almost surely to the optimal arm. Furthermore, they derive a functional central limit theorem (FCLT) for joint empirical processes, enabling rigorous sequential inference tasks such as A/B testing and arm comparisons. Experiments demonstrate that UNB significantly improves average rewards—approaching those of classical MAB algorithms—while preserving the statistical validity of randomized experimental designs.
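The urn reinforcement mechanism the summary describes can be sketched as a Pólya-urn-style allocation rule. The following is a minimal, hypothetical sketch; the function name, the Bernoulli-reward assumption, and the exact update rule are illustrative choices, not the authors' UNB construction:

```python
import random

def urn_bandit(reward_fns, rounds, init_balls=1, seed=None):
    """Minimal urn-driven bandit sketch (illustrative, not the paper's
    exact UNB process): each arm starts with `init_balls` balls, an arm
    is drawn with probability proportional to its ball count, and a
    reward of 1 adds a ball for that arm, so the allocation is
    progressively reinforced toward better-performing arms."""
    rng = random.Random(seed)
    k = len(reward_fns)
    balls = [init_balls] * k
    pulls = [0] * k
    rewards = [0.0] * k
    for _ in range(rounds):
        # draw an arm with probability proportional to its ball count
        arm = rng.choices(range(k), weights=balls)[0]
        r = reward_fns[arm]()  # observe this arm's reward
        balls[arm] += r        # urn reinforcement step
        pulls[arm] += 1
        rewards[arm] += r
    return pulls, rewards
```

With a clearly better arm, the ball counts (and hence the pull frequencies) concentrate on it over time, mirroring the almost-sure convergence of allocation that the paper proves.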

📝 Abstract
Multi-armed bandit (MAB) processes constitute a foundational subclass of reinforcement learning problems and a central topic in statistical decision theory, but their use for simultaneous adaptive allocation and sequential testing has been limited by the absence of asymptotic theory under non-i.i.d. sequences and sublinear information. To address this open challenge, we propose the Urn Bandit (UNB) process, which integrates the reinforcement mechanism of urn probabilistic models with MAB principles, ensuring almost sure convergence of resource allocation to the optimal arms. We establish a joint functional central limit theorem (FCLT) for consistent estimators of expected rewards under non-i.i.d., non-sub-Gaussian, and sublinear reward samples with pairwise correlations across arms. To overcome the limitations of existing methods that focus mainly on cumulative regret, we develop the asymptotic theory alongside adaptive allocation, which supports powerful sequential tests such as arm comparison, A/B testing, and policy evaluation. Simulation studies and real-data analysis demonstrate that UNB maintains the statistical test performance of an equal randomization (ER) design while obtaining higher average rewards, comparable to classical MAB processes.
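As a rough illustration of the kind of arm comparison such an FCLT licenses, consider a generic two-sample z-statistic on the arms' estimated mean rewards. Under adaptive allocation, the asymptotic normality needed to read this statistic against normal quantiles is precisely what an FCLT of this type must supply; the helper below is a hedged sketch of that generic statistic, not the authors' test procedure:

```python
import math

def two_arm_z(mean1, n1, var1, mean2, n2, var2):
    """Generic two-sample z-statistic comparing two arms' mean-reward
    estimates. Standard under i.i.d. sampling; under adaptive
    (urn-type) allocation its validity must instead be justified by an
    FCLT such as the one the paper establishes. Illustrative only."""
    se = math.sqrt(var1 / n1 + var2 / n2)  # standard error of the difference
    return (mean1 - mean2) / se
```

For example, comparing estimated means 0.6 and 0.5 with 100 samples per arm and variances 0.24 and 0.25 gives a z-statistic of about 1.43, below the usual two-sided 5% critical value of 1.96.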
Problem

Research questions and friction points this paper is trying to address.

Multi-armed bandit
asymptotic theory
sequential test
non-i.i.d.
adaptive allocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Urn Bandit
functional central limit theorem
non-i.i.d. rewards
sequential testing
adaptive allocation
Li Yang
School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China
Xiaodong Yan
Unknown affiliation
Statistics, Machine Learning
Dandan Jiang
School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China