🤖 AI Summary
This work addresses the lack of asymptotic theory for multi-armed bandits (MAB) under non-i.i.d., non-sub-Gaussian, and inter-arm dependent reward sequences, a gap that has prevented simultaneous adaptive resource allocation and sequential hypothesis testing. The authors propose the Urn Bandit (UNB) process, which integrates the reinforcement mechanism of urn models into the MAB framework. They establish, for the first time, a general asymptotic theory for MAB processes, proving that the allocation of resources converges almost surely to the optimal arm. Furthermore, they derive a functional central limit theorem (FCLT) for joint empirical processes, enabling rigorous sequential inference tasks such as A/B testing and arm comparisons. Experiments demonstrate that UNB substantially improves average rewards, approaching those of classical MAB algorithms, while preserving the statistical validity of randomized experimental designs.
📝 Abstract
Multi-armed bandit (MAB) processes constitute a foundational subclass of reinforcement learning problems and a central topic in statistical decision theory, yet they have not supported simultaneous adaptive allocation and sequential testing, owing to the absence of asymptotic theory under non-i.i.d. sequences and sublinear information. To address this open challenge, we propose the Urn Bandit (UNB) process, which integrates the reinforcement mechanism of urn probabilistic models with MAB principles and ensures almost sure convergence of resource allocation to the optimal arms. We establish a joint functional central limit theorem (FCLT) for consistent estimators of expected rewards under non-i.i.d., non-sub-Gaussian, and sublinear reward samples with pairwise correlations across arms. To overcome the limitations of existing methods, which focus mainly on cumulative regret, we develop asymptotic theory alongside adaptive allocation that supports powerful sequential tests, such as arm comparison, A/B testing, and policy evaluation. Simulation studies and real-data analysis demonstrate that UNB maintains the statistical testing performance of the equal randomization (ER) design while obtaining higher average rewards, comparable to classical MAB processes.
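To make the urn-reinforcement idea concrete, here is a minimal sketch of a Pólya-type urn driving arm allocation: each arm holds "balls", an arm is drawn with probability proportional to its ball count, and the observed reward is added back to that arm's balls. This is an illustrative assumption, not the paper's actual UNB construction; the function name, Bernoulli reward model, and unit initial composition are all hypothetical choices for the demo.

```python
import random


def urn_bandit_simulation(reward_means, horizon, seed=0):
    """Illustrative urn-driven allocation (hypothetical sketch, not the
    paper's UNB definition): arms are sampled in proportion to their
    ball counts, and the urn is reinforced by the observed reward."""
    rng = random.Random(seed)
    balls = [1.0] * len(reward_means)  # one ball per arm to start
    pulls = [0] * len(reward_means)
    total_reward = 0.0
    for _ in range(horizon):
        # Draw an arm with probability proportional to its ball count.
        r = rng.random() * sum(balls)
        arm, acc = 0, balls[0]
        while r > acc:
            arm += 1
            acc += balls[arm]
        # Bernoulli reward with the arm's mean (assumed reward model).
        reward = 1.0 if rng.random() < reward_means[arm] else 0.0
        balls[arm] += reward  # reinforcement: reward becomes new balls
        pulls[arm] += 1
        total_reward += reward
    return pulls, total_reward
```

Because the better arm is reinforced at a higher rate, its share of the urn, and hence its share of pulls, grows over time, which mirrors the almost-sure convergence of allocation claimed in the abstract while every arm retains positive selection probability at each step.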