Design Experiments to Compare Multi-armed Bandit Algorithms

📅 2026-03-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high cost and deployment latency of evaluating online multi-armed bandit algorithms—such as UCB, Thompson Sampling, and ε-greedy—which typically require numerous independent experimental restarts. To mitigate this, the authors propose Artificial Replay (AR), an experimental design that records the interaction trajectory of a single deployed policy and reuses historical rewards for actions already observed, querying the environment only for previously unobserved actions when evaluating new policies. A theoretical analysis establishes that AR yields unbiased estimators, reduces user interactions to $T + o(T)$, and exhibits sub-linearly growing variance, in contrast to the linearly growing variance of a naive design. Empirical evaluations on standard bandit algorithms demonstrate that AR maintains estimation accuracy while nearly halving experimental costs.

📝 Abstract
Online platforms routinely compare multi-armed bandit algorithms, such as UCB and Thompson Sampling, to select the best-performing policy. Unlike standard A/B tests for static treatments, each run of a bandit algorithm over $T$ users produces only one dependent trajectory, because the algorithm's decisions depend on all past interactions. Reliable inference therefore demands many independent restarts of the algorithm, making experimentation costly and delaying deployment decisions. We propose Artificial Replay (AR) as a new experimental design for this problem. AR first runs one policy and records its trajectory. When the second policy is executed, it reuses a recorded reward whenever it selects an action the first policy already took, and queries the real environment only otherwise. We develop a new analytical framework for this design and prove three key properties of the resulting estimator: it is unbiased; it requires only $T + o(T)$ user interactions instead of $2T$ for a run of the treatment and control policies, nearly halving the experimental cost when both policies have sub-linear regret; and its variance grows sub-linearly in $T$, whereas the estimator from a naïve design has a linearly-growing variance.
Numerical experiments with UCB, Thompson Sampling, and $\epsilon$-greedy policies confirm these theoretical gains.
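The replay protocol described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the toy Bernoulli environment, the `eps_greedy` policies, and all function names here are our own assumptions, and the paper's exact matching rule for replayed rewards may differ. The sketch only shows the core idea—run policy one for $T$ rounds recording its (action, reward) pairs, then run policy two, popping a recorded reward whenever one is available for the chosen arm and querying the real environment only otherwise—so that total real interactions stay well under $2T$.

```python
import random
from collections import defaultdict, deque

random.seed(0)
MEANS = [0.3, 0.5, 0.7]   # assumed toy Bernoulli arm means
T = 2000                  # horizon for each policy

def pull(arm):
    """Query the real environment once (one user interaction)."""
    return 1.0 if random.random() < MEANS[arm] else 0.0

def eps_greedy(eps):
    """Build an epsilon-greedy policy over running empirical means."""
    counts = [0] * len(MEANS)
    sums = [0.0] * len(MEANS)
    def act():
        if 0 in counts:                         # pull each arm once first
            return counts.index(0)
        if random.random() < eps:               # explore
            return random.randrange(len(MEANS))
        return max(range(len(MEANS)), key=lambda a: sums[a] / counts[a])
    def update(arm, r):
        counts[arm] += 1
        sums[arm] += r
    return act, update

# Phase 1: run the first policy, recording its full trajectory.
act1, upd1 = eps_greedy(0.1)
replay = defaultdict(deque)                     # arm -> queue of recorded rewards
for _ in range(T):
    a = act1()
    r = pull(a)
    upd1(a, r)
    replay[a].append(r)

# Phase 2: run the second policy; reuse a recorded reward when one exists
# for the chosen arm, and query the real environment only otherwise.
act2, upd2 = eps_greedy(0.05)
fresh_queries = 0
for _ in range(T):
    a = act2()
    if replay[a]:
        r = replay[a].popleft()                 # artificial replay: no real query
    else:
        r = pull(a)                             # fall back to the environment
        fresh_queries += 1
    upd2(a, r)

total_interactions = T + fresh_queries
print(f"interactions used: {total_interactions} vs naive 2T = {2 * T}")
```

Because both policies concentrate on the same well-performing arm, most of the second run's pulls hit the replay buffer, so the fresh-query count stays far below $T$—mirroring the paper's $T + o(T)$ interaction bound under sub-linear regret.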
Problem

Research questions and friction points this paper is trying to address.

multi-armed bandit
experimental design
online comparison
dependent trajectory
A/B testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Artificial Replay
Multi-armed Bandits
Experimental Design
Variance Reduction
Unbiased Estimation
Huiling Meng
Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong, China
Ningyuan Chen
Department of Management, UTM & Rotman School of Management, University of Toronto
Revenue Management · Online Learning · Operations Management · Business Analytics
Xuefeng Gao
Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong, China