Multi-Armed Bandits With Machine Learning-Generated Surrogate Rewards

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the multi-armed bandit (MAB) decision problem under sparse online feedback. To alleviate the online data scarcity bottleneck, it leverages offline auxiliary data, such as historical user covariates, to construct biased yet informative surrogate rewards. It is the first work to explicitly model highly biased surrogate rewards and, under a joint Gaussian assumption, designs MLA-UCB, a novel UCB-type algorithm that requires no prior knowledge of the covariance structure. Theoretically, MLA-UCB achieves sublinear cumulative regret even when the surrogate reward means are substantially biased relative to the true means. Empirically, with moderate-scale offline data and moderate correlation between true and surrogate rewards, MLA-UCB significantly outperforms standard UCB, yielding substantial regret reduction. The work advances rigorous modeling and practical deployment of offline-online collaborative learning for sequential decision-making.

📝 Abstract
Multi-armed bandit (MAB) is a widely adopted framework for sequential decision-making under uncertainty. Traditional bandit algorithms rely solely on online data, which tends to be scarce as it must be gathered during the online phase when the arms are actively pulled. However, in many practical settings, rich auxiliary data, such as covariates of past users, is available prior to deploying any arms. We introduce a new setting for MAB where pre-trained machine learning (ML) models are applied to convert side information and historical data into *surrogate rewards*. A prominent feature of this setting is that the surrogate rewards may exhibit substantial bias, as true reward data is typically unavailable in the offline phase, forcing ML predictions to heavily rely on extrapolation. To address the issue, we propose the Machine Learning-Assisted Upper Confidence Bound (MLA-UCB) algorithm, which can be applied to any reward prediction model and any form of auxiliary data. When the predicted and true rewards are jointly Gaussian, it provably improves the cumulative regret, provided that the correlation is non-zero, even in cases where the mean surrogate reward completely misaligns with the true mean rewards. Notably, our method requires no prior knowledge of the covariance matrix between true and surrogate rewards. We compare MLA-UCB with the standard UCB on a range of numerical studies and show a sizable efficiency gain even when the size of the offline data and the correlation between predicted and true rewards are moderate.
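For context, the baseline the abstract compares against is the standard UCB index: pull the arm maximizing the empirical mean plus an exploration bonus. A minimal sketch follows; the Gaussian arm distributions, horizon, and function name are illustrative choices, not taken from the paper:

```python
import numpy as np

def ucb(arm_means, horizon, seed=0):
    """Standard UCB1 on Gaussian arms: after pulling each arm once,
    pick the arm maximizing empirical mean + sqrt(2 ln t / n_pulls).
    Returns the cumulative pseudo-regret over the horizon."""
    rng = np.random.default_rng(seed)
    k = len(arm_means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    regret = 0.0
    best = max(arm_means)
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1  # initialization: pull each arm once
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            a = int(np.argmax(sums / counts + bonus))
        r = rng.normal(arm_means[a], 1.0)  # unit-variance Gaussian reward
        counts[a] += 1
        sums[a] += r
        regret += best - arm_means[a]
    return regret
```

Because the exploration bonus shrinks only as the arm is pulled, every online observation is "paid for" in regret; the paper's point is that offline surrogate data can sharpen these estimates without extra pulls.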
Problem

Research questions and friction points this paper is trying to address.

Addresses bias in surrogate rewards generated by ML models
Improves MAB decision-making using offline auxiliary data
Strengthens regret guarantees without prior knowledge of the covariance structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses ML-generated surrogate rewards within the MAB framework
Proposes the MLA-UCB algorithm, robust to heavily biased predictions
Improves cumulative regret without prior knowledge of the covariance matrix
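The paper's exact estimator and confidence bound are not reproduced on this page. As a loose, hypothetical illustration of the mechanism the abstract describes (a biased surrogate still helps as long as it correlates with the true reward), here is a control-variate style mean adjustment for a single arm; the distributions, the +2.0 bias, and the function name are all illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def adjusted_mean_estimate(seed=1, n_online=40, n_offline=100_000):
    """Hypothetical control-variate sketch (not the paper's estimator).

    The surrogate reward s is heavily biased (+2.0) but correlated with
    the true reward y. A large offline sample pins down the surrogate's
    own mean; the scarce online mean of y is then corrected by the online
    surrogate's deviation from that offline mean, so the constant bias
    cancels and only the correlation matters.
    Returns (adjusted estimate, plain online mean)."""
    rng = np.random.default_rng(seed)
    true_mean, bias = 1.0, 2.0

    # Offline phase: only surrogate rewards are observable.
    eps_off = rng.normal(0, 1, n_offline)
    s_off = true_mean + bias + 0.8 * eps_off + rng.normal(0, 0.6, n_offline)
    mu_s = s_off.mean()

    # Online phase: scarce paired (true, surrogate) rewards.
    eps = rng.normal(0, 1, n_online)
    y = true_mean + eps
    s = true_mean + bias + 0.8 * eps + rng.normal(0, 0.6, n_online)

    # Estimated regression slope and control-variate adjustment.
    beta = np.cov(y, s, ddof=1)[0, 1] / s.var(ddof=1)
    return y.mean() - beta * (s.mean() - mu_s), y.mean()
```

Because only the deviation of the online surrogate mean from its offline mean enters the estimate, the surrogate's constant bias cancels; what remains is a variance reduction growing with the squared correlation, mirroring the abstract's claim that any non-zero correlation helps even when the surrogate mean is completely misaligned.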