🤖 AI Summary
This paper studies the online linear contextual bandit problem under partial context observability and proposes PULSE-UCB, the first algorithm to leverage surrogate features from large-scale pretrained models for missing-feature imputation. The method integrates pretrained-feature imputation, UCB-style online decision-making, and error analysis under a Hölder smoothness assumption. Theoretical contributions include: (1) a precise decomposition of the regret bound into a standard linear bandit term and an additional term governed by pretrained model quality; (2) attainment of a near-optimal regret upper bound under i.i.d. contexts, accompanied by a matching lower bound; and (3) quantification of how prediction uncertainty impacts decision performance, explicitly characterizing the auxiliary data scale required to improve downstream learning. Experiments demonstrate the efficacy of pretrained priors in nonstationary, partially observable settings.
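The regret decomposition in contribution (1) can be written schematically. This is not the paper's exact statement; the rates below are illustrative placeholders, with the first term being the standard linear-bandit rate and $\varepsilon_n$ denoting the pretrained model's imputation error after training on $n$ auxiliary samples:

$$
R_T \;\lesssim\; \underbrace{\tilde{O}\!\left(d\sqrt{T}\right)}_{\text{standard linear bandit}} \;+\; \underbrace{C \, T \, \varepsilon_n}_{\text{pretrained model quality}}
$$

Under the Hölder smoothness assumption, one would typically expect $\varepsilon_n$ to shrink with $n$ at a nonparametric rate, which is how the auxiliary data scale enters the bound; the precise exponents are specified in the paper itself.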
📝 Abstract
The rise of large-scale pretrained models has made it feasible to generate predictive or synthetic features at low cost, raising the question of how to incorporate such surrogate predictions into downstream decision-making. We study this problem in the setting of online linear contextual bandits, where contexts may be complex, nonstationary, and only partially observed. In addition to bandit data, we assume access to an auxiliary dataset containing fully observed contexts--common in practice since such data are collected without adaptive interventions. We propose PULSE-UCB, an algorithm that leverages pretrained models trained on the auxiliary data to impute missing features during online decision-making. We establish regret guarantees that decompose into a standard bandit term plus an additional component reflecting pretrained model quality. In the i.i.d. context case with Hölder-smooth missing features, PULSE-UCB achieves near-optimal performance, supported by matching lower bounds. Our results quantify how uncertainty in predicted contexts affects decision quality and how much historical data is needed to improve downstream learning.
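The paper's pseudocode is not reproduced above, so the following is a minimal sketch of the imputation-then-UCB idea, assuming PULSE-UCB follows a standard LinUCB template: a model fit on fully observed auxiliary data imputes the missing context features, and the bandit runs UCB on the imputed contexts. The linear ground-truth imputation map, noise levels, and all parameter values are illustrative assumptions, not the paper's setup (the paper assumes only Hölder-smooth missing features and a general pretrained model).

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, d_mis, K, T = 3, 2, 5, 500   # observed/missing dims, arms, horizon
d = d_obs + d_mis
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)       # unknown reward parameter

# Ground-truth map from observed to missing features (linear here for simplicity;
# the paper only assumes Hölder smoothness).
W = rng.normal(size=(d_mis, d_obs))

# "Pretrained" imputation model fit on an auxiliary dataset of fully observed
# contexts, collected without adaptive interventions.
X_aux = rng.normal(size=(1000, d_obs))
Y_aux = X_aux @ W.T + 0.01 * rng.normal(size=(1000, d_mis))
W_hat, *_ = np.linalg.lstsq(X_aux, Y_aux, rcond=None)   # (d_obs, d_mis)

# LinUCB on imputed contexts.
A = np.eye(d)            # regularized Gram matrix
b = np.zeros(d)          # reward-weighted feature sum
alpha = 1.0              # exploration bonus scale
regret = 0.0
for t in range(T):
    X_obs = rng.normal(size=(K, d_obs))           # observed parts of K arm contexts
    X_imp = np.hstack([X_obs, X_obs @ W_hat])     # impute missing features
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    # UCB score: estimated reward + confidence width on the imputed context.
    width = np.sqrt(np.einsum('ij,jk,ik->i', X_imp, A_inv, X_imp))
    a = int(np.argmax(X_imp @ theta_hat + alpha * width))
    X_true = np.hstack([X_obs, X_obs @ W.T])      # full contexts (unseen by learner)
    r = X_true[a] @ theta + 0.1 * rng.normal()    # noisy linear reward
    regret += (X_true @ theta).max() - X_true[a] @ theta
    A += np.outer(X_imp[a], X_imp[a])
    b += r * X_imp[a]

avg_regret = regret / T   # should be small once the estimate converges
```

Because the imputation model here is nearly exact, the extra regret term from pretrained-model quality is negligible and the run behaves like a standard linear bandit; degrading the auxiliary data (fewer samples, more noise) is a quick way to see the second term in the paper's decomposition take over.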