🤖 AI Summary
To address inefficient exploration in contextual bandits with large action spaces—where the absence of prior knowledge incurs high statistical and computational costs—this paper introduces diffusion Thompson sampling (dTS), presented as the first method to incorporate pre-trained diffusion models into this setting. dTS uses a diffusion model to implicitly capture reward correlations among actions, turning it into a generative Bayesian prior that enables efficient posterior sampling and exploration. The authors establish a sublinear regret bound for dTS, supporting its theoretical soundness. Empirically, dTS outperforms classical baselines—including LinUCB, NeuralUCB, and standard Thompson sampling—across multiple large-action-space benchmarks, demonstrating both statistical efficacy and computational feasibility. The core contribution is the integration of diffusion modeling with Bayesian online decision-making, yielding a scalable, data-efficient exploration paradigm for high-dimensional action spaces.
📝 Abstract
Efficient exploration is a key challenge in contextual bandits due to the large size of their action space, where uninformed exploration can result in computational and statistical inefficiencies. Fortunately, the rewards of actions are often correlated, and this can be leveraged to explore them efficiently. In this work, we capture such correlations using pre-trained diffusion models, upon which we design diffusion Thompson sampling (dTS). We develop both theoretical and algorithmic foundations for dTS, and our empirical evaluation shows its favorable performance.
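To make the exploration mechanism concrete, the sketch below shows classical Thompson sampling for a linear contextual bandit, the baseline that dTS builds on: maintain a posterior over each action's reward parameters, sample from it each round, and act greedily on the sample. This is a minimal illustration under assumed linear-Gaussian rewards; the paper's key ingredient, replacing the simple Gaussian prior with one induced by a pre-trained diffusion model, is not reproduced here, and all dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 10, 2000                   # context dim, actions, rounds (illustrative)
theta_true = rng.normal(size=(K, d))    # hypothetical true reward parameters

# Per-action Gaussian posterior over theta_k: N(mu_k, Lambda_k^{-1}).
# dTS would instead correlate these posteriors through a diffusion-model prior.
Lambda = np.stack([np.eye(d) for _ in range(K)])  # precision matrices (prior: identity)
b = np.zeros((K, d))                              # running sums of x * r

cum_reward, oracle_reward = 0.0, 0.0
for t in range(T):
    x = rng.normal(size=d)              # observe context
    # Sample a parameter vector from each action's posterior, then act greedily.
    sampled = np.empty(K)
    for k in range(K):
        cov = np.linalg.inv(Lambda[k])
        mu = cov @ b[k]
        sampled[k] = x @ rng.multivariate_normal(mu, cov)
    a = int(np.argmax(sampled))
    r = x @ theta_true[a] + 0.1 * rng.normal()    # noisy linear reward
    # Conjugate posterior update for the chosen action only.
    Lambda[a] += np.outer(x, x)
    b[a] += r * x
    cum_reward += r
    oracle_reward += np.max(theta_true @ x)       # best action in hindsight
```

Because only the played action's posterior is updated, this baseline explores each of the K actions essentially independently; the motivation for dTS is that a generative prior lets observations about one action sharpen beliefs about correlated actions.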