AI Summary
This paper addresses the multi-task multi-armed bandit problem under partial observability of contexts, where reward dependencies across tasks are governed by latent variables. To tackle the challenge of jointly modeling structural uncertainty (i.e., inter-arm and inter-task dependencies) and user-specific uncertainty (e.g., missing contexts and sparse interactions), we propose a method based on a particle approximation of a log-density Gaussian process. Our approach unifies the modeling of the joint task–reward distribution, enabling both cross-task observation sharing and personalized inference. Unlike conventional methods, it imposes no prior assumptions on the dependency structure, supporting fully data-driven discovery of task relationships. Empirically, the method significantly outperforms hierarchical Bayesian bandit baselines, particularly under model misspecification and strong latent heterogeneity, while maintaining computational tractability and theoretical coherence.
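One way to make this setup concrete, using notation of our own rather than the paper's: each task $u$ carries a latent context variable $z_u$ that induces the dependencies between arm rewards, and only part of the context is observed.

$$
z_u \sim p(z), \qquad
r_{u,a} \sim p(r \mid a, x_u, z_u), \qquad
p(r_{u,a} \mid x_u, \mathcal{D}) = \int p(r \mid a, x_u, z)\, p(z \mid x_u, \mathcal{D})\, \mathrm{d}z,
$$

where $x_u$ is the observed part of task $u$'s context and $\mathcal{D}$ collects interactions from all tasks. Uncertainty about the shared mapping $p(r \mid a, x, z)$ corresponds to the structural uncertainty described above, while uncertainty about $z_u$ given the partial context and limited history corresponds to the user-specific uncertainty.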
Abstract
We propose a novel Bayesian framework for efficient exploration in contextual multi-task multi-armed bandit settings, where the context is only partially observed and dependencies between reward distributions are induced by latent context variables. To exploit these structural dependencies, our approach integrates observations across all tasks and learns a global joint distribution, while still allowing personalised inference for new tasks. In this setting, we identify two key sources of epistemic uncertainty: structural uncertainty in the latent reward dependencies across arms and tasks, and user-specific uncertainty due to incomplete context and limited interaction history. To put our method into practice, we represent the joint distribution over tasks and rewards using a particle-based approximation of a log-density Gaussian process. This representation enables flexible, data-driven discovery of both inter-arm and inter-task dependencies without prior assumptions on the latent variables. Empirically, we demonstrate that our method outperforms baselines such as hierarchical model bandits, especially in settings with model misspecification or complex latent heterogeneity.
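As a rough illustration of a particle-based exploration loop in this setting, the sketch below runs Thompson sampling over particle hypotheses of a task's latent context. It is written under our own assumptions, not the paper's algorithm: the log-density Gaussian process is replaced by a fixed linear reward model, and names such as `arm_embeddings` and `reward_mean` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, latent_dim, n_particles = 5, 2, 200

# Particles: hypotheses about the current task's latent context z.
particles = rng.normal(size=(n_particles, latent_dim))
log_weights = np.zeros(n_particles)

# Illustrative (assumed) mapping from latent context to per-arm mean rewards;
# in the paper this role is played by the learned joint task-reward model.
arm_embeddings = rng.normal(size=(n_arms, latent_dim))

def reward_mean(z):
    """Mean reward of every arm under latent context z."""
    return arm_embeddings @ z  # shape (n_arms,)

z_true = rng.normal(size=latent_dim)  # unknown latent context of this task
noise_std = 0.1

for t in range(100):
    # Normalised particle weights.
    w = np.exp(log_weights - log_weights.max())
    w /= w.sum()

    # Thompson sampling: draw one latent-context hypothesis and act greedily.
    z_sample = particles[rng.choice(n_particles, p=w)]
    arm = int(np.argmax(reward_mean(z_sample)))

    # Observe a noisy reward from the environment.
    reward = reward_mean(z_true)[arm] + noise_std * rng.normal()

    # Reweight every particle by the likelihood of the observed reward
    # (Gaussian observation model, assumed here purely for illustration).
    predicted = particles @ arm_embeddings[arm]  # shape (n_particles,)
    log_weights += -0.5 * ((reward - predicted) / noise_std) ** 2

    # Resample when the effective sample size degenerates.
    w = np.exp(log_weights - log_weights.max())
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles, log_weights = particles[idx], np.zeros(n_particles)
```

The cross-task component, where observations from other tasks shape the global joint model, is omitted here to keep the sketch short.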