Co-Exploration and Co-Exploitation via Shared Structure in Multi-Task Bandits

📅 2025-12-14
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses the multi-task multi-armed bandit problem under partial observability of contexts, where reward dependencies across tasks are governed by latent variables. To tackle the challenge of jointly modeling structural uncertainty (i.e., inter-arm and inter-task dependencies) and user-specific uncertainty (e.g., missing contexts and sparse interactions), we propose a particle-based approximation of a log-density Gaussian process. Our approach unifies the modeling of the joint task–reward distribution, enabling both cross-task observation sharing and personalized inference. Unlike conventional methods, it imposes no prior assumptions on the dependency structure, supporting fully data-driven discovery of task relationships. Empirically, the method significantly outperforms hierarchical Bayesian bandit baselines—particularly under model misspecification and strong latent heterogeneity—while maintaining computational tractability and theoretical coherence.

📝 Abstract
We propose a novel Bayesian framework for efficient exploration in contextual multi-task multi-armed bandit settings, where the context is only observed partially and dependencies between reward distributions are induced by latent context variables. In order to exploit these structural dependencies, our approach integrates observations across all tasks and learns a global joint distribution, while still allowing personalised inference for new tasks. In this regard, we identify two key sources of epistemic uncertainty, namely structural uncertainty in the latent reward dependencies across arms and tasks, and user-specific uncertainty due to incomplete context and limited interaction history. To put our method into practice, we represent the joint distribution over tasks and rewards using a particle-based approximation of a log-density Gaussian process. This representation enables flexible, data-driven discovery of both inter-arm and inter-task dependencies without prior assumptions on the latent variables. Empirically, we demonstrate that our method outperforms baselines such as hierarchical model bandits, especially in settings with model misspecification or complex latent heterogeneity.
Problem

Research questions and friction points this paper is trying to address.

Addresses efficient exploration in multi-task bandits with partial context observation.
Exploits latent structural dependencies across tasks for personalized inference.
Handles epistemic uncertainty from incomplete context and limited interaction history.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian framework for multi-task bandit exploration
Particle-based Gaussian process for joint distribution modeling
Exploits latent dependencies across tasks and arms
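The core mechanism behind these contributions—maintaining a particle approximation of the posterior over latent reward parameters and sampling from it to drive exploration—can be illustrated with a minimal single-task sketch. This is not the authors' implementation (their method models a joint log-density Gaussian process across tasks); all names and parameters below are hypothetical, and a simple linear-Gaussian reward model stands in for the paper's richer latent structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particle approximation of the posterior over a latent task parameter theta.
# Each particle is a candidate theta drawn from the prior; log-weights track
# how well each particle explains the rewards observed so far.
class ParticlePosterior:
    def __init__(self, n_particles=500, dim=2):
        self.particles = rng.normal(size=(n_particles, dim))
        self.log_w = np.zeros(n_particles)

    def update(self, x_arm, reward, noise_sd=0.5):
        # Gaussian reward likelihood: reward ~ N(theta @ x_arm, noise_sd^2)
        mean = self.particles @ x_arm
        self.log_w += -0.5 * ((reward - mean) / noise_sd) ** 2
        self.log_w -= self.log_w.max()  # keep weights numerically stable

    def sample(self):
        w = np.exp(self.log_w)
        w /= w.sum()
        return self.particles[rng.choice(len(w), p=w)]

    def mean(self):
        w = np.exp(self.log_w)
        w /= w.sum()
        return w @ self.particles

# Thompson sampling: draw one plausible theta, play the arm it favours.
arms = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
true_theta = np.array([0.0, 1.0])  # hypothetical ground truth; arm 1 is optimal
post = ParticlePosterior()
for t in range(200):
    a = int(np.argmax(arms @ post.sample()))
    r = arms[a] @ true_theta + rng.normal(scale=0.5)
    post.update(arms[a], r)

best_arm = int(np.argmax(arms @ post.mean()))
print(best_arm)  # the posterior mean should now favour the optimal arm
```

In the multi-task setting described in the abstract, each task would share observations through a joint distribution over latent variables rather than keeping an independent posterior, which is what enables cross-task observation sharing alongside personalized inference.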
Sumantrak Mukherjee
Department of Data Science and its Applications, German Research Centre for Artificial Intelligence (DFKI), Germany
Serafima Lebedeva
Department of Computer Science, University of Kaiserslautern–Landau (RPTU), Germany
Valentin Margraf
Institute of Informatics, University of Munich (LMU), Germany
Jonas Hanselle
Institute of Informatics, University of Munich (LMU), Germany
Kanta Yamaoka
Department of Computer Science, University of Kaiserslautern–Landau (RPTU), Germany
Viktor Bengs
German Research Center for Artificial Intelligence (DFKI)
Bandit algorithms · Preference learning · Uncertainty Quantification · Algorithm Configuration
Stefan Konigorski
Digital Health – Machine Learning Research Group, Hasso Plattner Institute for Digital Engineering, Potsdam, Germany
Eyke Hüllermeier
Professor of Computer Science, Paderborn University
Artificial Intelligence · Machine Learning · Fuzzy Logic · Bioinformatics
Sebastian Josef Vollmer
Department of Computer Science, University of Kaiserslautern–Landau (RPTU), Germany