In-Context Reinforcement Learning through Bayesian Fusion of Context and Value Prior

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing in-context reinforcement learning methods, which typically rely on near-optimal data and struggle to surpass their training distribution without updating model parameters. To overcome this, the authors propose SPICE, which, using only suboptimal pretraining trajectories, is the first such method to achieve regret-optimality in stochastic bandits and finite-horizon MDPs. SPICE uses deep ensembles to learn a prior over Q-values and performs Bayesian updates at test time by incorporating contextual information, with an upper-confidence-bound criterion guiding exploration and enabling rapid adaptation. Empirical results show that SPICE substantially reduces regret across diverse bandit and control tasks, adapts efficiently to unseen tasks, remains robust under distribution shift, and approaches optimal decision-making performance.
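The "Bayesian fusion" step the summary describes can be sketched with a conjugate-Gaussian update: a deep ensemble supplies a prior mean and variance for a Q-value, and rewards observed in context shrink that prior toward the evidence. This is an illustrative reconstruction under a Gaussian noise assumption; the function names, noise model, and numbers below are assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_fusion(prior_mean, prior_var, observations, obs_var):
    """Fuse a Gaussian prior over a Q-value with in-context observations.

    Conjugate-Gaussian update: the posterior precision is the prior
    precision plus one observation precision per in-context sample.
    """
    n = len(observations)
    post_precision = 1.0 / prior_var + n / obs_var
    post_var = 1.0 / post_precision
    post_mean = post_var * (prior_mean / prior_var
                            + np.sum(observations) / obs_var)
    return post_mean, post_var

# Prior from a deep ensemble: mean and spread of the members' predictions
# (hypothetical member outputs for one state-action pair).
ensemble_q = np.array([0.8, 1.1, 0.9, 1.2, 1.0])
prior_mean, prior_var = ensemble_q.mean(), ensemble_q.var() + 1e-6

# Rewards observed in context at test time for this action.
rewards = np.array([1.4, 1.5, 1.3])
post_mean, post_var = gaussian_fusion(prior_mean, prior_var,
                                      rewards, obs_var=0.25)
```

After the update the posterior mean sits between the ensemble prior and the in-context evidence, and the posterior variance is strictly smaller than the prior variance, which is what lets a poor prior learned from suboptimal data be corrected quickly.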

📝 Abstract
In-context reinforcement learning (ICRL) promises fast adaptation to unseen environments without parameter updates, but current methods either cannot improve beyond the training distribution or require near-optimal data, limiting practical adoption. We introduce SPICE, a Bayesian ICRL method that learns a prior over Q-values via a deep ensemble and updates this prior at test time with in-context information through Bayesian updates. To recover from poor priors resulting from training on suboptimal data, our online inference follows an Upper-Confidence-Bound rule that favours exploration and adaptation. We prove that SPICE achieves regret-optimal behaviour in both stochastic bandits and finite-horizon MDPs, even when pretrained only on suboptimal trajectories. We validate these findings empirically across bandit and control benchmarks: SPICE achieves near-optimal decisions on unseen tasks, substantially reduces regret compared to prior ICRL and meta-RL approaches, adapts rapidly, and remains robust under distribution shift.
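The Upper-Confidence-Bound rule mentioned in the abstract can be sketched as: act greedily with respect to the posterior mean plus a bonus proportional to the posterior standard deviation. The function name and the bonus coefficient `beta` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ucb_action(post_means, post_vars, beta=2.0):
    """Select the action with the highest upper confidence bound.

    The bonus grows with posterior uncertainty, so actions whose Q-value
    prior is poorly informed (e.g. learned from suboptimal trajectories)
    keep getting explored until in-context evidence shrinks their variance.
    """
    ucb = np.asarray(post_means) + beta * np.sqrt(np.asarray(post_vars))
    return int(np.argmax(ucb))

# Two actions: the first has the higher mean, the second far more uncertainty.
means = [1.0, 0.8]
variances = [0.01, 0.25]
chosen = ucb_action(means, variances)
```

In this toy case the uncertain action wins the bonus (0.8 + 2·0.5 > 1.0 + 2·0.1) and gets explored; once its posterior variance collapses, the rule reverts to exploiting the higher posterior mean.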
Problem

Research questions and friction points this paper is trying to address.

In-Context Reinforcement Learning
suboptimal data
distribution shift
fast adaptation
regret minimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-Context Reinforcement Learning
Bayesian Fusion
Q-value Prior
Upper-Confidence Bound
Regret-Optimal
👥 Authors
Anaïs Berkes
University of Cambridge, Mila - Quebec AI Institute
Vincent Taboga
Mila - Quebec AI Institute, Université de Montréal
Donna Vakalis
Mila - Quebec AI Institute, Université de Montréal
David Rolnick
McGill University, Mila - Quebec AI Institute
Topics: Machine Learning, Climate Change, Biodiversity, Deep Learning Theory
Yoshua Bengio
Professor of computer science, University of Montreal, Mila, IVADO, CIFAR
Topics: Machine learning, deep learning, artificial intelligence