🤖 AI Summary
This work addresses a limitation of existing in-context reinforcement learning methods, which typically rely on near-optimal data and struggle to surpass the performance of their training distribution without updating model parameters. To overcome this, the authors propose SPICE, a novel approach that achieves regret-optimality in stochastic bandits and finite-horizon MDPs, for the first time using only suboptimal pretraining trajectories. SPICE leverages deep ensembles to learn a prior over Q-values and performs Bayesian updates at test time by incorporating contextual information, guided by an upper confidence bound criterion that encourages exploration and enables rapid adaptation. Empirical results demonstrate that SPICE substantially reduces regret across diverse bandit and control tasks, adapts efficiently to unseen tasks, remains robust under distribution shift, and approaches optimal decision-making performance.
📝 Abstract
In-context reinforcement learning (ICRL) promises fast adaptation to unseen environments without parameter updates, but current methods either cannot improve beyond the training distribution or require near-optimal data, limiting practical adoption. We introduce SPICE, a Bayesian ICRL method that learns a prior over Q-values via a deep ensemble and updates this prior at test time with in-context information through Bayesian updates. To recover from poor priors caused by training on suboptimal data, our online inference follows an Upper-Confidence-Bound rule that favours exploration and adaptation. We prove that SPICE achieves regret-optimal behaviour in both stochastic bandits and finite-horizon MDPs, even when pretrained only on suboptimal trajectories. We validate these findings empirically across bandit and control benchmarks: SPICE achieves near-optimal decisions on unseen tasks, substantially reduces regret compared to prior ICRL and meta-RL approaches, adapts rapidly, and remains robust under distribution shift.
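The test-time loop the abstract describes can be sketched for the stochastic-bandit case. This is a minimal illustration, not the paper's implementation: all function names are ours, we assume the ensemble's disagreement is summarised as a Gaussian prior per arm, and we use a standard conjugate Gaussian update with a known noise variance.

```python
import math

# Sketch of SPICE-style inference (illustrative, not the authors' code):
# a deep ensemble supplies a Gaussian prior over each arm's Q-value; at test
# time we act by an upper-confidence rule and refine the posterior online.

def ensemble_prior(q_estimates):
    """Turn per-arm ensemble Q-estimates into Gaussian priors (mean, variance)."""
    priors = []
    for qs in q_estimates:
        mu = sum(qs) / len(qs)
        var = sum((q - mu) ** 2 for q in qs) / max(len(qs) - 1, 1)
        priors.append((mu, max(var, 1e-6)))
    return priors

def ucb_action(posterior, beta=2.0):
    """Pick the arm maximising mean + beta * std; optimism drives exploration."""
    return max(range(len(posterior)),
               key=lambda a: posterior[a][0] + beta * math.sqrt(posterior[a][1]))

def bayes_update(posterior, arm, reward, noise_var=1.0):
    """Conjugate Gaussian update of the chosen arm's posterior from one reward."""
    mu, var = posterior[arm]
    new_var = 1.0 / (1.0 / var + 1.0 / noise_var)
    new_mu = new_var * (mu / var + reward / noise_var)
    posterior[arm] = (new_mu, new_var)
    return posterior

# A prior learned from suboptimal data: the ensemble agrees arm 0 is mediocre
# but disagrees about arm 1, so arm 1 keeps high prior variance.
posterior = ensemble_prior([[0.20, 0.30, 0.25],   # arm 0: confidently mediocre
                            [0.10, 0.90, 0.50]])  # arm 1: uncertain, maybe great
arm = ucb_action(posterior)  # wide uncertainty makes arm 1 the optimistic choice
posterior = bayes_update(posterior, arm, reward=0.8)
```

The key property this toy captures is the recovery mechanism: even when the mean of the pretrained prior prefers the wrong arm, high ensemble disagreement keeps the UCB score of under-explored arms large, so in-context evidence can overturn a poor prior without any parameter updates.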