🤖 AI Summary
In offline reinforcement learning, a single global Q-function struggles to capture the diversity of subtasks in compositional tasks. This paper proposes Contextual Q-Learning (CQL), reformulating Q-function estimation as a context inference problem: a linear Transformer adaptively extracts local Q-functions from historical trajectories without requiring subtask labels or explicit task decomposition. CQL is the first to introduce in-context learning into offline RL, integrating retrieval-augmented prompting with differentiable weight inference to enable dynamic, compositional value modeling. We theoretically establish bounded error for its Q-estimation and prove that the induced policy is near-optimal. Empirically, CQL achieves performance gains of 16.4%, 8.6%, and 6.3% over prior methods on the Kitchen, Gym, and Adroit benchmarks, respectively—demonstrating state-of-the-art results.
📝 Abstract
Accurately estimating the Q-function is a central challenge in offline reinforcement learning. However, existing approaches often rely on a single global Q-function, which struggles to capture the compositional nature of tasks involving diverse subtasks. We propose In-context Compositional Q-Learning ( exttt{ICQL}), the first offline RL framework that formulates Q-learning as a contextual inference problem, using linear Transformers to adaptively infer local Q-functions from retrieved transitions without explicit subtask labels. Theoretically, we show that under two assumptions--linear approximability of the local Q-function and accurate weight inference from retrieved context-- exttt{ICQL} achieves bounded Q-function approximation error, and supports near-optimal policy extraction. Empirically, exttt{ICQL} substantially improves performance in offline settings: improving performance in kitchen tasks by up to 16.4%, and in Gym and Adroit tasks by up to 8.6% and 6.3%. These results highlight the underexplored potential of in-context learning for robust and compositional value estimation, positioning exttt{ICQL} as a principled and effective framework for offline RL.