In-Context Compositional Q-Learning for Offline Reinforcement Learning

📅 2025-09-28
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
In offline reinforcement learning, a single global Q-function struggles to capture the diversity of subtasks in compositional tasks. This paper proposes In-context Compositional Q-Learning (ICQL), which reformulates Q-function estimation as a contextual inference problem: a linear Transformer adaptively infers local Q-functions from retrieved transitions without requiring subtask labels or explicit task decomposition. ICQL is the first offline RL framework to cast Q-learning as in-context inference, integrating retrieval-augmented prompting with differentiable weight inference to enable dynamic, compositional value modeling. The paper establishes a bounded approximation error for its Q-estimates and proves that the induced policy is near-optimal. Empirically, ICQL improves performance by up to 16.4% on Kitchen tasks and by up to 8.6% and 6.3% on the Gym and Adroit benchmarks, respectively, demonstrating strong gains across all three domains.
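To make the mechanism concrete, here is a minimal sketch of the retrieve-then-infer loop described above. All names (`featurize`, `retrieve_neighbors`, `infer_local_q_weights`) are hypothetical, and the paper's learned linear-Transformer weight inference is stood in for by a closed-form ridge regression on one-step targets; read it as an illustration of the idea, not the authors' implementation.

```python
import numpy as np


def featurize(state, action):
    """Joint feature map phi(s, a); here simply the concatenation."""
    return np.concatenate([state, action])


def retrieve_neighbors(query, dataset, k=32):
    """Return the k stored transitions whose features are closest to query."""
    feats = np.stack([featurize(s, a) for (s, a, _, _) in dataset])
    dists = np.linalg.norm(feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return [dataset[i] for i in nearest]


def infer_local_q_weights(context, ridge=1e-3):
    """Infer local linear Q weights from the retrieved context via ridge
    regression on one-step rewards -- a closed-form stand-in for the
    paper's learned linear-Transformer inference step."""
    X = np.stack([featurize(s, a) for (s, a, _, _) in context])
    y = np.array([r for (_, _, r, _) in context])
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ y)


def local_q(state, action, dataset):
    """Evaluate Q(s, a) under the locally inferred linear model."""
    query = featurize(state, action)
    context = retrieve_neighbors(query, dataset)
    w = infer_local_q_weights(context)
    return float(w @ query)
```

A caller would build `dataset` as a list of `(state, action, reward, next_state)` tuples from the offline buffer and query `local_q` at candidate actions during policy extraction.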

📝 Abstract
Accurately estimating the Q-function is a central challenge in offline reinforcement learning. However, existing approaches often rely on a single global Q-function, which struggles to capture the compositional nature of tasks involving diverse subtasks. We propose In-context Compositional Q-Learning (ICQL), the first offline RL framework that formulates Q-learning as a contextual inference problem, using linear Transformers to adaptively infer local Q-functions from retrieved transitions without explicit subtask labels. Theoretically, we show that under two assumptions, linear approximability of the local Q-function and accurate weight inference from retrieved context, ICQL achieves bounded Q-function approximation error and supports near-optimal policy extraction. Empirically, ICQL substantially improves performance in offline settings: by up to 16.4% on Kitchen tasks, and by up to 8.6% and 6.3% on Gym and Adroit tasks. These results highlight the underexplored potential of in-context learning for robust and compositional value estimation, positioning ICQL as a principled and effective framework for offline RL.
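A plausible formalization of the abstract's two assumptions, with all symbols (phi, w_c, epsilon) chosen here for illustration rather than quoted from the paper:

```latex
% Assumption 1 (local linear approximability): within a retrieved
% context c, the optimal Q-function is near-linear in a feature map phi.
\exists\, w_c : \quad
  \sup_{(s,a) \in c} \bigl| Q^*(s,a) - w_c^\top \phi(s,a) \bigr|
  \le \varepsilon_{\mathrm{approx}}

% Assumption 2 (accurate weight inference): the in-context inferred
% weights are close to the true local weights.
\| \hat{w}_c - w_c \| \le \varepsilon_{\mathrm{infer}}

% Under both, the Q-estimate error is bounded, e.g.
| \hat{Q}(s,a) - Q^*(s,a) |
  \le \varepsilon_{\mathrm{approx}}
    + \varepsilon_{\mathrm{infer}} \, \| \phi(s,a) \|
```

A bound of this shape, combined with standard performance-difference arguments, is what typically yields the near-optimality of the extracted greedy policy.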
Problem

Research questions and friction points this paper is trying to address.

Addresses inaccurate Q-function estimation in offline reinforcement learning
Solves limitations of single global Q-function for compositional tasks
Enables adaptive local Q-function inference without subtask labels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses linear Transformers for contextual Q-learning (see the sketch after this list)
Infers local Q-functions from retrieved transitions
Achieves bounded error without subtask labels
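The linear-Transformer choice is what makes per-query inference cheap. Below is a minimal sketch of a linear-attention read-out over retrieved transitions; the function name and the ReLU feature map are illustrative choices under stated assumptions, not the paper's architecture.

```python
import numpy as np


def linear_attention(queries, keys, values):
    """Linear (kernelized) attention: cost O(n * d^2) in context length n,
    versus O(n^2 * d) for softmax attention over the same context."""
    def feature_map(x):
        return np.maximum(x, 0.0) + 1.0  # ReLU(x) + 1 keeps scores positive

    q, k = feature_map(queries), feature_map(keys)
    kv = k.T @ values                   # (d, d_v) summary of the whole context
    z = k.sum(axis=0)                   # (d,) normalizer over context rows
    return (q @ kv) / (q @ z)[:, None]  # attended values for each query row
```

Linear self-attention of this form has been argued to emulate steps of least-squares regression on in-context examples, which gives one intuition for why a linear Transformer can infer local Q-function weights directly from retrieved transitions.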
👥 Authors
Qiushui Xu (Penn State University)
Yuhao Huang (Shenzhen University)
Yushu Jiang (University of Toronto)
Lei Song (Microsoft Research)
Jinyu Wang (Microsoft Research)
Wenliang Zheng (Penn State University)
Jiang Bian (Microsoft Research)