Chunk-Guided Q-Learning

📅 2026-03-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge in offline reinforcement learning where single-step temporal difference (TD) methods suffer from bootstrapping error accumulation in long-horizon tasks, while pure action-chunk multi-step approaches mitigate this error at the cost of reduced policy expressivity. To reconcile these trade-offs, the paper proposes Chunk-Guided Q-Learning (CGQ), which introduces, for the first time, an action-chunk guidance mechanism into single-step TD learning. Specifically, CGQ regularizes a fine-grained single-step critic using a multi-step action-chunk critic, thereby preserving precise value propagation while effectively suppressing error accumulation. Theoretically, this approach yields a tighter optimality bound for the critic. Empirically, CGQ significantly outperforms both pure single-step and pure action-chunk baselines on long-horizon tasks in the OGBench benchmark.

Technology Category

Application Category

📝 Abstract
In offline reinforcement learning (RL), single-step temporal-difference (TD) learning can suffer from bootstrapping error accumulation over long horizons. Action-chunked TD methods mitigate this by backing up over multiple steps, but can introduce suboptimality by restricting the policy class to open-loop action sequences. To resolve this trade-off, we present Chunk-Guided Q-Learning (CGQ), a single-step TD algorithm that guides a fine-grained single-step critic by regularizing it toward a chunk-based critic trained using temporally extended backups. This reduces compounding error while preserving fine-grained value propagation. We theoretically show that CGQ attains tighter critic optimality bounds than either single-step or action-chunked TD learning alone. Empirically, CGQ achieves strong performance on challenging long-horizon OGBench tasks, often outperforming both single-step and action-chunked methods.
Problem

Research questions and friction points this paper is trying to address.

offline reinforcement learning
temporal-difference learning
bootstrapping error
action-chunked policies
long-horizon tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chunk-Guided Q-Learning
offline reinforcement learning
temporal-difference learning
bootstrapping error
action chunking
G
Gwanwoo Song
Department of Artificial Intelligence, Yonsei University
K
Kwanyoung Park
UC Berkeley
Youngwoon Lee
Youngwoon Lee
Assistant Professor at Yonsei University
Reinforcement learningRobot learning