🤖 AI Summary
In temporal-difference (TD) learning, bootstrapping causes errors in value targets to accumulate across steps, while block-wise (chunked) critics, though they accelerate value backups, require open-loop policy outputs over entire action blocks, compromising reactivity and modeling fidelity. To address this, we propose decoupling the action block length of the critic from that of the policy: the critic employs long blocks to enable multi-step value propagation, whereas the policy retains short blocks for real-time responsiveness. We further introduce optimistic value distillation, which transfers optimistic value estimates from the long-block critic to the short-block policy to mitigate bias and improve policy quality. Our method integrates multi-step TD backups, block-structured Q-function modeling, and an offline goal-conditioned RL framework. Evaluated on long-horizon offline goal-conditioned tasks, it significantly enhances policy reactivity and final performance, consistently outperforming state-of-the-art baselines with improved stability.
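The multi-step backup that a chunked critic enables can be illustrated with a minimal sketch (all names here are illustrative, not the paper's API): a critic over an h-step action block bootstraps once per block, so each backup propagates reward information h steps instead of one.

```python
# Hypothetical sketch of an h-step chunked TD target. A chunked critic
# Q(s, a_{t:t+h}) bootstraps once per h-step action block, so a single
# backup propagates value information across h environment steps.

def chunked_td_target(rewards, gamma, next_value):
    """Discounted return over one action chunk, bootstrapped from the
    critic's value estimate at the state reached after the chunk."""
    h = len(rewards)
    n_step_return = sum(gamma ** k * r for k, r in enumerate(rewards))
    return n_step_return + gamma ** h * next_value

# Example: a 4-step chunk with unit rewards and a bootstrap value of 10.
target = chunked_td_target([1.0, 1.0, 1.0, 1.0], gamma=0.9, next_value=10.0)
```

Compared with a one-step target `r + gamma * V(s')`, the bootstrapped term is discounted by `gamma ** h` and applied once per chunk, which is what speeds up value propagation on long-horizon tasks.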
📝 Abstract
Temporal-difference (TD) methods learn state and action values efficiently by bootstrapping from their own future value predictions, but this self-bootstrapping mechanism is prone to bootstrapping bias: errors in the value targets accumulate across steps and yield biased value estimates. Recent work has proposed chunked critics, which estimate the value of short action sequences ("chunks") rather than individual actions, speeding up value backups. However, extracting policies from chunked critics is challenging: the policy must output an entire action chunk open-loop, which can be sub-optimal in environments that require reactivity, and is also difficult to model, especially as the chunk length grows. Our key insight is to decouple the critic's chunk length from the policy's, allowing the policy to operate over shorter action chunks. We propose a novel algorithm that achieves this by optimizing the policy against a distilled critic for partial action chunks, constructed by optimistically backing up from the original chunked critic to approximate the maximum value achievable when a partial action chunk is extended to a complete one. This design retains the benefits of multi-step value propagation while sidestepping both the open-loop sub-optimality and the difficulty of learning action-chunking policies over long chunks. We evaluate our method on challenging, long-horizon offline goal-conditioned tasks and show that it reliably outperforms prior methods. Code: github.com/ColinQiyangLi/dqc.
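The optimistic backup behind the distilled partial-chunk critic can be sketched as follows. This is a minimal illustration under our own assumptions (sampled chunk completions and a max over samples as the optimistic aggregate); every name here is hypothetical, not the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch of optimistic value distillation: the target for a
# partial action chunk a_{1:k} approximates the maximum value the long-chunk
# critic assigns to any completion a_{k+1:h} of that chunk, here via a max
# over sampled completions. All names are illustrative, not the paper's API.

def distilled_target(q_long, state, partial_chunk, sample_completion,
                     num_samples=16):
    """Approximate max over a_{k+1:h} of Q_long(s, [a_{1:k}, a_{k+1:h}])
    by maximizing over sampled chunk completions."""
    values = []
    for _ in range(num_samples):
        completion = sample_completion(state, partial_chunk)
        full_chunk = np.concatenate([partial_chunk, completion])
        values.append(q_long(state, full_chunk))
    return max(values)

# Toy example: a quadratic "critic" that prefers action chunks near zero,
# with a random sampler completing an h=4 chunk from a k=2 partial chunk.
q_long = lambda s, chunk: -float(np.sum(chunk ** 2))
sample_completion = lambda s, partial: rng.normal(size=2)

t = distilled_target(q_long, state=None, partial_chunk=np.zeros(2),
                     sample_completion=sample_completion)
```

The short-chunk policy can then be optimized against `distilled_target` rather than the full long-chunk critic, keeping the multi-step backup while acting on short, reactive chunks.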