What Makes Value Learning Efficient in Residual Reinforcement Learning?

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Value learning in residual reinforcement learning is often hindered by cold-start and structural scale-mismatch issues, leading to poor sample efficiency. This work uncovers the mechanism behind this inefficiency and introduces DAWN, a simple yet effective approach that uses data from the frozen base policy as an implicit value anchor (an implicit warm-up) and applies critic normalization to restore representational sensitivity. Requiring only base-policy data and a normalization step, DAWN delivers marked improvements in value-learning efficiency across diverse benchmark tasks, policy architectures, and observation modalities, underscoring its generality and effectiveness.

📝 Abstract
Residual reinforcement learning (RL) enables stable online refinement of expressive pretrained policies by freezing the base and learning only bounded corrections. However, value learning in residual RL poses unique challenges that remain poorly understood. In this work, we identify two key bottlenecks: cold start pathology, where the critic lacks knowledge of the value landscape around the base policy, and structural scale mismatch, where the residual contribution is dwarfed by the base action. Through systematic investigation, we uncover the mechanisms underlying these bottlenecks, revealing that simple yet principled solutions suffice: base-policy transitions serve as an essential value anchor for implicit warmup, and critic normalization effectively restores representation sensitivity for discerning value differences. Based on these insights, we propose DAWN (Data-Anchored Warmup and Normalization), a minimal approach targeting efficient value learning in residual RL. By addressing these bottlenecks, DAWN demonstrates substantial efficiency gains across diverse benchmarks, policy architectures, and observation modalities.
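The abstract describes three ingredients: a bounded residual correction added to a frozen base action, replay data drawn from the base policy so the critic first learns the value landscape around it, and normalization of critic features. The sketch below illustrates these ideas in minimal numpy form; the policies, bound `scale`, and the toy dynamics are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def base_policy(obs):
    # Frozen pretrained policy (stand-in: a fixed linear map through tanh).
    return np.tanh(obs @ np.ones((obs.shape[-1], 2)) * 0.1)

def residual_policy(obs, scale=0.05):
    # Learned bounded correction: |residual| <= scale (hypothetical bound).
    raw = rng.normal(size=2)
    return scale * np.tanh(raw)

def compose_action(obs):
    # Residual RL: executed action = frozen base action + small correction,
    # so the residual contribution is dwarfed by the base action's scale.
    return base_policy(obs) + residual_policy(obs)

def layer_norm(x, eps=1e-5):
    # Critic feature normalization: zero-mean, unit-variance per vector,
    # restoring sensitivity to small residual-induced value differences.
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

# Implicit warm-up: seed the replay buffer with base-policy transitions so
# the critic is anchored on the value landscape around the base policy
# before any residual correction is trained.
buffer = []
obs = rng.normal(size=4)
for _ in range(100):
    a = base_policy(obs)
    next_obs = obs + 0.1 * rng.normal(size=4)
    reward = -np.linalg.norm(next_obs)  # toy reward for illustration
    buffer.append((obs, a, reward, next_obs))
    obs = next_obs
```

The key property the sketch preserves: the executed action never deviates from the base action by more than the residual bound, and normalized critic features have unit scale regardless of how large the base action's contribution is.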
Problem

Research questions and friction points this paper is trying to address.

residual reinforcement learning
value learning
cold start pathology
structural scale mismatch
critic
Innovation

Methods, ideas, or system contributions that make the work stand out.

Residual Reinforcement Learning
Value Learning
Cold Start Pathology
Critic Normalization
DAWN