🤖 AI Summary
This paper addresses training instability, loss of plasticity, and poor scalability in off-policy reinforcement learning for continuous control under high update-to-data (UTD) ratios (UTD > 1). To tackle these issues, the authors propose Weight-Normalized CrossQ (WN-CrossQ), which integrates weight normalization into the CrossQ architecture. This stabilizes the training dynamics, keeps the effective learning rate constant, and eliminates the need for drastic interventions such as network resets, while remaining fully compatible with the standard CrossQ training pipeline. WN-CrossQ is evaluated across 25 continuous-control tasks from the DeepMind Control Suite and MyoSuite—including the complex dog and humanoid environments—demonstrating substantial improvements in sample efficiency and competitive, state-of-the-art performance. Crucially, WN-CrossQ scales reliably to UTD ratios well beyond 1, combining strong empirical performance with robust scalability.
📝 Abstract
Reinforcement learning has achieved significant milestones, but sample efficiency remains a bottleneck for real-world applications. Recently, CrossQ has demonstrated state-of-the-art sample efficiency with a low update-to-data (UTD) ratio of 1. In this work, we explore CrossQ's scaling behavior with higher UTD ratios. We identify challenges in the training dynamics, which are amplified at higher UTD ratios. To address these, we integrate weight normalization into the CrossQ framework, a solution that stabilizes training, has been shown to prevent loss of plasticity, and keeps the effective learning rate constant. Our proposed approach reliably scales with increasing UTD ratios, achieving competitive performance across 25 challenging continuous control tasks on the DeepMind Control Suite and MyoSuite benchmarks, notably the complex dog and humanoid environments. This work eliminates the need for drastic interventions, such as network resets, and offers a simple yet robust pathway for improving sample efficiency and scalability in model-free reinforcement learning.
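The core mechanism described above—re-normalizing weight vectors so their norm stays fixed, which keeps the effective learning rate constant as training progresses—can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the row-wise unit-norm projection, its placement after each gradient step, and the layer shape are assumptions here.

```python
import numpy as np

def normalize_weights(w, eps=1e-8):
    """Project each row (one output unit) of a weight matrix onto the
    unit sphere. Assumed form of weight normalization for illustration;
    the paper applies it inside CrossQ's critic layers."""
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    return w / (norms + eps)

rng = np.random.default_rng(0)
w = normalize_weights(rng.normal(size=(4, 8)))

# Without re-normalization, repeated updates let ||w|| drift, shrinking
# the effective learning rate; projecting back after every step prevents
# that drift.
for _ in range(100):
    grad = rng.normal(size=w.shape)       # stand-in for a TD-loss gradient
    w = normalize_weights(w - 0.01 * grad)

print(np.allclose(np.linalg.norm(w, axis=1), 1.0))  # → True
```

Because the weight norm is pinned to 1, the ratio of update magnitude to weight magnitude—the effective learning rate—stays constant over training, which is the stabilizing property the abstract refers to.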